Michał Brygidyn is an AWS DevOps engineer with an AWS Certified Security certification and a security researcher with a CompTIA Security+ certification.

In this post on the PGS Software Blog, he shares some interesting tips about Cloud Security on AWS.

...the Cloud is safe.
The big platform leaders, namely Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure, all make great efforts to stay secure and meet various levels of certification. The problem often lies in the components and solutions built within.

12 tips

Before we begin, I want to explain that we’re talking primarily about AWS security issues here, but many of these apply to any Cloud platform, be it Azure or Google Cloud Platform. The technologies and individual solutions may change, but the overall principles do not! 

1. Avoid plaintext credentials in Environment Variables

If your application runs with debug mode active and an error prints a stack trace – along with your environment variables – to the screen, then your security is compromised. Start changing your passwords, rotate your keys and invalidate sessions.

Why? Because debug mode can expose this information in error pages, giving unauthorised people access to things they should never see. All it takes is a second – that's all an external scanner needs to copy your information and store it elsewhere. If your secrets get indexed in this way and you don't make any changes, such people could have access for months.
Also, as we’ll cover later, you should never use production credentials or accounts on non-production environments – especially those with debug enabled.

How to avoid this?
Start by hiding your non-production environments behind a VPN to limit access. You can also keep your credentials in a vault service – such as AWS Parameter Store, AWS Secrets Manager or HashiCorp Vault – and extract them dynamically.
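To make the vault approach concrete, here is a minimal sketch of fetching a secret at runtime from AWS Parameter Store rather than baking it into an environment variable. The parameter name `/myapp/db_password` is a made-up example, and the client is passed in so the function stays testable:

```python
def get_secret(ssm_client, name: str) -> str:
    """Fetch a SecureString parameter, decrypted, at call time –
    the secret never sits in an environment variable."""
    response = ssm_client.get_parameter(Name=name, WithDecryption=True)
    return response["Parameter"]["Value"]

# In a real application (assumes boto3 and configured AWS credentials):
#   import boto3
#   password = get_secret(boto3.client("ssm"), "/myapp/db_password")
```

Because the secret is resolved only when needed, a leaked stack trace of environment variables has nothing sensitive to show.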

2. Keep S3 buckets private

...it’s always important to ensure each of your buckets is private or public exactly as intended. This one might seem simple, but it’s important. Public access should be very limited – anything you want to stay secure and private should be set as such.
How to avoid this?

Simply enough, this involves ensuring every bucket is configured correctly. However, this is a problem of scale more than anything else – the more buckets and larger services you have, the more work it takes to check and ensure each is properly assigned.
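One way to enforce this at scale is S3's "block public access" settings. Below is a sketch of the configuration dictionary those settings take; the bucket name in the comment is illustrative:

```python
def public_access_block_config():
    """All four settings on: the safe default for a private bucket."""
    return {
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # limit cross-account access
    }

# With boto3 (assumes configured AWS credentials):
#   s3 = boto3.client("s3")
#   s3.put_public_access_block(
#       Bucket="my-private-bucket",
#       PublicAccessBlockConfiguration=public_access_block_config(),
#   )
```

Applying this uniformly across buckets turns a per-bucket audit into a one-line check.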

3. Don’t give more permissions than necessary

Giving wider permissions is a common issue in many companies – often because it’s easier to configure and ensures everyone can access what they need – but it brings a whole host of risks with it.
With unregulated access in one area, users can quickly gain access elsewhere, make changes where they are not supposed to – or even welcome to – and gain access to information and processes elsewhere in the system.

How to avoid this?

If you have a lot of people, it can be difficult to assign and regulate so many roles and their respective permissions. In fact, this is where a lot of the problems start, as companies give people blanket-wide permissions to save time. Instead, consider using groups and predefined roles with specific permissions for different teams and users.
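As a sketch of what "specific permissions" means in practice, here is a least-privilege IAM policy document granting read-only access to a single S3 bucket. The bucket name is whatever you pass in; the policy shape follows the standard IAM JSON format:

```python
import json

def read_only_bucket_policy(bucket: str) -> str:
    """Least privilege: read a single bucket, nothing else."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # the bucket itself
                f"arn:aws:s3:::{bucket}/*",    # objects inside it
            ],
        }],
    }
    return json.dumps(policy)
```

Attach a policy like this to a group, and everyone who needs read access to that bucket gets exactly that and no more.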


4. Keep your software updated

Now, this is an issue that has been around since long before the Cloud – it’s held true for on-site software, networks and essentially any digital solution. So, it shouldn’t come as a surprise to learn this is one of the most basic Cloud security risk management practices – yet also one of the most overlooked.
Of course, we mentioned earlier that the Cloud itself is very secure. Yet, while AWS certainly patches its own managed services and hosted operating systems, anything running inside your virtual machines is your responsibility to patch.
Ignoring updates is a gamble you will lose: patches are released as exploits are discovered. These exploits often become public knowledge and, if you’re not applying the updates, your service becomes a target for malicious activity.

How to avoid this?

This is another simple fix – update everything on the Cloud. Likewise, ensure your IT team is always up to date with all official patches and updates. AWS, for example, publishes security bulletins regularly: their bulletin page covers the latest Common Vulnerabilities and Exposures (CVE) – so make sure your IT team is checking it on a daily basis!

5. Never use shared keys

Shared keys are a nightmare for investigations. When more than one person has access to a certain key, it’s no easy task to determine responsibility. Individual keys for each person, on the other hand, make this much easier.
It’s also safer. Shared keys often result in former team members retaining access to your machines even after they’ve left the company – a far cry from secure. Shared keys are also much easier to leak, as people have to pass them on to new team members, and so on. An individual key has no reason to be shared so often; it isn’t sent out via email or written on a piece of paper!

How to solve this?

Never use shared keys! Always use keys assigned to particular individuals. And if you are currently using shared keys, disable them and assign individual access right now.

6. Rotate your keys regularly

Even if you have individual keys, they still need to be rotated on a regular basis. A key can still leak, so the longer your keys are active, the more risk you are building up.
Key rotation also helps encourage your teams to think differently. Rather than hardcoding their own access keys in various applications, they will use less permanent, rotating credentials that are harder to exploit.
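A simple audit can flag keys that have outlived a rotation window. The sketch below works on the key metadata shape returned by IAM's `list_access_keys` call; the 90-day window is an assumed policy, not an AWS default:

```python
from datetime import datetime, timedelta, timezone

def keys_due_for_rotation(keys, max_age_days=90, now=None):
    """Return IDs of access keys older than max_age_days.
    `keys` is a list of dicts shaped like the
    'AccessKeyMetadata' entries from iam.list_access_keys()."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in keys if k["CreateDate"] < cutoff]
```

Run something like this on a schedule and you get an early warning before stale keys pile up.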


7. Use roles instead of Access Keys

Plaintext access keys can be easily copied and stolen. So, what if we removed this risk entirely?

Defined roles, when paired with temporary, short-lived credentials from AWS Security Token Service (STS), serve the same function but are significantly safer. As with rotating keys, the temporary nature of STS credentials ensures that old logs or records can’t be turned into potential exploits.
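Here is a minimal sketch of the STS exchange: a role is traded for credentials that expire on their own. The role ARN and session name are placeholders, and the client is injected so the function stays testable:

```python
def assume_role(sts_client, role_arn, session_name="app-session"):
    """Exchange a role for short-lived credentials via STS."""
    resp = sts_client.assume_role(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        DurationSeconds=3600,  # credentials expire after one hour
    )
    return resp["Credentials"]

# With boto3 (assumes configured AWS credentials):
#   creds = assume_role(boto3.client("sts"),
#                       "arn:aws:iam::123456789012:role/app")
```

Even if these credentials appear in a leaked log, they are useless once the hour is up.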

8. Use billing alerts

Billing alerts are how Cloud providers like AWS help you stay notified of unexpected spending.

If someone gains access and starts running up your resource usage, a billing alert is often what lets you know something is wrong.

9. Use multi-factor authentication

Multi-factor authentication (MFA) is common in today’s world. Most applications use it, including numerous banks – so why wouldn’t you use it for your Cloud?
Single-factor authentication is a clear risk – it’s part of the reason why keys are so problematic. By requiring MFA in your AWS Identity and Access Management (IAM) policies, for every API call, you can keep your resources safe even if an access key gets leaked. Without every factor, nobody is getting in.
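A common pattern for enforcing this is an IAM policy that denies everything unless the caller authenticated with MFA. Below is a sketch of that policy document; it follows the standard IAM JSON format with the `aws:MultiFactorAuthPresent` condition key:

```python
import json

def deny_without_mfa_policy() -> str:
    """Deny every API call unless the session was opened with MFA."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # BoolIfExists also denies callers whose session
                # carries no MFA information at all
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }],
    }
    return json.dumps(policy)
```

Because an explicit Deny overrides any Allow in IAM, a leaked access key alone can no longer reach your resources.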

10. Remove root user Access Keys

Because root users have access to the entire account, including all resources and data, they should absolutely not have access keys.
Instead, use IAM users. Even with admin permissions, these cannot take account management actions. So, even if someone did gain access to such an account, the extent of any possible damage is much more limited.
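You can check for lingering root keys programmatically. The sketch below reads IAM's account summary, which reports whether the root user still has access keys; the client is injected so the function stays testable:

```python
def root_has_access_keys(iam_client) -> bool:
    """True if the root user still has access keys, according to
    the 'AccountAccessKeysPresent' field of the account summary."""
    summary = iam_client.get_account_summary()["SummaryMap"]
    return summary.get("AccountAccessKeysPresent", 0) > 0

# With boto3 (assumes configured AWS credentials):
#   if root_has_access_keys(boto3.client("iam")):
#       print("Root access keys found - delete them!")
```

A check like this fits naturally into a periodic compliance script.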

11. Use Separate Accounts for Production and Development Environments

Typically, development and production environments are kept clearly separate but, in the Cloud, they often share the same accounts out of ease. Similar to shared keys and unnecessary privileges, this introduces a whole host of problems.

(see the original post for some ideal scenarios)

12. Block public access to services like Elasticsearch, Jenkins etc.

Management tools and other managed services, such as Jenkins or Elasticsearch, can make the Cloud even easier to work with, but it’s vital that access to these tools is kept secure – in part because they often rely on standard, well-known ports. This is why we recommend limiting access exclusively to your office IP addresses.
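On AWS, that restriction is typically expressed as a security group ingress rule. The sketch below builds a rule that opens one port to a single office CIDR range only; the port and CIDR in the example are illustrative:

```python
def office_only_ingress(port: int, office_cidr: str):
    """An ingress rule (in the shape accepted by
    ec2.authorize_security_group_ingress) that opens one TCP port
    to a single office CIDR range only."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": office_cidr,
                      "Description": "office access only"}],
    }

# e.g. office_only_ingress(8080, "203.0.113.0/24") for a Jenkins UI
```

Everything outside that range is dropped at the network edge, before it ever reaches the tool's login page.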


So, how do you know if you’re safe?

The simple truth is that, unless you take the time to look, you don’t know if you’re safe. If you built your solutions carefully, avoided key exploits and left no backdoors available, then you have kept yourself safe. The real question is – have you tested your infrastructure to prove this?