As I do more and more in the cloud, and people start to prick up their ears about it, we keep coming back to rapid prototyping, which is a fancy way of telling a Basis person “We need you to work really hard and pull together a landscape of these components in parallel.”

The next conversation is about what data people want in their prototyping environment. For some, vanilla is good enough; for others, IDES ticks the boxes; but for many you just cannot beat customer data. Especially for blueprints, customers love data they can relate to – their own!

Now we’re into a whole other realm, because the Cloud instances have just gone from theoretical to operational, with all the hassles that come with that.

Top of that list is Security – people do not trust the Cloud yet, and with good reason: they do not know whom to blame for any data breaches. If you wanted to be uncharitable, think of the AWS Cloud as Ryanair – they’ll get you there, but remember what you paid to get there before you complain.

***Update***

There has been an independent review of security in the Cloud, found here, which states that the AWS Cloud has passed the SAS 70 Type II audit. That should settle some security nerves.

AWS Security

Security in the cloud is multi-layered – some of the layers we have encountered before and some we have not.

Starting at the lowest level, the AWS account has two keys associated with it: a public key and a private (secret) key. These keys form the basis of all administrative functions with AWS – with them you can order new services – so it is vital to keep them within a very small group of trusted administrators. This link details how these keys are used.
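To make the role of that key pair concrete, here is a minimal sketch using the modern Python boto3 SDK (which postdates the ElasticFox/console workflow this post describes); the region and key values are placeholders, not real credentials.

```python
import boto3

# Anyone holding the account's key pair can construct a client like this
# and administer the account. All values below are placeholders.
ec2 = boto3.client(
    "ec2",
    region_name="eu-west-1",                  # hypothetical region
    aws_access_key_id="AKIAEXAMPLEKEY",       # the "public" half of the pair
    aws_secret_access_key="EXAMPLESECRET",    # the secret key - guard this closely
)

# With these credentials you can list, launch or terminate instances,
# i.e. order (and pay for) new services on the account.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```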

The next level up is the Windows operating system security – there is no difference here from a normal instance. Accounts and password policies should be created in line with best practice or client requirements, and particular attention should be paid to compliance requirements. Although these servers are effectively visible from the Internet due to the use of public IP addresses and DNS entries, there are strict firewall rules, which I will go into in more detail below.

Finally, the AWS firewall protects every instance started within the AWS cloud. It is centred around the concept of Security Groups and the services available to be used by each group. For people used to object/group-level privilege assignment, it should be a familiar concept. More details can be found here.
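As a sketch of how a group comes into being (boto3 again; the group name and description are mine, not from the post):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

# Create a security group for the prototype landscape; name and description
# are illustrative only.
group = ec2.create_security_group(
    GroupName="sap-prototype",
    Description="SAP rapid-prototyping instances",
)
print("Created security group", group["GroupId"])

# Instances are then assigned to this group, and every inbound rule added to
# the group applies to all of them at once.
```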

Administration of the Security Groups can be done from either ElasticFox or the AWS Console, as you can see in the screenshots below.

ElasticFox

An example of starting up an instance and assigning the security groups at startup

AWS Console
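Either tool does the job; for completeness, here is a minimal sketch of the same startup-time assignment done programmatically with boto3 (launching an instance and placing it in a security group at launch). The AMI ID, instance type and region are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

# Launch a single instance and place it in the security group at startup,
# mirroring what the ElasticFox / AWS Console screenshots show.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI
    InstanceType="m5.large",            # placeholder instance type
    MinCount=1,
    MaxCount=1,
    SecurityGroups=["sap-prototype"],   # group created earlier
)
print("Launched", response["Instances"][0]["InstanceId"])
```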

Application & Database Security

So now that we have secured the infrastructure for our instances, we have to make sure that the application and database layers are protected.

Using the AWS Security Group rules we can restrict access on specific port ranges to specific IP address ranges, which tightens access further.
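For example, a sketch of locking the SAP-facing ports down to a single office address range (boto3 again; the CIDR, group name and port conventions are placeholders/assumptions – adjust to your own instance numbers):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

OFFICE_CIDR = "203.0.113.0/24"   # placeholder for your company's public range

# Only allow the SAP-facing ports, and only from the office address range.
# Conventionally 32NN is the SAPGui/dispatcher port and 443NN the HTTPS port
# via the ICM, where NN is the instance number.
ec2.authorize_security_group_ingress(
    GroupName="sap-prototype",
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 3200, "ToPort": 3299,
         "IpRanges": [{"CidrIp": OFFICE_CIDR}]},
        {"IpProtocol": "tcp", "FromPort": 44300, "ToPort": 44399,
         "IpRanges": [{"CidrIp": OFFICE_CIDR}]},
    ],
)
```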

SAP-related traffic

We need to pay attention to the traffic being sent to and from the Cloud. Within SAP applications we have two main sources of traffic – SAPGui and HTTP – and both are interceptable once they leave the outer boundary of your company firewall, so we need to take measures to ensure they are encrypted.

The most common way to encrypt SAPGui traffic is to use the SNC (Secure Network Communications) cryptographic libraries, which encrypt the traffic using an SAP-standard mechanism. SNC is supported by SAP and works seamlessly with the application.

It does impose a small overhead on each side of the traffic flow to encode and decode the traffic, but I think this is a small price to pay for security.
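To give a feel for what switching on SNC involves on the server side, here is a rough sketch of the typical instance profile parameters – shown as a Python dict purely for illustration. Paths, names and the exact crypto library are placeholders and depend on the product you deploy.

```python
# SNC lives in the SAP instance profile rather than in code; these are the
# usual parameters, with placeholder values.
snc_profile_parameters = {
    "snc/enable": "1",
    "snc/gssapi_lib": "/usr/sap/<SID>/SYS/exe/run/libsapcrypto.so",  # crypto library path
    "snc/identity/as": "p:CN=<SID>, OU=Basis, O=Example, C=GB",      # server's SNC name
    "snc/data_protection/min": "3",   # 3 = privacy, i.e. full encryption
    "snc/data_protection/max": "3",
    "snc/accept_insecure_gui": "0",   # refuse unencrypted SAPGui connections
}
```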

Standard HTTP traffic is very easily ‘sniffed’ from the network; the most common way to secure HTTP communication is to use SSL. Again, this is completely transparent to the application and is supported by SAP.

The only thing to be aware of is that SSL certificates are issued against domains – and because SSL certs can be expensive, it might be better to use a company-owned or purchased domain (rather than a DynDNS.org one).

Of course you could just keep getting trial certificates from SAP, if you are just testing a Cloud system.
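The HTTPS side is again configuration rather than code: the ICM needs an HTTPS port, and the certificate for your domain is imported into the SSL Server PSE via transaction STRUST. A rough sketch, with a placeholder port, shown as a Python dict purely for illustration:

```python
# ICM profile parameter enabling HTTPS; 443NN is the usual convention,
# NN being the instance number (placeholder below uses instance 00).
https_profile_parameters = {
    "icm/server_port_1": "PROT=HTTPS,PORT=44300",
}
```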

The diagram below shows the traffic flows and the ways to secure them.

If you are eagle-eyed and quick of thought, you might be asking, “Traffic to and from the Cloud is fine, but what about traffic within the Cloud?”

Communication between instances within the cloud is like communicating within a DMZ: the servers are yours and they are subject to your firewall rules, but you don’t quite know who else is in there with you.

The problem lies with the dynamically assigned private IP addresses: whereas we use known Elastic IPs to provide public access, internally we would want to use the cloud’s internal IPs to reduce the profile of sniffable traffic – but, being dynamically assigned, those addresses are hard to pin down in advance.

At present I do not have a complete answer on the best way to work this, but my immediate thoughts are:

1. Allocate a new set of Elastic IPs for internal use, then use a software VPN connection, something like Hamachi. This would provide software-level encryption between the instances, but would it scale? Only a PVT will draw that out.

2. Use SNC and HTTPS between instances with the already public Elastic IP addresses, or allocate new Elastic IPs and do not register a DNS name (the firewall side of this option is sketched below).
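If option 2 were the route taken, the firewall side might look like the sketch below (boto3 again; the addresses, ports and group name are placeholders). The encryption itself still comes from SNC and HTTPS on the SAP side – the rules just limit those ports to the landscape’s own Elastic IPs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

# Elastic IPs already allocated to the other instances in the landscape
# (placeholders - substitute your own addresses).
LANDSCAPE_ELASTIC_IPS = ["198.51.100.10", "198.51.100.11"]

# Open the SNC-protected dispatcher/gateway ports and the HTTPS ports only to
# the landscape's own Elastic IPs, so inter-instance traffic is both
# encrypted and firewalled down to known addresses.
for ip in LANDSCAPE_ELASTIC_IPS:
    ec2.authorize_security_group_ingress(
        GroupName="sap-prototype",
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 3200, "ToPort": 3399,
             "IpRanges": [{"CidrIp": f"{ip}/32"}]},
            {"IpProtocol": "tcp", "FromPort": 44300, "ToPort": 44399,
             "IpRanges": [{"CidrIp": f"{ip}/32"}]},
        ],
    )
```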

Database Traffic

I do not know enough about all the database management tools to state whether they encrypt their traffic to and from their endpoints – for that reason, I do not advise running them outside the cloud to access resources in the cloud.

If, like me, you are running a Windows server in the cloud, deploy all your database management tools there and run as much as possible locally.

Administration considerations

Although I have not deployed this architecture operationally, I think the use of a jump-off server is vital, for the following reasons:

1. Using the jump-off server, you can restrict access to tools like ElasticFox by installing them only on this server. This means that your administration teams can control the instances without having the private AWS key sitting on their own desktops.

2. Database management tools can be deployed here and can access instances via their dynamic internal IP addresses, enabling your admin teams to work with databases while keeping the communications internal to the cloud.

3. Monitoring or control scripts can be run from this instance (see the sketch after this list).
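As an illustration of point 3, a minimal monitoring sketch (boto3, placeholder region) of the kind that could be scheduled on the jump-off server:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

# Report the state and health-check status of every instance, including
# stopped ones - the kind of check you might schedule on the jump-off server.
statuses = ec2.describe_instance_status(IncludeAllInstances=True)
for status in statuses["InstanceStatuses"]:
    print(
        status["InstanceId"],
        status["InstanceState"]["Name"],
        status["InstanceStatus"]["Status"],
        status["SystemStatus"]["Status"],
    )
```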

Well I think that is enough for now on security – of course there are volumes that you can write about this stuff, but I am not a security person.

There will also be other posts on this subject; for now, though, I think I have bored you enough.
