In the previous part I tried to explain why running everything on virtual machines is expensive and maintenance-heavy. That was a bit of a teardown, so let's take off to the clouds and start the real journey.

This blog is not a great example when compared to our final target in the journey, but it gives some indication of how to split up an application using serverless concepts.

The demo application

The application we use as an example is a simple web-based company site. It consists of the following components:

  • Static website
  • PostgreSQL database
  • API to retrieve articles from the database

Let's assume this was all running inside a virtual machine and we would like to move it over to serverless.

The cloud architecture

When using cloud services we look at the individual application components and map each of them to a service. In many situations, large parts of the internal software components can be split up into smaller pieces connected by other services, but that is not the case for this simple application. So let's take a look at the proposed design:

Application Architecture

This does not look that bad, right? Let's go through the individual services:

CloudFront

This is the CDN of AWS, normally used to serve static content. It is completely serverless and has some very cool advantages:

  • globally available
  • extremely fast
  • adds a lot of security
  • 1 TB of data transfer out per month is always free

Read the explanation on AWS CloudFront

S3

S3 is the main storage service of AWS. This is where we store our static website components. Normally a static website is generated into HTML, CSS and JS files. One of the most interesting aspects of S3 in combination with CloudFront is that data transfer out from S3 to CloudFront is always free.
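
To make this concrete, here is a minimal sketch of the static-content side in AWS CDK (v2, Python). The stack class, construct IDs and the assumption that the generated site files are synced into the bucket separately are mine, not part of the original setup.

```python
from aws_cdk import (
    Stack,
    aws_cloudfront as cloudfront,
    aws_cloudfront_origins as origins,
    aws_s3 as s3,
)
from constructs import Construct


class StaticSiteStack(Stack):
    """Hypothetical stack: private S3 bucket served through CloudFront."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Private bucket holding the generated HTML, CSS and JS files.
        site_bucket = s3.Bucket(self, "SiteBucket")

        # CloudFront distribution with the bucket as its origin; the data
        # transfer from S3 to CloudFront is free, viewers are served from
        # edge locations around the world.
        cloudfront.Distribution(
            self,
            "SiteDistribution",
            default_behavior=cloudfront.BehaviorOptions(
                origin=origins.S3Origin(site_bucket),
                viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
            ),
        )
```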

API Gateway

This is the API management service. Here you can define APIs with methods. You can easily add authentication to your APIs, and one of the most common integrations is with Lambda. However, you can connect almost anything you like through the different integration types.
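
As an illustration, a minimal CDK (v2, Python) sketch of this API layer could look like the snippet below. The function name, handler path and the IAM authorization choice are assumptions of mine, not prescriptions.

```python
from aws_cdk import (
    Stack,
    aws_apigateway as apigw,
    aws_lambda as _lambda,
)
from constructs import Construct


class ArticlesApiStack(Stack):
    """Hypothetical stack: REST API in front of the articles Lambda."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The Lambda function holding the application logic (next section).
        articles_fn = _lambda.Function(
            self,
            "ArticlesFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="app.handler",
            code=_lambda.Code.from_asset("lambda"),
        )

        # REST API with a single GET /articles method, proxied to the Lambda.
        api = apigw.RestApi(self, "CompanyApi")
        articles = api.root.add_resource("articles")
        articles.add_method(
            "GET",
            apigw.LambdaIntegration(articles_fn),
            authorization_type=apigw.AuthorizationType.IAM,  # one way to add auth
        )
```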

Lambda

This is the actual serverless compute engine. Here we put the application logic that retrieves articles from our database. When you come from an application running in a virtual machine you do have to change the input/output handling a little, but the rest can stay the same (although this is probably not the best way forward in the long run).
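
A sketch of what such a handler might look like, assuming an articles table, the pg8000 driver packaged with the function, and connection details in environment variables; all of that is my assumption, not the original code.

```python
import json
import os

import pg8000.native  # pure-Python PostgreSQL driver, easy to bundle for Lambda


def handler(event, context):
    # The "input/output change": instead of listening on an HTTP port, the
    # function receives an event dict from API Gateway and returns a dict
    # with statusCode/headers/body.
    conn = pg8000.native.Connection(
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        host=os.environ["DB_HOST"],
        database=os.environ["DB_NAME"],
    )
    try:
        rows = conn.run("SELECT id, title, body FROM articles ORDER BY id")
    finally:
        conn.close()

    articles = [{"id": r[0], "title": r[1], "body": r[2]} for r in rows]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(articles),
    }
```

Opening a new database connection on every invocation is fine for a sketch, but with any real traffic you would want connection reuse or something like RDS Proxy in between.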

RDS

RDS is the Relational Database Service from AWS. The service can “talk” multiple database engines, such as MySQL, PostgreSQL, Oracle and Microsoft SQL Server.

This service is highly configurable, so do read up on it before you start using it.

First wins

This design will work and you will be able to get your application running inside Lambda. You still have the same database connection and you can serve your frontend through CloudFront.

This simple design already gives you the following advantages when compared to a VM-based solution:

However, this solution is also probably a bit more expensive!

Let's go through them.

No bottlenecks

Every component in the design scales to handle the heaviest workloads. However, for Lambda and Aurora RDS you might want to configure some settings.

Lambda

You might see some delay on the first page load due to Lambda cold starts and concurrency limits. To read more about the configuration settings and get more insight, go to: lambda-concurrency.
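
For reference, both knobs can be set with a couple of boto3 calls. The function name, alias and numbers below are placeholders; the right values depend entirely on your traffic.

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserved concurrency: cap how many copies of the function can run at once.
lambda_client.put_function_concurrency(
    FunctionName="articles-function",
    ReservedConcurrentExecutions=10,
)

# Provisioned concurrency: keep a few instances warm to soften cold starts.
# Note that this is billed per hour, unlike plain on-demand Lambda.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="articles-function",
    Qualifier="live",  # requires a published version or alias
    ProvisionedConcurrentExecutions=2,
)
```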

RDS

As already said, RDS is highly configurable. In the case of PostgreSQL we have two major flavors:

  1. Managed instances
  2. Aurora Serverless

In the first case you are actually still reserving a dedicated VM, but a managed one. You still need to invest effort in sizing it correctly and stay alert, because the instance can hit bottlenecks.

The second case is more in line with our operating model. You can still tweak the scaling range and reserved capacity, but it is a lot more maintenance-friendly.
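
As an example of what the serverless flavor looks like in practice, here is a hedged boto3 sketch of creating an Aurora Serverless v2 PostgreSQL cluster. The identifiers, password handling and capacity range are placeholders, not a recommendation.

```python
import boto3

rds = boto3.client("rds")

# Aurora Serverless v2: scaling is expressed as a capacity range in ACUs.
rds.create_db_cluster(
    DBClusterIdentifier="company-site-db",
    Engine="aurora-postgresql",
    MasterUsername="postgres",
    MasterUserPassword="change-me",  # use Secrets Manager in a real setup
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,  # floor you keep paying for, per hour
        "MaxCapacity": 2.0,  # ceiling the database can scale up to
    },
)

# Serverless v2 still needs an instance attached to the cluster, but of the
# special "db.serverless" class instead of a fixed instance size.
rds.create_db_instance(
    DBInstanceIdentifier="company-site-db-1",
    DBClusterIdentifier="company-site-db",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```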

No maintenance

The biggest win here is that you can basically trust that this setup will work for years without looking at it. You do not have to be concerned about patching or keeping things alive.

This will run for years without worries

The serverless concept makes sure you do not have to worry about the correct version of Linux or all the supporting software needed to run your runtime. This is all in the hands of the cloud provider.

You do need to check whether you can run on a new runtime or move to a new database version. Also, if you have software dependencies in your application code they might need (security) updates. However, these are upgrades YOU actually care about.

Expensive!

OK, the title of this blog is “a not so well designed cloud application”. This is primarily due to the costs. Basically all of the cost comes from our PostgreSQL database. Using a relational database in your design means having an active component, and active components are priced per time unit.

This is probably more expensive than the original VM that was used to host the entire application

The minimum price for Aurora Serverless is around $0.07/hour, and $0.07/hour × 24 hours × 30 days comes to more than $50 per month.

You can go back to a managed database instance, which ends up at a minimum cost of roughly $12 per month. However, you will probably hit bottlenecks there, and the only thing you can do then is get a bigger instance! RDS is famous for being the most expensive part of a cloud bill.

Conclusion

This application is mainly a way to demonstrate how you can execute a VM exit and focus on serverless components in the cloud. Expect a better application in one of the upcoming blogs. But first we will dive a bit deeper into the serverless concept and how it works.

End note

Thanks for reading, I hope it was useful. Please drop me a note on LinkedIn if you have additional questions or remarks.

~ Joost van der Waal (Cloud guru)