Basic scalable architecture with Rails and Amazon AWS cloud

Well, you could just use Elastic Beanstalk, which will handle auto scaling and all the magic you need with a couple of clicks. But here I am going to lay out a very basic scalable architecture (on-demand scaling) that you can build with Rails and the AWS cloud. This is aimed at people who just need to understand the basic concepts and the tools they can use. Strictly for this post, scalability means adding more hardware resources on demand so that performance keeps up.

The purpose of this post is to describe a basic code architecture and hardware setup, after which you only need to add more resources (no code changes required) when there is a spike in users.

Database Server: We will create two database servers. The first will be our master database, and all write (insert, update) operations will be performed on it. The second will be a slave (read replica) that serves only read operations. You can achieve this by going to the Amazon RDS console, creating a database instance of your choice, and then creating a Read Replica of it from its settings. You don't have to do anything else; Amazon RDS handles the replication for you.
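If you prefer scripting this instead of clicking through the console, the AWS SDK for Ruby exposes the same operation. This is just a minimal sketch; the region and the instance identifiers `my-app-master` / `my-app-read-replica` are placeholders for your own values:

```ruby
require 'aws-sdk-rds' # gem 'aws-sdk-rds'

rds = Aws::RDS::Client.new(region: 'us-east-1') # placeholder region

# Create a read replica of the existing master instance.
# Both identifiers below are placeholders for your own RDS instances.
rds.create_db_instance_read_replica(
  db_instance_identifier: 'my-app-read-replica',
  source_db_instance_identifier: 'my-app-master'
)
```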

Rails Database Code: To work with a master/slave setup (among many other options), you do not have to write your code any differently than you normally do. You can use the Octopus gem, which handles the read/write splitting for you. Just read how to configure it and that's all you need; it works fine with the database servers we created in the step above. A rough idea of the configuration is shown below.
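As a minimal sketch of what that configuration looks like, here is a `config/shards.yml` for Octopus. The host, database name and credentials are placeholders for your own RDS endpoints, and you should check the gem's README for the exact keys it supports:

```yaml
# config/shards.yml -- placeholder hosts and credentials
octopus:
  environments:
    - production
  replicated: true            # writes go to the master, reads to the slaves below
  production:
    slave1:
      adapter: mysql2
      host: my-app-read-replica.xxxxxxxx.us-east-1.rds.amazonaws.com
      database: myapp_production
      username: myapp
      password: supersecret   # placeholder
```

The master connection stays in your regular `config/database.yml`; Octopus then sends reads to the slave(s) and writes to the master.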

Rails Application Code: Nothing special is needed here; just write the code you normally would.

Rails UI Code: Nothing special is needed here either; write the code as you normally do.

Web Server: Create a web server using an EC2 instance of your choice. You can set up Rails and all your project dependencies on it yourself, or use a pre-existing stack like Bitnami. Deploy your code and test it on the server. If everything works, create an image (snapshot) of the EC2 instance and use it to launch another EC2 instance. This new instance is an exact copy of our web server with the application already running on it. Now we have two web servers, and we can launch as many as we want.
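This "image and clone" step can also be scripted. A rough sketch with the AWS SDK for Ruby; the instance ID, AMI name and instance type are placeholders:

```ruby
require 'aws-sdk-ec2' # gem 'aws-sdk-ec2'

ec2 = Aws::EC2::Client.new(region: 'us-east-1') # placeholder region

# Create an AMI (image) from the configured web server instance.
image = ec2.create_image(
  instance_id: 'i-0123456789abcdef0', # placeholder: your web server
  name: 'rails-web-server-v1'
)

# (in practice, wait for the image to reach the 'available' state first)

# Launch one more web server from that image.
ec2.run_instances(
  image_id: image.image_id,
  instance_type: 't2.micro',          # placeholder size
  min_count: 1,
  max_count: 1
)
```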

Load Balancer: Now we need to add a load balancer so that incoming traffic can be distributed across our web servers. From the Amazon console you can create a load balancer and add the existing EC2 instances (web servers) to it. You can test it by hitting the public DNS of the load balancer; you will see traffic being sent to both of our web servers.
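If you want to do the same from code, the classic load balancer API lets you attach instances directly. A hedged sketch; the load balancer name and instance IDs are placeholders:

```ruby
require 'aws-sdk-elasticloadbalancing' # gem 'aws-sdk-elasticloadbalancing' (classic ELB)

elb = Aws::ElasticLoadBalancing::Client.new(region: 'us-east-1') # placeholder region

# Attach both web servers to an existing classic load balancer.
elb.register_instances_with_load_balancer(
  load_balancer_name: 'rails-web-lb',          # placeholder name
  instances: [
    { instance_id: 'i-0123456789abcdef0' },    # placeholder IDs
    { instance_id: 'i-0fedcba9876543210' }
  ]
)
```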

Domain name: The last step is to configure your domain so that traffic for it is sent to your load balancer. You can use Amazon Route 53 to do that; it is fairly self-explanatory.
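For reference, pointing the domain at the load balancer boils down to an alias record in Route 53. A minimal sketch; the hosted zone IDs, domain and ELB DNS name are placeholders for your own:

```ruby
require 'aws-sdk-route53' # gem 'aws-sdk-route53'

r53 = Aws::Route53::Client.new

# Point example.com (placeholder) at the load balancer with an alias A record.
r53.change_resource_record_sets(
  hosted_zone_id: 'Z111111111111', # placeholder: your hosted zone
  change_batch: {
    changes: [{
      action: 'UPSERT',
      resource_record_set: {
        name: 'example.com.',
        type: 'A',
        alias_target: {
          hosted_zone_id: 'Z35SXDOTRQ7X7K',  # placeholder: the ELB's zone ID
          dns_name: 'rails-web-lb-1234567890.us-east-1.elb.amazonaws.com', # placeholder
          evaluate_target_health: false
        }
      }
    }]
  }
)
```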

Now we have successfully set up a scalable architecture that can take care of your needs when there is a spike in users. You can launch as many web servers as you need and add them to your load balancer, and you can launch as many database read replicas as you need and simply configure them in the Octopus YAML file (see the snippet below).
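For example, adding a second read replica is just another entry under the production section of the `config/shards.yml` sketched earlier (the hostname is again a placeholder):

```yaml
# fragment of config/shards.yml -- goes under octopus > production, next to slave1
slave2:
  adapter: mysql2
  host: my-app-read-replica-2.xxxxxxxx.us-east-1.rds.amazonaws.com
  database: myapp_production
```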

In the next post we will talk about background jobs and how they can be handled in a scalable architecture. Hope you liked the article. 🙂
