Athoni: Our Architecture on AWS - How to Auto Scale In/Out Services and Optimize Infrastructure Costs?

Author
Jack Nguyen

Athoni on EKS: The Technical Aspects

Due to security concerns, I can't share the full details of this architecture, but I can give you a high-level overview of the system.

(Image: high-level overview of the Athoni architecture on AWS/EKS)

Of course, many services in our actual system are not shown here.

It's pretty much the setup behind our game matchmaking system. (Read here)

How do you auto scale your services in/out on EKS and optimize infrastructure costs?

(Image: auto-scaling demo setup)

Video Demo: Scale In/Out (Load Test with a Locust Cluster)

Disclaimer: Due to security restrictions, I cannot show the actual system. Instead, I'll demonstrate the auto-scaling process using a simplified version based on another project (FindYourJob.tech).

Steps to reproduce (as shown in the video):

  1. Set up a Locust cluster with two Locust workers and one Locust master (on a separate EKS cluster).
  2. Run the Locust cluster to simulate 300 users accessing the target website, spawning 5 users per second (see the sketch after this list).
  3. Watch the auto-scaling process using monitoring tools.
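
For step 2, a minimal locustfile along these lines would do the job. The host, endpoints, and wait times below are illustrative assumptions, not the exact test used in the video:

```python
# locustfile.py - minimal sketch of the load test described above.
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    # Target site from the demo; the exact URL and endpoints are assumptions.
    host = "https://findyourjob.tech"

    # Each simulated user pauses 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)
    def browse_home(self):
        # Most traffic hits the landing page.
        self.client.get("/")

    @task(1)
    def search_jobs(self):
        # Hypothetical endpoint; adjust to the routes of the site under test.
        self.client.get("/jobs?query=devops")
```

With the master started via `locust --master` and the workers via `locust --worker --master-host=<master-address>`, a run with roughly the numbers above can be launched headless with `locust --headless -u 300 -r 5`, or from the web UI with 300 users and a spawn rate of 5 per second.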

Sorry for the loss of audio in the video.

Scale Out (Auto):


Scale In (Auto):
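
For context on what drives the behaviour above: pod-level scaling on EKS is typically handled by a HorizontalPodAutoscaler, paired with a node autoscaler (such as the Kubernetes Cluster Autoscaler) so that EC2 nodes are released when pods scale in, which is where the cost savings come from. Below is a minimal sketch that creates a CPU-based HPA with the official Kubernetes Python client; the deployment name, namespace, replica bounds, and CPU target are assumptions, not our production values.

```python
# Sketch: create a CPU-based HorizontalPodAutoscaler with the official
# Kubernetes Python client. Names, namespace, and thresholds are
# illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,   # floor during quiet periods
        max_replicas=10,  # ceiling during load spikes
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=70
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Once the load test pushes average CPU above the target, the HPA adds replicas (scale out); when traffic drops again, it removes them (scale in) and the node autoscaler can drain and terminate the now-idle EC2 instances.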

CI/CD Pipeline

(Image: CI/CD pipeline overview)

It's pretty easy to understand, right?

Step-by-step (Deployment Process)

  1. The developer pushes code to GitHub.
  2. The push event triggers a GitHub Actions workflow.
  3. A self-hosted runner (on EC2) builds the Docker image and pushes it to Google Container Registry (GCR).
  4. GitHub Actions, which has access to the EKS cluster, instructs EKS to update the deployment with the new image.
  5. EKS pulls the new image from GCR and rolls out the updated service (a rough sketch of steps 3-5 follows below).
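
To make steps 3-5 concrete, here is a rough sketch of what the runner's job boils down to. In the real pipeline this logic lives in a GitHub Actions workflow on the self-hosted EC2 runner, and the image, deployment, and namespace names below are assumptions:

```python
# Sketch of steps 3-5: build and push the image, then point the EKS
# deployment at the new tag. Names are illustrative assumptions.
import subprocess

from kubernetes import client, config

IMAGE = "gcr.io/my-project/web:abc1234"  # hypothetical GCR image and tag

# Step 3: build the Docker image and push it to GCR.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
subprocess.run(["docker", "push", IMAGE], check=True)

# Step 4: tell EKS to roll the deployment onto the new image
# (equivalent to `kubectl set image deployment/web web=<IMAGE>`).
config.load_kube_config()  # kubeconfig pointing at the EKS cluster
apps = client.AppsV1Api()
patch = {
    "spec": {
        "template": {
            "spec": {"containers": [{"name": "web", "image": IMAGE}]}
        }
    }
}
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)

# Step 5 happens inside the cluster: EKS pulls the new image from GCR
# and performs a rolling update of the running pods.
```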