Startup Architecture: Scale up LAMP architecture using HAProxy, PHP, Redis, Memcache and MySQL to Handle 1 Billion Requests A Week!!

Most startups today build their products on the LAMP stack. We can create a fairly simple architecture based on HAProxy, PHP, Redis, Memcache and MySQL that seamlessly handles roughly 1 billion requests every week.
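At the edge, HAProxy load-balances incoming HTTP traffic across the three application nodes. A minimal configuration sketch under assumed hostnames, IPs and health-check path (none of these specifics are from the article):

```
frontend http-in
    bind *:80
    default_backend app_nodes

backend app_nodes
    balance roundrobin
    # Assumed health-check endpoint exposed by the PHP app
    option httpchk GET /ping
    server app1 10.0.0.1:80 check
    server app2 10.0.0.2:80 check
    server app3 10.0.0.3:80 check
```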

Stats:

  • Servers:

    • 3x application nodes

    • 2x MySQL + 1x for backup

    • 2x memcache nodes

    • 2x Redis

  • Application:

    • Application handles 1,000,000,000 requests every week

    • Average response time - 30 milliseconds

    • Varnish - more than 12,000 req/s (achieved during stress test)

  • Data store:

    • Redis - 160,000,000 records, 100 GB of data (our primary data store!)

    • MySQL - 300,000,000 records - 300 GB (third cache layer)

  • Logical Architecture: (diagram not included)

  • Application Architecture: (diagram not included)

Scalability:

The database is usually the hardest bottleneck in an application. So far we haven't needed any scaling-out operations - to date we've scaled vertically, moving our Redis and MySQL instances to bigger boxes. There is still room for that: Redis, for example, runs on a server with 128 GB of memory, and it could be migrated to a node with 256 GB. Of course, such heavy boxes also bring operational disadvantages - snapshots take longer, and so does simply bringing the Redis server back up.

After scaling up (vertically) comes scaling out (horizontally). Fortunately, our data is structured in a way that makes it easy to shard:

We have 4 “heavy” types of records in Redis. They can be sharded across 4 servers by record type. We avoid hash-based partitioning in favor of dividing the data by record type; that way we can still run MGET, which is always performed on keys of a single type.
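A minimal sketch of this type-based routing (the record-type names and shard addresses below are illustrative assumptions, not from the article). Because every key of a given type lives on one node, an MGET over keys of one type never crosses servers:

```python
# Map each "heavy" record type to its own Redis node (addresses assumed).
SHARDS = {
    "user":    "redis-1:6379",
    "session": "redis-2:6379",
    "feed":    "redis-3:6379",
    "counter": "redis-4:6379",
}

def shard_for(key: str) -> str:
    """Route a key like 'user:123' to the shard owning its record type."""
    record_type = key.split(":", 1)[0]
    return SHARDS[record_type]

def group_mget(keys):
    """Group keys by shard; with type-based sharding, an MGET over keys
    of one type always lands in a single group (a single server)."""
    by_shard = {}
    for key in keys:
        by_shard.setdefault(shard_for(key), []).append(key)
    return by_shard
```

Contrast with hash partitioning: hashing would spread `user:1` and `user:2` across different nodes, forcing a multi-server fan-out for the same MGET.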

In MySQL, tables are structured so that some of them can easily be migrated to a different server - again partitioned by record type (i.e., per table).
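The same idea can be expressed as a table-to-server map in the application layer; moving a group of tables to another MySQL server then becomes a one-line configuration change. The table names and DSNs below are assumptions for illustration:

```python
# Illustrative table-to-server map (table names and DSNs are assumed).
TABLE_TO_DSN = {
    "users":    "mysql://db1.internal/app",
    "products": "mysql://db1.internal/app",
    "events":   "mysql://db2.internal/app",
}

def connection_dsn(table: str) -> str:
    """Return the DSN of the server currently holding the given table."""
    return TABLE_TO_DSN[table]
```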

Once the possibilities of partitioning by data type are exhausted, we'll move on to hash-based sharding :-)
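A sketch of that fallback, assuming a single record type outgrows one node and its keys are spread across several (node addresses assumed; note this gives up single-server MGET for that type):

```python
import hashlib

# Nodes assumed to jointly hold one record type's keys.
USER_NODES = ["redis-1a:6379", "redis-1b:6379", "redis-1c:6379"]

def node_for(key: str, nodes=USER_NODES) -> str:
    """Pick a node by hashing the key; deterministic for a fixed node list."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]
```

Simple modulo hashing reshuffles most keys when the node list changes; a production setup would more likely use consistent hashing to limit that churn.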
