I was reading Understanding Distributed Systems. It's a pretty short book that goes through a lot of the concepts needed to understand the complexity of developing a distributed system. I highly recommend it to every engineer, just to get a feel for the character of the distributed systems topic.

One of the chapters suggested reading Back of The Envelope Estimation Hacks. I learned about the term "back of the envelope estimation" a while back. You can read more here.

Today I learned about a few useful hacks you can use to estimate stuff during system design:

  1. Know your numbers. It's about knowing the relative costs of common operations, like accessing a register vs. accessing main memory. You can find more details in the TIL linked above.
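As a minimal sketch, here is a small lookup table of rough, commonly cited latency figures (orders of magnitude only, not exact values; the specific entries are my own selection, not from the book):

```python
# Rough, commonly cited latency numbers (orders of magnitude, not exact).
LATENCY_NS = {
    "L1 cache reference": 1,
    "main memory reference": 100,
    "read 1 MB sequentially from memory": 10_000,
    "round trip within the same datacenter": 500_000,
    "disk seek": 10_000_000,
}

for op, ns in LATENCY_NS.items():
    print(f"{op}: {ns:,} ns")
```

The point is not the exact values but the ratios: main memory is ~100x slower than L1 cache, and a disk seek is ~100,000x slower than a memory reference.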

  2. Approximate with powers. A simple hack that makes calculations easier: when you calculate something, pick the power of 10 closest to the number you are using. This way you can calculate the result quickly, and accuracy is not important at this stage. For example, we have 800K users watching video at 12Mbps, and machines with an egress of 1Gbps. How many machines do we need to serve this content?

    800K users ~ 10^6 users
    12Mbps     ~ 10^7bps
    1Gbps      ~ 10^9bps
    (10^6 * 10^7) / 10^9 = 10^13 / 10^9 = 10^4 = 10K [machines]

    Thus, we need 10K machines to serve this traffic.
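The calculation above can be sketched in a few lines; `round_to_power_of_10` is a hypothetical helper (not from the article) that picks the closest power of 10 in log space:

```python
import math

def round_to_power_of_10(x):
    # Pick the power of 10 closest to x (closest in log space).
    return 10 ** round(math.log10(x))

users = round_to_power_of_10(800_000)   # ~10^6 users
bitrate = round_to_power_of_10(12e6)    # ~10^7 bps
egress = round_to_power_of_10(1e9)      # 10^9 bps per machine

machines = users * bitrate / egress
print(machines)  # 10000.0 machines
```

Note the answer is only an order-of-magnitude estimate: the exact figure would be 800,000 * 12e6 / 1e9 = 9,600 machines, so 10K is close enough for a design discussion.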

  3. The rule of 72. It lets you estimate how long it takes to double something in your system, like how long it will take for the traffic to double. For example, traffic increases by 10% every week. How long will it take to double? We can use this simple rule to estimate that:

    time = 72 / rate => time = 72 / 10 %/week ~ 7 weeks

    So it will take approximately 7 weeks to double the traffic.
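As a quick sketch, the rule can be compared against the exact compound-growth answer (the function name is my own, not from the article):

```python
import math

def doubling_time(rate_percent):
    # Rule of 72: periods to double ~ 72 / growth rate per period (in %).
    return 72 / rate_percent

estimate = doubling_time(10)                    # 7.2 weeks
exact = math.log(2) / math.log(1 + 10 / 100)    # ~7.27 weeks
print(estimate, exact)
```

The rule works because ln(2) ≈ 0.693, and 72 (rather than 69.3) is used since it divides evenly by many common rates.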

  4. Little's Law. This one is about treating a system as a queue. A lot of elements in a system design can be modeled as a queue with some processing time (W), some arrival rate (λ), and some number of elements inside it (L). For example, when we need to estimate how many requests are being processed at the same time, we can use this law. Imagine a service that processes a request in 100ms, and we are currently receiving 2 million requests per second. How many requests are being processed at the same time?

    L = λ * W => L = 2M * 0.1 = 200K

    So we can have 200K requests in flight at the same time. Assuming 8-core CPUs and one request served by one thread, we need 25K machines.
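The whole estimate can be sketched in a few lines; the 8 threads per machine is the assumption stated above, not a property of Little's Law itself:

```python
def concurrent_requests(arrival_rate_per_s, latency_s):
    # Little's Law: L = λ * W
    return arrival_rate_per_s * latency_s

in_flight = concurrent_requests(2_000_000, 0.1)  # 200,000 concurrent requests
threads_per_machine = 8                          # assumed: 8 cores, 1 thread per request
machines = in_flight / threads_per_machine
print(in_flight, machines)  # 200000.0 25000.0
```

A nice property of the law is that it holds regardless of the arrival distribution or service discipline, which is what makes it so handy for rough capacity estimates.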