The Signal infra on AWS was evidently in exactly one part of the world.
We don’t necessarily know that. All I know is that AWS’s load balancers had issues in one region. It could be that they use that region for a critical load balancer but have local instances in other parts of the world to reduce latency.
I’m not talking about how Signal is currently set up (maybe it is that fragile), I’m talking about how it could be set up. If their issue is merely with the load balancer, they could add a bit of redundancy there without making their config that much more complex.
You mean like the hours it took Signal to recover on AWS, when it would have been minutes on their own infrastructure?
No, I mean if they had a proper distributed network of servers across the globe and were able to reroute traffic to other regions when one has issues, there could be minimal disruption to the service overall, with mostly local latency spikes for the impacted region.
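To make that concrete, here’s a rough sketch of the rerouting I mean, in Python. The region hostnames and the /health path are placeholders I made up, not anything Signal actually runs; the point is just that “prefer the nearest region, fall back to any healthy one” is not a lot of logic.

```python
import urllib.request

# Placeholder regional endpoints -- made up for illustration, not Signal's real hosts.
REGIONS = {
    "us-east":  "https://us-east.example.org",
    "eu-west":  "https://eu-west.example.org",
    "ap-south": "https://ap-south.example.org",
}

def is_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Probe a hypothetical /health endpoint; timeouts and HTTP errors count as down."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_region(preferred: str) -> str:
    """Use the preferred (usually nearest) region if it's up, else the first healthy fallback."""
    if is_healthy(REGIONS[preferred]):
        return preferred
    for name, url in REGIONS.items():
        if name != preferred and is_healthy(url):
            return name
    raise RuntimeError("no healthy region available")

# A client (or edge router) in the affected region eats some extra latency
# instead of a full outage.
print(pick_region("us-east"))
```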
My company uses AWS, and we had a disaster recovery mechanism almost trigger that would move our workload to a different region. The only reason we didn’t trigger it is because we only need the app to be responsive during specific work hours, and AWS recovered by the time we needed our production services available. A normal disaster recovery takes well under an hour.
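For what it’s worth, the decision on our side boiled down to something like the sketch below (the hours and the shape of the check are illustrative only, not our actual tooling):

```python
from datetime import datetime, time

# Illustrative business hours -- not our real window.
WORK_START, WORK_END = time(8, 0), time(18, 0)

def should_fail_over(primary_healthy: bool, now: datetime) -> bool:
    """Trigger the region failover only if the primary is down during the
    window where the app actually has to be responsive."""
    in_work_hours = now.weekday() < 5 and WORK_START <= now.time() <= WORK_END
    return (not primary_healthy) and in_work_hours

# An outage in the middle of the night -> wait and see, which is what we did.
print(should_fail_over(primary_healthy=False, now=datetime(2025, 10, 20, 3, 0)))  # False
```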
With a self-hosted datacenter/server room, if there’s a disruption, there is usually no backup, so you’re out until the outage is resolved. I don’t know if Signal has disaster recovery or whether they used it, I didn’t follow their end of things very closely, but it’s not difficult to do when you’re using cloud services, whereas it is difficult when you’re self-hosting. Colo is a bit easier, since you can keep hot spares in different regions or overbuild your infra so any node can go down.
It was a DNS issue with DynamoDB; the load balancer issue was a knock-on effect after the DNS issue was resolved. But the problem is that it was a ~15-hour outage, and a big reason behind that was that the load in that region is massive. Signal could very well have had their infrastructure in more than one availability zone, but since the outage affected the entire region, they were still screwed.
You’re right that this can be somewhat mitigated by having infrastructure in multiple regions, but if they don’t, the reason is cost. Multi-region redundancy costs an arm and a leg. You can get that same redundancy via colo DCs for a fraction of the cost, and when you do fix the root issue, you won’t then have your load balancers fail on you because, on top of your own systems, half the internet is trying to push its backlog of traffic through at once.
Yes, if you buy an off-the-shelf solution, it’ll be expensive.
I’m suggesting treating VPS instances like you would a colo setup: let the cloud providers manage the hardware, and keep the load balancing in-house. For Signal, this could be as simple as client-side latency/load checks, as in the sketch below. You can still colo in locations with heavier load; that’s how some Linux distros handle repo mirrors, and it works well. Signal’s data needs should be low enough that simple DB replicas would be sufficient.
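A minimal sketch of what I mean by client-side checks (hostnames are placeholders; a real client would cache the choice and re-probe on failure):

```python
import socket
import time

# Placeholder per-region entry points -- not real hostnames.
MIRRORS = ["us.chat.example.org", "eu.chat.example.org", "ap.chat.example.org"]

def connect_latency(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a bare TCP connect; unreachable hosts come back as +inf."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def pick_endpoint() -> str:
    """Pick whichever endpoint answers fastest; dead ones sort to the back."""
    return min(MIRRORS, key=connect_latency)

print(pick_endpoint())
```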