Yeah I call bullshit on that. I get why they’re investing money in it, but this is a moonshot and I’m sure they don’t expect it to succeed.
These data centers can be built almost anywhere in the world. And there are places with very predictable weather patterns, making solar/wind/hydro/etc. extremely cheap compared to nuclear.
Nuclear power is so expensive that it makes far more sense to build an entire solar farm and an entire wind farm, each capable of powering the data center on its own under overcast conditions or in moderate wind.
If you pick a good location, that’s likely to work out to running off your own power 95% of the time and selling power to the grid something like 75% of the time. For the 5% of the time when you can’t run off your own power (no wind at night is rare in a good location, and almost unheard of under thick cloud cover), you’d just draw power from the grid, power produced by other data centers that have excess solar or wind right now.
In the extremely rare event that power isn’t available even from the grid, you just shift your workload to another continent for an hour or so. Hardly anyone would notice an extra tenth of a second of latency.
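A rough back-of-envelope on that tenth of a second, as a sketch in Python; the fibre speed and the 12,000 km figure are assumptions, and routing detours would push the real number higher:

    # Extra round-trip latency from serving traffic off another continent.
    # Assumes light in fibre covers roughly 200,000 km/s (~2/3 of c) and a
    # straight-line path, so treat the result as a lower bound.

    FIBRE_SPEED_KM_PER_S = 200_000

    def extra_round_trip_ms(distance_km: float) -> float:
        """Added round-trip time, in milliseconds, for distance_km of fibre."""
        return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1000

    # Sydney to the US west coast is on the order of 12,000 km
    print(f"{extra_round_trip_ms(12_000):.0f} ms")  # ~120 ms, about a tenth of a second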
Maybe I’m wrong and nuclear power will be 10x cheaper one day. But so far it’s heading the other direction, about 10x more expensive than it was just a decade ago, because incidents like Fukushima and the tiny radioactive capsule lost in Western Australia have shown that current nuclear safety standards, even in some of the safest countries in the world, just aren’t good enough, forcing the industry to take additional measures (and absorb additional costs) going forward.
IMHO, data centers need to be somewhat close to important population areas in order to ensure low latency.
You need a spot with attainable land, room to scale, proximity to users, and decent infrastructure for power and connectivity. You can’t actually plop something out in the middle of BFE.
I remember reading a story about an email server that couldn’t send email more than about 500 miles. After a lot of digging, it turned out a timeout had been reset to effectively zero, which in practice meant roughly 3 milliseconds; any destination further than about 500 miles took longer than that to answer and got rejected for timing out.
In case anyone wants to read that: https://www.ibiblio.org/harris/500milemail.html
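For anyone curious how the number in that write-up falls out, here’s a quick sketch in Python; it uses the ~3 ms effective timeout and the vacuum speed of light from the linked story, so it’s illustrative rather than exact:

    # Distance a signal can cover, one way, before a ~3 ms connect timeout
    # fires. The linked write-up uses the speed of light in a vacuum; real
    # fibre and switching overhead only shrink the radius further.

    SPEED_OF_LIGHT_MILES_PER_S = 186_282

    def reachable_miles(timeout_ms: float) -> float:
        """One-way distance covered at c before the timeout expires."""
        return SPEED_OF_LIGHT_MILES_PER_S * timeout_ms / 1000

    print(f"{reachable_miles(3):.0f} miles")  # ~559 miles, hence 'about 500 miles'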
For the majority of applications you need data centers for, latency just doesn’t matter. Bandwidth, storage space, and energy costs, for example, are all generally far more important.
“need to be somewhat close to important population areas”
They really don’t. I live in regional Australia - the nearest data center is 1300 miles away. It’s perfectly fine. I work in tech and we had a small data center (50 servers) in our office with a data-center-grade fibre link - we got rid of it because it was a waste of money. Even the latency difference between 1300 miles and 20 feet wasn’t worth it.
To be clear, having 0.1ms of latency was noticeable for some things, but nothing that really matters. And certainly not for AI, where you’re often waiting 5 seconds or even a full minute.
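To put numbers on the 1300-miles-versus-20-feet comparison, a small sketch in Python; the fibre speed is an assumption (~2/3 of c) and it only counts propagation, not switching or the network stack, which is where the ~0.1ms comes from locally:

    # Round-trip propagation delay over fibre, assuming ~2/3 the speed of
    # light and a straight-line path (a best case for both distances).

    FIBRE_SPEED_MILES_PER_S = 124_000

    def round_trip_ms(distance_miles: float) -> float:
        """Round-trip propagation delay in milliseconds."""
        return 2 * distance_miles / FIBRE_SPEED_MILES_PER_S * 1000

    print(f"{round_trip_ms(1300):.1f} ms")       # ~21.0 ms to the remote data center
    print(f"{round_trip_ms(20 / 5280):.5f} ms")  # ~0.00006 ms for 20 feet; switching dominates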