I declare all resources mine, purchased with a fancy loan. Now that all resources are mine, they are all worth 100,000 times more than before. Don’t worry, if you can’t afford to pay 100,000x more you can rent some of my stuff! Also, now that I own everything, I’m Too Big To Fail and will need a bailout when I can’t pay my fancy loan.
This is the healthiest, most efficient economy possible. Aliens visit earth and you want to know why? To study our highly advanced economic system of course!
I really hope this is a temporary supply bottleneck. I understand the constraints of producing chips and highly specialized hardware but AI demand is only going to go up from here.
I’m optimistic a game changer gets whipped out of thin air
What’s next, PCB producers?
I’m afraid that a lot of the infrastructure will be heavily catered towards DoD computing resources. This means that after the components hit the end of their lifecycle, they aren’t released to the used markets on eBay; instead they are shredded and rendered electronic waste.
All of those GPUs will be irrelevant in 24 months, and almost all of them are useless to consumers.
It’s by design, it’s intentional.
They want you hooked to their cloud teat.
A lot of scientists, tinkerers, 3D renderers and such would love cheap A100s and up.
On the contrary, I don’t think they will get cheaper. Somehow they’ll get bought back and trashed (like Nvidia has done in the past), hoarded, tasked with busywork, something like that.
Three months ago, watching RAM prices skyrocketing and anticipating this exact scenario, I bought five 10 TB drives.
Best decision I’ve made in a while.
Nice. I got three 14 TBs around the same time for the same reason. Glad I did.
I’m so fucking over this bullshit.
Bullshit. I call 100% bullshit.
Wdym? Do you believe the manufacturers would try to convince you they’re out of stock to create scarcity and increase prices?!? Do you know how silly that idea is?! \s
Sort of; there used to be way more HDD manufacturers, and then they all talked each other into dropping them for SSDs. Now a sudden need arises and there are no HDDs.
Those datacenters are real. AI companies aren’t using their money to build empty buildings. They’re buying enormous amounts of computer hardware off the market to fill them.
https://blogs.microsoft.com/blog/2025/09/18/inside-the-worlds-most-powerful-ai-datacenter/
Today in Wisconsin we introduced Fairwater, our newest US AI datacenter, the largest and most sophisticated AI factory we’ve built yet. In addition to our Fairwater datacenter in Wisconsin, we also have multiple identical Fairwater datacenters under construction in other locations across the US.
These AI datacenters are significant capital projects, representing tens of billions of dollars of investments and hundreds of thousands of cutting-edge AI chips, and will seamlessly connect with our global Microsoft Cloud of over 400 datacenters in 70 regions around the world. Through innovation that can enable us to link these AI datacenters in a distributed network, we multiply the efficiency and compute in an exponential way to further democratize access to AI services globally.
An AI datacenter is a unique, purpose-built facility designed specifically for AI training as well as running large-scale artificial intelligence models and applications. Microsoft’s AI datacenters power OpenAI, Microsoft AI, our Copilot capabilities and many more leading AI workloads.
The new Fairwater AI datacenter in Wisconsin stands as a remarkable feat of engineering, covering 315 acres and housing three massive buildings with a combined 1.2 million square feet under roofs. Constructing this facility required 46.6 miles of deep foundation piles, 26.5 million pounds of structural steel, 120 miles of medium-voltage underground cable and 72.6 miles of mechanical piping.
Unlike typical cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email or business applications, this datacenter is built to work as one massive AI supercomputer using a single flat networking interconnecting hundreds of thousands of the latest NVIDIA GPUs. In fact, it will deliver 10X the performance of the world’s fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.
Hard drives haven’t been impacted nearly as much as memory, which is the real bottleneck. But when just one AI company, OpenAI, rolls up and buys 40% of global memory production capacity’s output, it’d be extremely unlikely that we wouldn’t see memory shortages for at least a while, since it takes years to build new production capacity. And then you have other AI companies who want memory, plus one-off purchases from companies who are extending their PC upgrade cycles because of the current shortage, all competing for the same supply.

If you have less supply relative to demand of a product, the price goes up until the amount of memory people are willing to buy at that new price point matches what’s actually available. Everyone else gets priced out. And that won’t change until either demand drops (which is what people talking about a ‘bubble popping’ think might occur, if the AI-infrastructure-building effort stops sooner than expected), or enough new production capacity comes online to provide enough supply.

Memory manufacturers are building new factories and expanding existing ones, and we’ve had articles about that. But it takes years to do that.
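The price mechanism described above (price rising until demand matches the reduced supply) can be sketched as a toy model. All numbers here are made up for illustration; this is not real market data, just the shape of the argument:

```python
# Toy linear demand curve: quantity demanded falls as price rises.
# base_qty and slope are arbitrary illustrative constants.

def quantity_demanded(price, base_qty=100.0, slope=0.5):
    """Units buyers want at a given price (simple linear demand)."""
    return max(0.0, base_qty - slope * price)

def clearing_price(supply, base_qty=100.0, slope=0.5):
    """Price at which quantity demanded equals the available supply."""
    return max(0.0, (base_qty - supply) / slope)

normal_supply = 80.0
print(clearing_price(normal_supply))    # → 40.0

# One large buyer takes 40% of output; supply to everyone else shrinks.
reduced_supply = normal_supply * 0.6
print(clearing_price(reduced_supply))   # → 104.0 — everyone else is priced up
```

Even in this crude sketch, cutting supply by 40% more than doubles the clearing price, which is the “everyone else gets priced out” effect.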
25% of the datacenters being constructed right now will go bankrupt.
The majority of this AI surge is for datacenters that have neither power nor water.
It’s all gonna end up being shredded, if it exists at all.
That’s not the case here, I agree, but to be honest it wouldn’t be the first time that some company created artificial scarcity to keep prices up.
Can’t wait for the bubble to pop and the used SAS HDD market to overflow with cheap hardware. Same with RAM.
Same with RAM.
Unfortunately, the RAM shortage is caused by a RAM component being diverted to specialized packages that can’t easily be converted into normal RAM. So even a bubble bursting won’t bring RAM onto the market.
My next computer is probably gonna be running ECC RAM because of this concern.
I don’t know if you’re saying this, so my apologies if I’m misunderstanding what you’re saying, but this isn’t principally ECC DIMMs that are being produced.
I suppose that a small portion of AI-related sales might go to ECC DDR5 DIMMs, because some of that hardware will probably use it, but what they’re really going to be using in bulk is high-bandwidth-memory (HBM), which is going to be non-modular, connected directly to the parallel compute hardware.
HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power, and in a substantially smaller form factor.[13] This is achieved by stacking up to eight DRAM dies and an optional base die which can include buffer circuitry and test logic.[14] The stack is often connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer.[15][16] Alternatively, the memory die could be stacked directly on the CPU or GPU chip. Within the stack the dies are vertically interconnected by through-silicon vias (TSVs) and microbumps. The HBM technology is similar in principle but incompatible with the Hybrid Memory Cube (HMC) interface developed by Micron Technology.[17]
The HBM memory bus is very wide in comparison to other DRAM memories such as DDR4 or GDDR5. An HBM stack of four DRAM dies (4‑Hi) has two 128‑bit channels per die for a total of 8 channels and a width of 1024 bits in total. A graphics card/GPU with four 4‑Hi HBM stacks would therefore have a memory bus with a width of 4096 bits. In comparison, the bus width of GDDR memories is 32 bits, with 16 channels for a graphics card with a 512‑bit memory interface.[18] HBM supports up to 4 GB per package.
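The bus-width figures in the quoted passage above can be checked with simple arithmetic:

```python
# Reproduce the HBM bus-width arithmetic from the quoted Wikipedia passage.
channels_per_die = 2        # two 128-bit channels per DRAM die
bits_per_channel = 128
dies_per_stack = 4          # a 4-Hi stack

bits_per_stack = dies_per_stack * channels_per_die * bits_per_channel
print(bits_per_stack)            # → 1024 bits per stack

stacks = 4                       # a GPU with four 4-Hi HBM stacks
print(stacks * bits_per_stack)   # → 4096-bit memory bus

# Compare: GDDR5 uses 32-bit channels, so a 512-bit interface needs:
print(512 // 32)                 # → 16 channels
```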
I have been in a few discussions as to whether it might be possible to use, say, discarded PCIe-based H100s as swap (something for which there are existing, if imperfect, projects for Linux) or directly as main memory (which apparently there are projects to do with some older video cards using Linux’s HMM, though there’s a latency cost due to needing to traverse the PCIe bus; it’s going to be faster than swap, but still have some performance hit relative to a regular old DIMM, even if the throughput may be reasonable).
It’s also possible that one could use the hardware as parallel compute hardware, I guess, but the power and cooling demands will probably be problematic for many home users.
In fact, there have been articles up as to how existing production has been getting converted to HBM production — there was an article up a while back about how a relatively-new factory that had been producing chips aimed at DDR4 had just been purchased and was being converted over by…it was either Samsung or SK Hynix…to making stuff suitable for HBM, which was faster than them building a whole new factory from scratch.
It’s possible that there may be economies of scale that will reduce the price of future hardware, if AI-based demand is sustained (instead of just principally being part of a one-off buildout) and some fixed costs of memory chip production are mostly paid by AI users, where before users of DIMMs had to pay them. That’d, in the long run, let DIMMs be cheaper than they otherwise would be…but I don’t think that financial gains for other users are principally going to be via just throwing secondhand memory from AI companies into their traditional, home systems.
Ah, thanks for the information. I was already aware most of it was going to GPU type hardware. I just naturally assumed all those gpus need servers with lots of ram.
AI doesn’t necessarily use ddr tho, they stick to HBM which is a different thing entirely.
Yes, I understand that.
AMD platforms with ECC support could be insanely valuable in the future.
Please pop, PLEASE POP
I guess my combined 12TB across five drives ranging in age from 13 to six years old will have to suffice. The only reason I’d need to buy a new drive is if a couple of my current drives die. Which does happen on occasion, of course.
Also, fuck AI, and the assholes who made it, and everyone who currently, personally profits off it. This bubble popping will be the catalyst to take down the entire world economy. MMW.
Yeah fortunately mine are all in RAID arrays, hopefully none die in the next year or I may have to run degraded.
Just in case: https://serverpartdeals.com/ Still the same sort of prices you expect, but decent warranties on re-certified enterprise HDDs.
Oddly, I’ve never had an HDD or SSD ever die on me. I’ve got old ass ones that aren’t even a GB that I’ve torn apart and thrown away. My oldest SSD just got removed and put in a cabinet because 256gb is just too small.
glad i kept all the ones pulled from previous ssd upgrades and ewaste that went through here. i have several i have yet to reuse.
the shit-tier shingled ones i got a couple years ago to store media files had been relatively stable for years on price at ~ 100-110usd. they’re now 170+
SSDs, microSD cards, and now HDDs? They’re really pushing it.
Don’t forget RAM.
I just bought a 12 TB WD off their site last month. Checked right now: “Sales Inquiry” instead of “Add to cart”. Rip…
They could garner good will by setting aside a % of their stock to sell to red-blooded people at a lower price…
If someone walks into a grocery store before a storm and wants to buy 10 pallets of water, the store tells them to fuck off.
Then scalpers would buy them and jack the price up.
limit to one per customer per day like most tcg sellers do with pokemon and magic.
I’m sure there’s ways around that. Different cards, PO boxes, email addresses, names. Even if they had only 4 ways of buying that’s still almost 30 buys a week times however many scalpers there are.
obviously there will be a handful of people pulling that shit but every system basically assumes that 10% of the people using the system will use it unethically to their advantage. just balance around that, as the vast majority aren’t exploitive scumbags.
You’re asking a lot of an already uncaring system. 10% can do plenty of damage.
i’m not asking anything, i’m saying every system assumes 10% scumbaggery. its capitalism, baby!
That’s because they’re guaranteed to sell all the water when there’s a storm anyway. There’s a reason there’s laws against raising prices in an emergency.
I consider not having access to reasonably-priced hardware an emergency ;(
That’s 1 day. Guaranteed if someone walked in and said “I want to buy all the water you can sell for the next 9 months”, they’d be singing a very different tune.
When Trump threatened tariffs I went ahead and bought 50 TB of storage. With my expansion rate at the time, it would easily last me until the end of Trump’s term, and maybe a decade if I rationed.
Turns out that was one of my best judgment calls to date, just not for the reason I thought.
I bought 10kg of Playadito Yerba Mate at the beginning of 2025, should also have thought about storage, now I have to start cleaning up.
Never thought I would see Playadito being mentioned here, but nice to see another fellow mate drinker.
Wait til all these projects crash, burn, and get liquidated. Gonna be an amazing secondary market for brand new, unused bulk hardware.
Not really. They’re not making consumer grade stuff, they’re making hardware for data centers so unless you’re planning on doing a DIY data center you’re not buying the hardware. Hard drives are likely an exception.
You’re more likely to see cheap VPS services than cheap secondhand hardware.
I’ve watched enough Bringus to know that anything can be used for gaming if you’re stubborn enough.
Oh absolutely, but I doubt anyone is paying the equivalent of a 5090 to get the performance of a 3060. Server GPUs aren’t optimized for gaming.
Sure, but that will only be in the immediate term, especially as the manufacturers rush to produce consumer and industry shit once the AI cow goes bust. There’ll be an immediate rush of these things being sold at 5090 prices, only to drop down to 1090-or-lower prices once they start liquidating stock to write off and the scrappers start selling these things for pennies on the dollar.
I don’t know about that; many people buy used servers and JBODs. I wouldn’t say that consumers don’t buy them.
Being in the self-hosted community I know people buy used enterprise servers to set up their own services, but consumers who buy enterprise servers probably make up less than 1% of all the consumers who buy hardware.
deleted by creator
But you won’t be able to afford it because the market crash means you lose your job.
I think people don’t realise that if AI fails, it’s pretty much guaranteed to collapse the US economy.
I’ve lived through several “once in a lifetime” crashes.
Fine.
The market is a joke, needs massive corrections.
Good luck everyone.
Don’t you worry, it’s gonna have a global impact again, just like it did in ’08. Imagine losing your job in Italy, for instance, because some bankers got ultra rich in the US. What a dumb fucking world.
Ha! Jokes on you. My state is so poor it was the least affected by the 2008 crisis. Wait, that still sucks, only more…
I don’t think 2008 really had a significant effect in Australia. I don’t remember hearing much about it.
It had a huge impact, though stimulus packages that Labor created under Kevin Rudd meant that Australia had the best recovery out of all the OECD nations.
Labor
I would have bet that the Australian English spelling would be like the British English spelling, since Australian English tends towards the British English end of the spectrum rather than the American English. Especially since names tend to persist, and it’s probably been around for a while.
goes to check Wikipedia to see whether it was renamed
Interesting. Not exactly. The article uses “labour”, and has a section dealing specifically with this:
https://en.wikipedia.org/wiki/Australian_Labor_Party
In standard Australian English, the word labour is spelt with a u. However, the political party uses the spelling Labor, without a u. There was originally no standardised spelling of the party’s name, with Labor and Labour both in common usage. According to Ross McMullin, who wrote an official history of the Labor Party, the title page of the proceedings of the Federal Conference used the spelling “Labor” in 1902, “Labour” in 1905 and 1908, and then “Labor” from 1912 onwards.[11] In 1908, James Catts put forward a motion at the Federal Conference that “the name of the party be the Australian Labour Party”, which was carried by 22 votes to 2. A separate motion recommending state branches adopt the name was defeated. There was no uniformity of party names until 1918 when the Federal party resolved that state branches should adopt the name “Australian Labor Party”, now spelt without a u. Each state branch had previously used a different name, due to their different origins.[12][a]
Although the ALP officially adopted the spelling without a u, it took decades for the official spelling to achieve widespread acceptance.[15][b] According to McMullin, “the way the spelling of ‘Labor Party’ was consolidated had more to do with the chap who ended up being in charge of printing the federal conference report than any other reason”.[19] Some sources have attributed the official choice of Labor to influence from King O’Malley, who was born in the United States and was reputedly an advocate of English-language spelling reform; the spelling without a u is the standard form in American English.[20][21]
Andrew Scott, who wrote “Running on Empty: ‘Modernising’ the British and Australian Labour Parties”, suggests that the adoption of the spelling without a u “signified one of the ALP’s earliest attempts at modernisation”, and served the purpose of differentiating the party from the Australian labour movement as a whole and distinguishing it from other British Empire labour parties. The decision to include the word “Australian” in the party’s name, rather than just “Labour Party” as in the United Kingdom, Scott attributes to “the greater importance of nationalism for the founders of the colonial parties”.[22]
A bunch of my friends were made redundant. Some had visas dependent on their work (employer sponsored) and had to leave Australia. Heck, we call it the GFC, as an acronym. We just didn’t have a general recession.
I didn’t feel anything of it in Europe.
It destroyed the budget of scientific research in Spain. I was a Spanish researcher then. There was a point when universities couldn’t pay us the grant that the government had already assigned to us. Some of my peers missed rent. I heard stories of Spanish PhD. students stranded all over the world. I quit research (my dream job at the time) because of this crisis and never came back to it.
Happy you felt nothing in your happy little bubble.
I know plenty of people currently homeless in Europe who originally lost their jobs following the 2008 crash.
Do it, do it, do it, do it!
It’s not if but when. Hopefully sooner rather than later. And you’re a fool if you think the implications won’t be felt around the world. Just like they were when Americans rammed the housing market into the ground. We live in a global economy.
Yep, either way, your job is toast.
AI succeeds: AI takes your job.
AI fails: Economy crashes and you lose your job due to the crash.
Meh, not really a full collapse. Just like 75% of it and a huuuuge recession. Or maybe a “Tiny Depression”? Basically, 10 years to recover. Which is where we’re going anyway, with or without AI.
10 years on top of the generations to recover from economic and social policy being shat out by a deranged geriatric.
That would be awesome thank you
And if AI doesn’t fail, people will be unemployed.
Don’t threaten us with a good time.
Yes, of course
Except I doubt anyone will be doing much with a 32-core Xeon CPU that Windows snobs can’t even run Windows on without a super giga 1000€ license for more than 16-core CPUs
And the CUDA-only, fanless, outputless GPUs will also be kinda useless, especially because they all need a special setup to force-feed air through the entire rack to not overheat
You clearly have a very restricted imagination about what ideas people could come up with to use such hardware…
Windows snobs can’t even run Windows on without a super giga 1000€ license for more than 16-core CPUs
I’m not using Windows servers at home but if I did then a license wouldn’t be a factor when deciding what hardware to buy.
And on top: ROK ISOs from hardware vendors like HPE (and probably Lenovo) don’t have the trial time limit and can be run indefinitely without a license.
You only need to satisfy the requirement of running a supported motherboard during boot of the ISO.
Well… too bad that I can (unlike in ESXi) modify the manufacturer string in Proxmox to say whatever I want ¯\_(ツ)_/¯

Could you point me to more info about that?
At work we sell servers by HPE.
We create install ISOs from the included install ISO.

At boot, the installer checks if the system is manufactured by the vendor.
If it is: It continues boot and offers you the installer options
If it is not: It will fail with a message that the manufacturer doesn’t match.

On ESXi you need to pass the argument smbios.reflectHost = true (or something along those lines)
Dunno how HPE customized the install.wim
But you can probably get those for cheap on eBay and maybe compare the WIMs for differences.

Yeah, I’ve reinstalled Windows on Dells and it just works without any hassle because it reads something in the BIOS that says it already has a Windows license. I was wondering what it reads that would be configured in Proxmox to allow the same. It would be nice to be able to create a Windows VM on the fly without any license setup or license bypass tricks during/after install. Instead it would just work because Proxmox tells it to.
Once installed, it doesn’t bother anymore with those checks.
So right now my state is essentially an eternal Windows VM that doesn’t let me change the wallpaper ¯\_(ツ)_/¯
a super giga 1000€ license for more than 16-core CPUs

I mean, it’s 64/128/256 cores for Home/Pro/Workstation, so not really. People buying aftermarket server parts that want Windows can probably figure out how to type irm https://get.activated.win/ | iex if they don’t want to pay for it anyways lol.
Year of the Linux Desktop! Any day now… any day… huffs copium
God, I hope so…