Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.


While I agree that an LLM probably isn’t going to do the heavy lifting of making full use of Rust’s type system, I assume that Rust has some way of opting out of type-system checks. If your goal is just to get to a mechanically-equivalent-to-C++ Rust version, rather than making full use of the type system to make the code as correct as possible, you could maybe do that. It would at least provide a starting point from which to use the type system for additional checks.


“My goal is to eliminate every line of C and C++ from Microsoft by 2030,” Microsoft distinguished engineer Galen Hunt wrote in a recent LinkedIn post.
“Our strategy is to combine AI and Algorithms to rewrite Microsoft’s largest codebases,” he added. “Our North Star is ‘1 engineer, 1 month, 1 million lines of code.’”
Well, I expect it’ll be exciting, one way or another.


Killing DIMM production.
And Micron is one of the three major companies that make RAM chips.
There are other companies that make DIMMs. They just buy chips from the RAM chip manufacturers to do it. PNY or Kingston, say.
Micron was just doing a vertically-integrated thing where they did both the chips and DIMMs.
EDIT: Looking back at the article, it does say that.


I mean efficient in terms of memory utilization. Obviously there are gonna be associated costs and drawbacks to having remote compute.
Just that if the world has only N GB of RAM, you can probably get more out of it on some system running a bunch of VMs, where any inactive memory gets used by some other VM.


Apparently there are M.2 NVMe drives with DRAM caches.
I don’t know if anyone makes a pure DRAM NVMe drive — it’d forget its contents every boot — but if so, on Linux, you could make the block device a swap partition.
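A rough sketch of what that would look like, assuming the drive shows up as /dev/nvme1n1 (the device name is a placeholder; check lsblk for the real one, and note that mkswap destroys whatever is on the device):

```shell
# Write a swap signature to the block device (contents are lost
# on boot anyway for a DRAM-backed drive).
mkswap /dev/nvme1n1
# Enable it, at a higher priority than any slower swap devices.
swapon -p 50 /dev/nvme1n1
# Confirm that it's active.
swapon --show
```

All of this requires root, and you’d have to redo the mkswap/swapon at every boot (or script it), since the device starts empty.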


https://en.wikipedia.org/wiki/Zram
One of the mechanisms for compressing memory in Linux. Trades CPU time for effectively having more RAM. Recent versions of Fedora apparently have it on by default.
I’ve read that zswap, another mechanism, is preferable on newer systems with NVMe/SSD, where paging isn’t as painful; that only compresses pages going to swap, but requires that you actually have some swap. I haven’t used either.
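For the curious, a rough sketch of setting up zram swap by hand (requires root; Fedora’s zram-generator does the equivalent automatically, and the 4G size and zstd algorithm here are just example choices):

```shell
# Load the zram module and configure the device via sysfs.
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm  # compression algorithm
echo 4G > /sys/block/zram0/disksize          # uncompressed capacity
# Use it as swap, at a high priority so it's preferred over disk swap.
mkswap /dev/zram0
swapon -p 100 /dev/zram0

# zswap instead: compress pages on their way out to an existing
# disk swap device (so you need real swap configured as well).
echo 1 > /sys/module/zswap/parameters/enabled
```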
Probably someone should benchmark them for various workloads if systems are going to be running on much less memory for a while. Compressed memory was more of an edge-case thing that not many people cared about, but if operating with less memory is suddenly more important, it might get broader interest.
On Linux, it’s also possible to opt for lighter-on-memory alternatives to a lot of the software that you’re more or less committed to the Microsoft-provided version of on Windows: file browser, compositor, etc.


https://obsolescence.wixsite.com/obsolescence/cpm-internals
“CP/M requires a minimum of 20K RAM, although realistically, 48K is the bare minimum. Most systems have the maximum 64K.”
Sounds like it can’t address > 2¹⁶ bytes.
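Which tracks with the 16-bit address bus on the 8080/Z80-class CPUs it ran on; the arithmetic:

```shell
# A 16-bit address bus can distinguish 2^16 addresses, one byte each.
echo $((2 ** 16))             # 65536 bytes
echo $(( (2 ** 16) / 1024 ))  # 64 KiB, matching the 64K figure above
```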


Honestly, it’ll be more efficient to have memory in a datacenter, in that datacenter hardware will see higher average capacity utilization, but it’s gonna drive up datacenter prices too.


I don’t think that the NVMe shortage is that big of a deal in terms of using it for swap. It’s much cheaper than DRAM per GB, and you don’t need that much.


Windows 11 can run on 4GB; that’s the listed minimum requirement. The other day, I saw Best Buy selling a 4GB model, and I see some systems for sale online. I would imagine that it’s not ideal.


https://www.microsoft.com/en-us/windows/windows-11-specifications
Minimum system requirements for Copilot+ PCs
RAM: 16 GB DDR5/LPDDR5
I think that OpenAI has probably kind of bashed a hole in the bottom of Microsoft’s boat on the local AI stuff, if 8GB is going to be midrange.


“mid-range laptops to 8GB”
My not-terribly-new phone has 12GB of memory, and I’m pretty sure that Android is a lot lighter on memory than the Windows 11 that I suspect a lot of these are going to be running.


Biometrics are irrevocable. If you’re worried about stolen personal data, they are not what I would be moving to.


DDR4 RAM is presently cheaper than DDR5, but it has also increased dramatically in price recently.
https://pcpartpicker.com/trends/price/memory/
DDR4:
https://lemmy.today/pictrs/image/ed889201-f9e6-46ec-81a8-832f6bfc63ed.jpeg

DDR5:
https://lemmy.today/pictrs/image/35d03746-8d9c-443f-808f-8c88f2914b73.jpeg

Kismet can use a GPS sensor and multiple WiFi signal-strength readings, taken as one moves around, to do a pretty good job of mapping WAPs (wireless access points).
I’ve been kind of disappointed that F-Droid doesn’t appear to have any program using Android’s Location Services with high-resolution positioning to build a map of the location of nearby Bluetooth devices.
I have, on occasion, not been able to remember where I set my Bluetooth headphones.


Ah, thanks. Looks like they enabled zram in Fedora 33:
https://fedoraproject.org/wiki/Changes/SwapOnZRAM#Why_not_zswap?


I commented elsewhere in the thread that one option that can mitigate limited RAM for some users is to get a fast, dedicated NVMe swap device, stick a large pagefile/paging partition on it, and let the OS page out stuff that isn’t actively being used. Flash memory prices are up too, but are vastly cheaper than RAM.
My guess is that this generally isn’t the ideal solution for situations where one RAM-hungry game is what’s eating up all the memory, but for some things you mention (like wanting to leave a bunch of browser tabs open while going to play a game), I’d expect it to be pretty effective.
“dev tasks, builds…etc”
I don’t know how applicable it is to your use case, but there’s ccache to cache compilation results and distcc to do distributed C/C++ builds across multiple machines, if you can corral up some older machines.
It looks like Mozilla’s sccache does both caching and distributed builds, and supports Rust as well. I haven’t used it myself.
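A minimal sketch of how these are typically wired up, assuming ccache and sccache are installed (the 10G cache size is an arbitrary example):

```shell
# ccache: wrap the compilers so repeated compilations hit the cache.
export CC="ccache gcc"
export CXX="ccache g++"
ccache --max-size=10G   # cap the on-disk cache size
ccache -s               # show hit/miss statistics after a build

# sccache: same idea for Rust; cargo picks it up via RUSTC_WRAPPER.
export RUSTC_WRAPPER=sccache
cargo build
sccache --show-stats
```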


The big unknown that’s been a popular topic of discussion is whether Valve locked in a long-running contract for the hardware before the RAM price increases happened. If they did, then they can probably offer favorable prices, and they’re probably sitting pretty. If not, then they won’t.
My guess is that they didn’t, since:
They announced that they would hold off on announcing pricing because they’re still working out the hardware cost (which I suspect very likely includes the RAM situation).
I’d bet that they face a high degree of uncertainty in the number of units that the Steam Machine 2.0 will sell. The Steam Deck was an unexpectedly large success. Steam Machine 1.0 kinda flopped. Steam Machine 2.0 could go down either route. They probably don’t want to contract to have a ton of units built and then have huge oversupply. Even major PC vendors like Dell and Lenovo got blindsided and were unprepared, and I suspect that they’re in a much less risky position than Valve when committing to a given level of sales and long-running purchases.
I’ve even seen some articles propose that the radical increase in RAM prices might cause Steam Machine 2.0’s release to be postponed, if Valve didn’t have long-running contracts in place and doesn’t think that it can succeed at a higher price point than they anticipated.


“Partial workaround” would probably be more accurate. As the article body points out, DDR5 SO-DIMM prices are also up, albeit not as much as DDR5 DIMM prices.
But it’s substantial enough of a price difference to be interesting, especially with larger-capacity SO-DIMMs.
EDIT: For those not familiar, SO-DIMMs are “laptop memory” and DIMMs are “desktop memory”.