Presently trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 8 Posts
  • 827 Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • Ram is cheap

    Kind of diverging from the larger point, but that’s true — RAM prices haven’t gone up as much as other things have over the years. I do kind of wonder if there are things that game engines could do to take advantage of more memory.

    I think that some of this comes from making games that have to run on both consoles and PCs: consoles have a pretty hard cap on how much memory they can have, so any work that goes into improving high-memory stuff is something that console players won’t see.

    checks Wikipedia

    The Xbox Series X has 16GB of unified memory.

    The PlayStation 5 Pro has 16GB of unified memory and 2GB of system memory.

    You can get a desktop with 256GB of memory today, roughly fourteen to sixteen times what those consoles have.

    Would have to be something that doesn’t require a lot of extra dev time or testing. Can’t do more geometry, I think, because that’d need memory on the GPU.

    considers

    Maybe something where the game can dynamically render something expensive at high resolution, and then move it into video memory.

    Like, Fallout 76 uses, IIRC, statically-rendered billboards of the 3D world for distant terrain features, like, stuff in neighboring and further off cells. You’re gonna have a fixed-size set of those loaded into VRAM at any one time. But you could cut the size of a given area that uses one set of billboards, and keep them preloaded in system memory.

    Or…I don’t know if game engines can procedurally generate simpler-geometry level-of-detail (LOD) objects for distant rendering or if human modelers still have to do that by hand. But if they can do it procedurally, increasing the number of LOD levels should just increase storage space, and keeping more of them preloaded in RAM just requires more RAM. You only have one level in VRAM at a time, so it doesn’t increase demand for VRAM. That’d provide for smoother transitions as distant objects come closer.




  • Well, there’s certainly that. But even then, I’d think that a lot of videos could be made to be more concise. I was actually wondering whether YouTube creators get paid based on the amount of time they have people watch, since that’d explain drawing things out. My impression, from what I could dig up in a brief skim, is that they’re indirectly linked — apparently, YouTube shows ads periodically, and the more ads shown, the more revenue the creator gets. So there would be some level of incentive to stretch videos out.


  • Deregulation might give some amount of an edge, but I really don’t think that in 2025, the major limitation on deployment of AI systems is overbearing regulation. Rather, it’s a lack of sufficient R&D work on the systems and their need for further technical development.

    I doubt that the government can do a whole lot to try to improve the rate of R&D. Maybe research grants, but I think that industry already has plenty of capital available in the US. Maybe work visas for people doing R&D work on AI.


  • Third, it has the network effect going for it. Nobody is going to watch videos on your platform if there’s only a couple dozen of them total. The sheer size and scope of YouTube means that no matter what you’re looking for, you can find something to watch.

    Yeah, though I think that you could avoid some of that with a good search engine that spans video-hosting services, as I don’t think that most people are engaging with the social-media aspect of YouTube. YouTube doesn’t have a monopoly on indexing YouTube videos.

    But the scale doesn’t hurt them, that’s for sure.


  • I did see some depth=1 or something like that to get only a certain depth of git commits, but that’s about it.

    Yeah, that’s a shallow clone. That reduces what it pulls down, and I did try that (you most likely want a bit more, probably also asking it to only pull down data from a single branch), but back when I was crashing into it, that wasn’t enough for the Cataclysm repo.
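
    For reference, a minimal sketch of the sort of invocation I mean (the repo URL here is from memory, so double-check it):

    # Shallow, single-branch clone: pulls far less data than a full clone.
    git clone --depth=1 --single-branch https://github.com/CleverRaven/Cataclysm-DDA.git
    # A blobless partial clone is another option on a thin pipe:
    # git clone --filter=blob:none https://github.com/CleverRaven/Cataclysm-DDA.git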

    It looks like it’s fixed as of early this year; I updated my comment above.


  • Thanks. Yeah, I’m pretty sure that that was what I was hitting. Hmm. Okay, that’s actually good — so it’s not a git bug, then, but something problematic in GitHub’s infrastructure.

    EDIT: On that bug, they say that they fixed it a couple months ago:

    This seems to have been fixed at some point during the last days leading up to today (2025-03-21), thanks in part to @MarinoJurisic 's tireless efforts to convince Github support to revisit this problem!!! 🎉

    So hopefully it’s dead even specifically for GitHub. Excellent. Man, that was obnoxious.


  • A bit of banging away later — I haven’t touched Linux traffic shaping in some years — I’ve got a quick-and-dirty script to set a machine up to temporarily simulate a slow inbound interface for testing.

    slow.sh test script
    #!/bin/bash
    # Linux traffic-shaping occurs on the outbound traffic.  This script
    # sets up a virtual interface and places inbound traffic on that virtual
    # interface so that it may be rate-limited to simulate a network with a slow inbound connection.
    # Removes induced slow-down prior to exiting.  Needs to run as root.
    
    # Physical interface to slow; set as appropriate
    oif="wlp2s0"
    
    # Create the virtual interface that will carry the redirected inbound traffic
    modprobe ifb numifbs=1
    ip link set dev ifb0 up
    
    # Redirect all inbound IP traffic on the physical interface to ifb0
    tc qdisc add dev $oif handle ffff: ingress
    tc filter add dev $oif parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0
    
    # Rate-limit everything leaving ifb0 (i.e. the redirected inbound traffic) to 1mbit
    tc qdisc add dev ifb0 root handle 1: htb default 10
    tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbit
    tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 1mbit
    
    echo "Rate-limiting active.  Hit Control-D to exit."
    cat    # block here until the user hits Control-D (EOF)
    
    # shut down rate-limiting
    tc qdisc delete dev $oif ingress
    tc qdisc delete dev ifb0 root
    ip link  set dev ifb0 down
    rmmod ifb
    

    I’m going to see whether I can still reproduce that git failure for Cataclysm on git 2.47.2, which is what’s in Debian trixie. As I recall, it got a fair bit of the way into the download before bailing out. Including the script here, since I think that the article makes a good point that there probably should be more slow-network testing, and maybe someone else wants to test something themselves on a slow network.
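
    A quick way to sanity-check that the shaping is actually live while testing: watch the ifb0 statistics while pulling something large; the counters should climb, and the download should crawl at roughly 1mbit.

    # In another terminal while slow.sh is running:
    sudo tc -s qdisc show dev ifb0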

    It’d probably be better to have something a little fancier that only slows traffic for one particular application — maybe create a “slow Podman container” and match on traffic going to that? — but this is good enough for a quick-and-dirty test.
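
    A rough, untested sketch of that container idea, skipping the ifb redirection entirely: shape egress on the host side of the container’s veth pair, since traffic leaving the host on that interface is traffic headed into the container. The interface name below is a placeholder; it’s whatever ip link shows for the container.

    # Host-side veth of the "slow" container; placeholder name, check `ip link`
    veth="veth0"
    # Egress on this interface is traffic going *into* the container, so plain
    # outbound shaping limits the container's download speed.
    tc qdisc add dev "$veth" root tbf rate 1mbit burst 32kb latency 400ms
    # To undo:
    # tc qdisc delete dev "$veth" root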


  • This low bandwidth scenario led to highly aggravating scenarios, such as when a web app would time out on [Paul] while downloading a 20 MB JavaScript file, simply because things were going too slow.

    Two major applications I’ve used that don’t deal well with slow cell links:

    • Lemmyverse.net runs an index of all Threadiverse instances and all communities on all instances, and is presently an irreplaceable resource for a user on here who wants to search for a given community. It loads an enormous amount of data for the communities page and has some sort of short timeout. Whatever it’s pulling down internally — I didn’t look — either isn’t cached or is a single file, so reloading the page starts over from the beginning. The net result is that it won’t work over a slow connection.

    • This may have been fixed, but git had a serious stretch of time where it would smash into timeouts and not work on slow links, at least to GitHub. This made it impossible to clone larger repositories; I remember failing to clone the Cataclysm: Dark Days Ahead repository, where one couldn’t even manage a shallow clone. This was greatly exacerbated by the fact that git does not presently have the ability to resume an interrupted download. I’ve generally wound up working around this by git cloning to a machine on a fast connection and then using rsync to pull the repository over to the machine on the slow link (sketched after this list), which, frankly, is a little embarrassing when one considers that git really is the premier distributed VCS out there in 2025 and really shouldn’t need to rely on that sort of workaround.
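
    The workaround looks roughly like this; the hostname and paths are placeholders:

    # On a box with a fast connection:
    ssh fastbox 'git clone https://github.com/CleverRaven/Cataclysm-DDA.git'
    # Then drag the finished clone over the slow link; rsync can resume if the
    # transfer dies partway through:
    rsync -az --partial fastbox:Cataclysm-DDA/ ~/src/Cataclysm-DDA/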


  • sabotage

    Microsoft’s interest in Nokia was being able to compete with what is now a duopoly between Google and Apple in phones. They wanted to own a mobile platform. I am very confident that they did not want their project to flop. That being said, they’ll have had their own concerns and interests. Maybe Nokia would have done better to go down the Apple or Google path, but for Microsoft, the whole point was to get Microsoft-platform hardware out there.


  • And Amazon says it will help train 4 million people in AI skills and “enable AI curricula” for 10,000 educators in the US by 2028, while offering $30 million in AWS credits for organizations using cloud and AI tech in education.

    So, at some point, we do have to move on policy, but frankly, I have a really hard time trying to predict what skillset will be particularly relevant to AI in ten years. I have a hard time knowing exactly what the state of AI itself will be in ten years.

    Like, sure, in 2025, it’s useful to learn the quirks and characteristics of LLMs or diffusion models to do things with them. I could sit down and tell people some of the things that I’ve run into. But…that knowledge also becomes obsolete very quickly. A lot of the issues and useful knowledge for working with, say, Stable Diffusion 1.5 are essentially irrelevant as regards Flux. For LLMs, I strongly suspect that there are going to be dramatic changes surrounding reasoning and retaining context. Like, if you put education time into training people on that, you run the risk that they don’t learn stuff that’s relevant over the longer haul.

    There have been major changes in how all of this works over the past few years, and I think that it is very likely that there will be continuing major changes.