Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • I was about to say that I knew that COVID-19 caused video game sales to surge and then crash, and that there had been over-hiring in response to those sales, but a third seems like an insanely high number.

    Looking at WP, it sounds like the surge was actually that high…but for mobile OS games and console games, not PC, where the surge was much more muted. I also hadn’t realized that mobile OS video game spending had become that much larger than PC spending.

    https://en.wikipedia.org/wiki/2022–2025_video_game_industry_layoffs

    The COVID-19 pandemic led to an increase in interest in gaming globally, and was a period of dramatic expansion in the industry, with many mergers and acquisitions conducted. In many cases companies over-expanded, as this rapid COVID-era growth was unsustainable. The industry began to slow in 2022, and amid spiralling costs and a shift in consumer habits, layoffs began.

    The first few months of the COVID-19 pandemic brought about a sharp increase in revenue for the gaming sector worldwide as people looked for indoor entertainment.[56] According to IDC, in 2020, revenue from mobile games climbed by 32.8% to $99.9 billion, while expenditure on digital PC and Mac games increased by 7.4% to $35.6 billion.[57] The amount spent on home console games increased significantly as well, reaching $42.9 billion, up 33.9%.[58][59]

    In the ensuing years, this growing pattern abruptly stopped.[60] Revenue growth from mobile gaming fell by 15% in 2021, and then fell even further in 2022 and 2023, to -3.3% and -3.1%, respectively. Sales of PC and Mac games saw a brief rise of 8.7% in 2021, a drop of 1.4% in 2022, and a rebound of 2.1% in 2023.[61] Similarly, after a surge in 2020, console game spending plateaued in 2021 with growth at 0.7%, followed by a decline of 3.4% in 2022, before returning to growth at 5.9% in 2023.[59][62]

    EDIT: Based on those numbers, the surge in mobile and console sales combined was basically equivalent in value to the entirety of PC video game sales. It’s like the equivalent of the entire PC video gaming industry materialized, existed for a few years, and then disappeared.
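
    Running the Wikipedia figures as a quick back-of-envelope check (the only thing assumed beyond the quoted numbers is that each 2020 growth rate applies to the quoted 2020 revenue):

      # Back-of-envelope check of the claim above, using only the figures quoted
      # from Wikipedia: 2020 revenue and the 2020 growth rate for each segment.
      segments = {
          # name: (2020 revenue in $B, 2020 growth rate)
          "mobile":  (99.9, 0.328),
          "console": (42.9, 0.339),
      }

      surge = 0.0
      for name, (revenue_2020, growth) in segments.items():
          pre_covid = revenue_2020 / (1 + growth)   # implied pre-surge revenue
          surge += revenue_2020 - pre_covid

      print(f"combined mobile + console surge ≈ ${surge:.1f}B")  # ≈ $35.5B
      # ...versus $35.6B of total PC/Mac game spending in 2020.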


  • I mean, human environments are intrinsically made for humanoids to navigate. Like, okay, we put stairs places, things like that. So in theory, yeah, a humanoid form makes sense if you want to stick robots in a human environment.

    But in practice, I think that there are all kinds of problems to be solved with humans and robots interacting in the same space and getting robots to do human things. Even just basic safety stuff, much less being able to reasonably do general interactions in a human environment. Tesla spent a long time on FSD for its vehicles, and that’s a much-more-limited-scope problem.

    Like, humanoid robots have been a thing in sci-fi for a long time, but I’m not sold that they’re a great near-term solution.

    If you ever look at those Boston Dynamics demos, you’ll note that they do them in a (rather-scuffed-up) lab with safety glass and barriers and all that.

    I’m not saying that it’s not possible to make a viable humanoid robot at some point. But I don’t think that the kind of thing that Musk has claimed it’ll be useful for:

    “It’ll do anything you want,” Musk said. “It can be a teacher, babysit your kids; it can walk your dog, mow your lawn, get the groceries; just be your friend, serve drinks. Whatever you can think of, it will do.”

    …a sort of Rosie The Robot from The Jetsons, is likely going to be at all reasonable for quite some time.




  • I just am not sold that there’s enough of a market, not with the current games and current prices.

    There are several different types of HMDs out there. I haven’t seen anyone really break them up into classes, but if I were to take a stab at it:

    • VR gaming goggles. These focus on providing an expansive image that fills the peripheral vision, and cut one off from the world. The Valve Index would be an example.

    • AR goggles. I personally don’t like the term. It’s not that augmented reality isn’t a real thing, but we don’t really have the software out there to do AR things; while these could theoretically be used for augmented reality, that’s not their actual use case in 2026. But since the industry uses the term, I will. These tend to display an image covering part of one’s visual field, which one can see around and maybe through. Xreal’s offerings are an example.

    • HUD glasses. These have a much more limited display, or maybe none at all. These are aimed at letting one record what one is looking at less-obtrusively, maybe throw up notifications from a phone silently, things like the Ray-Ban Meta.

    • Movie-viewers. These things are designed around isolation, but don’t need head-tracking. They may be fine with relatively-low resolution or sharpness. A Royole Moon, for example.

    For me, the most-exciting prospect for HMDs is the idea of a monitor replacement. That is, I’d be most-interested in something that does basically what my existing displays do, but in a lower-power, more-portable, more-private form. If it can also do VR, that’d be frosting on the cake, but I’m really principally interested in something that can be a traditional monitor, but better.

    For me, at least, none of the use cases for the above classes of HMDs are super-compelling.

    For movie-viewing: it just isn’t that often that I feel I need more isolation than I can already get to watch movies. A computer monitor in a dark room is just fine. I can also put things on a TV screen or a projector that I already have sitting around and generally don’t bother to turn on. If I want to block out outside sound more, I might put on headphones, but I just don’t need more than that. Maybe it makes sense for someone who is required to be in noisy, bright environments, but it just isn’t a real need for me.

    For HUD glasses, I don’t really have a need for more notifications in my field of vision — I don’t need to give my phone a HUD.

    AR could be interesting if the augmented reality software library actually existed, but in 2026, it really doesn’t. Today, AR glasses are mostly used, as best I can tell, as an attempt at a monitor replacement, but the angular pixel density on them is poor compared to conventional displays. Like, in terms of the actual data that I can shove into my eyeballs in the center of my visual field, which is what matters for things like text, I’m better off with conventional monitors in 2026.
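
    To put rough numbers on the angular-pixel-density point, here’s a quick sketch; the monitor size, viewing distance, and glasses resolution/FOV below are illustrative assumptions, not measurements of any particular product:

      import math

      # Rough pixels-per-degree comparison. Assumed setup: a 27" 4K monitor
      # (~59.8 cm wide, 3840 px across) viewed at 60 cm, versus glasses putting
      # 1920 px across a ~45-degree horizontal field of view.
      def monitor_h_fov_deg(width_cm: float, distance_cm: float) -> float:
          """Horizontal angle a flat monitor subtends at a given viewing distance."""
          return math.degrees(2 * math.atan((width_cm / 2) / distance_cm))

      fov_monitor = monitor_h_fov_deg(59.8, 60)   # ~53 degrees
      ppd_monitor = 3840 / fov_monitor            # ~72 px/deg
      ppd_glasses = 1920 / 45                     # ~43 px/deg

      print(f"monitor ≈ {ppd_monitor:.0f} px/deg, glasses ≈ {ppd_glasses:.0f} px/deg")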

    VR gaming could be interesting, but the benefits just aren’t that massive for the games that I play. You get a wider field of view than a traditional display offers, and the ability to use your head as an input for camera control. There are some genres that I think it works well with today, like flight sims. If you were a really serious flight-simmer, I could see it making sense. But most genres just don’t benefit that much from it. Yeah, okay, you can play Tetris Effect: Connected in VR, but it doesn’t really change the game all that much.

    A lot of the VR-enabled titles out there are not (understandably, given the size of the market) really principally aimed at taking advantage of the goggles. You’re basically getting a port of a game probably designed around a keyboard and mouse, with some tradeoffs.

    And for VR, one has to deal with more setup time, software and hardware issues, and the cost. I’m not terribly price sensitive on gaming compared to most, but if I’m getting a peripheral for, oh, say, $1k, I have to ask how seriously I’m going to play any of the games that I’m buying this hardware for. I have a HOTAS system with flight pedals; it mostly just gathers dust, because I don’t play many WW2 flight sims these days, and the flight sims out there today are mostly designed around thumbsticks. I don’t need to accumulate more dust-collectors like that. And with VR the hardware ages out pretty quickly. I can buy a conventional monitor today and it’ll still be more-or-less competitive for most uses probably ten or twenty years down the line. VR goggles? Not so much.

    At least for me, the main things that I think I’d actually get some good out of VR goggles for:

    • Vertical-orientation games. My current monitors are landscape aspect ratio and don’t support rotating (though I imagine that someone makes a rotating VESA mount pivot, and I could probably use wlr-randr to make Wayland change the display orientation manually). Some arcade games in the past had something like a 3:4 portrait aspect ratio. If you’re playing one of those, you could maybe get some extra vertical space. But unless I need the resolution or portability, I can likely achieve something like that by just moving my monitor closer while playing such a game.

    • Pinball sims, for the same reason.

    • There are a couple of VR-only games that I’d probably like to play (none very new).

    • Flight sims. I’m not really a super-hardcore flight simmer. But, sure, for WW2 flight sims or something like Elite: Dangerous, it’s probably nice.

    • I’d get a little more immersiveness out of some games that are VR-optional.

    But…that’s just not that overwhelming a set of benefits to me.

    Now, I am not everyone. Maybe other people value other things. And I do think that it’s possible to have a “killer app” for VR, some new game that really takes advantage of VR and is so utterly compelling that people feel that they’d just have to get VR goggles so as to not miss out. Something like what World of Warcraft did for MMO gaming, say. But the VR gaming effort has been going on for something like a decade now, and nothing like that has really turned up.



  • Plus, I mean, unless you’re using a Threadiverse host as your home instance, how often are you typing its name?

    Having a hyphen is RFC-conformant:

    RFC 952:

    1. A "name" (Net, Host, Gateway, or Domain name) is a text string up
    to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
    sign (-), and period (.).  Note that periods are only allowed when
    they serve to delimit components of "domain style names". (See
    RFC-921, "Domain Name System Implementation Schedule", for
    background).  No blank or space characters are permitted as part of a
    name. No distinction is made between upper and lower case.  The first
    character must be an alpha character.  The last character must not be
    a minus sign or period.  A host which serves as a GATEWAY should have
    "-GATEWAY" or "-GW" as part of its name.  Hosts which do not serve as
    Internet gateways should not use "-GATEWAY" and "-GW" as part of
    their names. A host which is a TAC should have "-TAC" as the last
    part of its host name, if it is a DoD host.  Single character names
    or nicknames are not allowed.
    

    RFC 1123:

       The syntax of a legal Internet host name was specified in RFC-952
       [DNS:4].  One aspect of host name syntax is hereby changed: the
       restriction on the first character is relaxed to allow either a
       letter or a digit.  Host software MUST support this more liberal
       syntax.
    
       Host software MUST handle host names of up to 63 characters and
       SHOULD handle host names of up to 255 characters.
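
    Boiled down to code, the combined RFC 952 / RFC 1123 syntax looks roughly like this (a minimal sketch; it ignores the old 24-character and single-character-name restrictions and the "-GATEWAY"/"-TAC" naming conventions):

      import re

      # One dot-separated label: starts and ends with a letter or digit
      # (RFC 1123 relaxes the first character to allow digits), with letters,
      # digits, and hyphens in between.
      LABEL = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?$")

      def is_valid_hostname(name: str) -> bool:
          """Rough check against RFC 952 syntax as amended by RFC 1123."""
          if not name or len(name) > 255:
              return False
          return all(len(label) <= 63 and LABEL.match(label)
                     for label in name.split("."))

      print(is_valid_hostname("lemmy-world.example"))  # True: hyphens are fine
      print(is_valid_hostname("-lemmy.example"))       # False: leading hyphen
      print(is_valid_hostname("lemmy.example-"))       # False: trailing hyphen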
    


  • Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up.

    Eh…I don’t know if you’d call it “really serious hardware”, but when I picked up my 128GB Framework Desktop, it was $2k (without storage), and that box is often described as being aimed at the hobbyist AI market. That’s pricier than most video cards, but an AMD Radeon RX 7900 XTX GPU was north of $1k, an NVidia RTX 4090 was about $2k, and it looks like the NVidia RTX 5090 is presently something over $3k (and rising) on EBay, well over MSRP. None of those GPUs are dedicated hardware aimed at doing AI compute, just high-end cards aimed at playing games that people have used to do AI stuff on.

    I think that the largest LLM I’ve run on the Framework Desktop was a 106 billion parameter GLM model at Q4_K_M quantization. It was certainly usable, and I wasn’t trying to squeeze as large a model as possible on the thing. I’m sure that one could run substantially-larger models.

    EDIT: Also, some of the newer LLMs are MoE-based, and for those, it’s not necessarily unreasonable to offload expert layers to main memory. If a particular expert isn’t being used, it doesn’t need to live in VRAM. That relaxes some of the hardware requirements, from needing a ton of VRAM to just needing a fair bit of VRAM plus a ton of main memory.
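
    As a rough sense of scale (the bits-per-weight figure and the active-parameter count below are my assumptions, not anything measured):

      # Back-of-envelope memory math for a quantized MoE model like the 106B GLM
      # mentioned above. Assumed: Q4_K_M averages ~4.8 bits per weight, and only
      # ~12B parameters' worth of experts are active for any given token.
      BITS_PER_WEIGHT = 4.8

      def weights_gb(params_billion: float) -> float:
          return params_billion * 1e9 * BITS_PER_WEIGHT / 8 / 1e9

      total_gb = weights_gb(106)   # ~64 GB: the whole model, parked in main memory
      active_gb = weights_gb(12)   # ~7 GB: roughly what is hot for any given token

      print(f"whole model ≈ {total_gb:.0f} GB, active per token ≈ {active_gb:.0f} GB")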


  • That’s why they have the “Copilot PC” hardware requirement, because they’re using an NPU on the local machine.

    searches

    https://learn.microsoft.com/en-us/windows/ai/npu-devices/

    Copilot+ PCs are a new class of Windows 11 hardware powered by a high-performance Neural Processing Unit (NPU) — a specialized computer chip for AI-intensive processes like real-time translations and image generation—that can perform more than 40 trillion operations per second (TOPS).

    It’s not…terribly beefy. Like, I have a Framework Desktop with an APU and 128GB of memory that schlorps down 120W or something, and it substantially outdoes what you’re going to do on a laptop. And that in turn is weaker computationally than something like the big Nvidia hardware going into datacenters.

    But it is doing local computation.


  • I’m kind of more-sympathetic to Microsoft than to some of the other companies involved.

    Microsoft is trying to leverage the Windows platform that they control to do local LLM use. I’m not at all sure that there’s actually enough memory out there to do that, or that it’s cost-effective to put a ton of memory and compute capacity in everyone’s home rather than time-sharing hardware in datacenters. Nor am I sold that laptops — which many “Copilot PCs” are — are a fantastic place to be doing a lot of heavyweight parallel compute.

    But…from a privacy standpoint, I kind of would like local LLMs to at least be available, even if they aren’t as affordable as cloud-based stuff. And Microsoft is at least supporting that route. A lot of companies are going to be oriented towards just doing AI stuff in the cloud.


  • You only need one piece of (timeless) advice regarding what to look for, really: if it looks too good to be true, it almost certainly is. Caveat emptor.

    I mean…normally, yes, but because the situation has been changing so radically in such a short period of time, it probably is possible to get some bonkers deals in various niches while the market hasn’t stabilized yet.

    Like, a month and a half back, in early December, when prices had only been going up like crazy for a little while, I was posting links to some tiny retailers that still had RAM in stock at pre-price-increase rates, which I’d found on Google Shopping. IIRC the University of Virginia bookstore was one, as they didn’t check that purchasers were actually students. I warned that they’d probably be cleaned out as soon as scalpers got to them, and that if someone wanted memory, they should probably get it ASAP. Some days prior to that, there was a small PC parts store in Hawaii that had some (though it was out of stock by the next time I looked, when I mentioned the bookstore).

    That’s not to disagree with the point that @[email protected] is making, that this was awfully sketchy as a source, or your point that scavenging components off even a non-scam piece of secondhand non-functional hardware is risky. But in times of rapid change, it’s not impossible to find deals. In fact, it’s various parties doing so that cause prices to stabilize — anyone selling memory for way below market price is going to have scalpers grab it.


  • I’m not really a hardware person, but purely in terms of logic gates, making a memory circuit isn’t going to be hard. I mean, a lot of chips contain internal memory. I’m sure that anyone who can fabricate a chip can fabricate someone’s memory design containing some amount of memory.
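
    (To illustrate the “just logic gates” point: a single stored bit can literally be two cross-coupled NOR gates, which is the idea that SRAM scales up. A toy sketch, not how real DRAM cells work:)

      # One bit of storage from two cross-coupled NOR gates (an SR latch).
      def nor(a: int, b: int) -> int:
          return 0 if (a or b) else 1

      def sr_latch(s: int, r: int, q: int) -> int:
          """Apply set/reset inputs to a NOR latch and return the stored bit Q."""
          qn = nor(s, q)
          for _ in range(4):                 # let the feedback loop settle
              q, qn = nor(r, qn), nor(s, q)
          return q

      q = 0
      q = sr_latch(1, 0, q)   # set   -> Q = 1
      q = sr_latch(0, 0, q)   # hold  -> Q stays 1: that is the "memory"
      q = sr_latch(0, 1, q)   # reset -> Q = 0
      print(q)                # 0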

    For PC use, there’s also going to be some interface hardware. Dunno how much sophistication is present there.

    I’m assuming that the catch is that it’s not trivial to go out and make something competitive with what the PC memory manufacturers are making in price, density, and speed. Like, I don’t think that getting a microcontroller with 32 kB of onboard memory is going to be a problem. But that doesn’t really replace the kind of stuff that these guys are making.

    EDIT: The other big thing to keep in mind is that this is a short-term problem, even if it’s a big problem. I mean, the problem isn’t the supply of memory over the long term. The problem is the supply of memory over the next couple of years. You can’t just build a factory and hire a workforce and get production going the moment that someone decides that they want several times more memory than the world has been producing to date.

    So what’s interesting is really going to be solutions that can produce memory in the near term. Like, I have no doubt that given years of time, someone could set up a new memory manufacturer and facilities. But to get (scaled-up) production in a year, say? Fewer options there.