Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 18 Posts
  • 1.13K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • For some workloads, yes. I don’t think that the personal computer is going to go away.

    But it also makes a lot of economic and technical sense for some of those workloads.

    Historically — like, think up to about the late 1970s — useful computing hardware was very expensive. And most people didn’t have a requirement to keep computing hardware constantly loaded. In that kind of environment, we built datacenters and it was typical to time-share them. You’d use something like a teletype or some other kind of thin client to access a “real” computer to do your work.

    What happened at the end of the 1970s was that prices came down enough and there was enough capability to do useful work to start putting personal computers in front of everyone. You had enough useful capability to do real computing work locally. They were still quite expensive compared to the great majority of today’s personal computers:

    https://en.wikipedia.org/wiki/Apple_II

    The original retail price of the computer was US$1,298 (equivalent to $6,700 in 2024)[18][19] with 4 KB of RAM and US$2,638 (equivalent to $13,700 in 2024) with the maximum 48 KB of RAM.

    But they were getting down to the point where they weren’t an unreasonable expense for people who had a use for them.

    At the time, telecommunications infrastructure was much more limited than it is today, so using a “real” computer remotely from many locations was a pain, which also made the PC make sense.

    From about the late 1970s to today, the workloads that have dominated most software packages have been more-or-less serial computation. While “big iron” computers could do faster serial compute than personal computers, it wasn’t radically faster. Video games with dedicated 3D hardware were a notable exception, but those were latency sensitive and bandwidth intensive, especially relative to the available telecommunication infrastructure, so time-sharing remote “big iron” hardware just didn’t make a lot of sense.

    And while we could — and to some extent, did — ramp up serial computational capacity by using more power, there were limits on the returns we could get.

    However, AI workloads have notably different characteristics: they require parallel processing, they run on expensive hardware, and we can throw a lot of power at them to get meaningful, useful increases in compute capability.

    • Just like in the 1970s, the hardware to do competitive AI stuff for many things that we want to do is expensive. Some of that is just short-term, like the fact that we don’t have the memory manufacturing capacity in 2026 to meet demand, so prices will rise until enough buyers are priced out that the available chips go to the highest bidders. That’ll resolve itself one way or another, like via a buildout of memory manufacturing capacity. But some of it is also that the quantities of memory involved are just expensive. Even at pre-AI-boom prices, if you want the kind of memory that it’s useful to have available — hundreds of gigabytes — you’re going to be significantly increasing the price of a PC, and that’s before whatever the cost of the compute hardware is.

    • Power. Currently, we can usefully scale out parallel compute by using a lot more power. Under current regulations, a laptop that can go on an airline in the US can have a 100 Wh battery and a 100 Wh spare, separate battery. If you pull 100 W on a sustained basis, you blow through a battery like that in an hour. A desktop can go further, but it’s limited by heat and cooling, will start running into the limit of a US household circuit at something like 1800 W, and will be dumping a very considerable amount of heat into the house at that point. Current Nvidia hardware pulls over 1 kW. A phone can’t do anything like any of the above. The power and cooling demands range from totally unreasonable to at least somewhat problematic. So even if we work out the cost issues, I think that it’s very likely that the power and cooling issues will be a fundamental bound.

    In those conditions, it makes sense for many users to stick the hardware in a datacenter with strong cooling capability and time-share it.

    Now, I personally really favor having local compute capability. I have a dedicated computer, a Framework Desktop, to do AI compute, and also have a 24GB GPU that I bought in significant part to do that. I’m not at all opposed to doing local compute. But at current prices, unless that kind of hardware can provide a lot more benefit to most people than it currently does, most people are probably not going to buy local hardware.

    If your workload keeps the hardware active 1% of the time — and use as a chatbot might be about that — then it is something like a hundred times cheaper, in terms of hardware cost, to time-share the hardware. If the hardware is expensive — and current Nvidia hardware runs tens of thousands of dollars, too rich for most people’s taste unless they’re getting Real Work done with it — it looks a lot more appealing to time-share it.
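
    To make that concrete, here’s a back-of-the-envelope sketch; the $30,000 figure is purely illustrative, not a real price:

        # rough utilization math; all numbers are made up for illustration
        hardware_cost=30000   # dollars for a dedicated accelerator
        utilization=1         # percent of the time a single user keeps it busy
        users=$((100 / utilization))   # roughly how many such users could share it
        echo "dedicated:  \$$hardware_cost of hardware per user"
        echo "timeshared: \$$((hardware_cost / users)) of hardware per user"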

    There are some workloads for which there might be constant load, like maybe constantly analyzing speech, doing speech recognition. For those, then yeah, local hardware might make sense. But…if weaker hardware can sufficiently solve that problem, then we’re still back to the “expensive hardware in the datacenter” thing.

    Now, a lot of Nvidia’s costs are going to be fixed, not variable. And assuming that AMD and so forth catch up, prices in a competitive market will come down — with scale, one can spread fixed costs out, and only the variable costs will place a floor on hardware prices. So I can maybe buy that, if we hit limits that mean that buying a ton of memory isn’t very interesting, prices will come down. But I am not at all sure that the “more electrical power provides more capability” aspect will change. And as long as that holds, it’s likely going to make a lot of sense to use “big iron” hardware remotely.

    What you might see is a computer on the order of, say, a 2022 computer on everyone’s desk…but with a lot of parallel compute workloads farmed out to datacenters, which have computers more capable of doing that parallel compute.

    Cloud gaming is a thing. I’m not at all sure that the cloud will dominate there, even though it can leverage parallel compute. There, latency and bandwidth are real issues. You’d have to put enough datacenters close enough to people to make that viable and run enough fiber. And I’m not sure that we’ll ever reach the point where it makes sense to do remote compute for cloud gaming for everyone. Maybe.

    But for AI-type parallel compute workloads, where the bandwidth and latency requirements are a lot less severe, and the useful returns from throwing a lot of electricity at the thing are significant…then it might make a lot more sense.

    I’d also point out that my guess is that AI probably will not be the only major parallel-compute application moving forward. Unless we can find some new properties in physics or something like that, we just aren’t advancing serial compute very rapidly any more; things have slowed down for over 20 years now. If you want more performance, as a software developer, there will be ever-greater relative returns from parallelizing problems and running them on parallel hardware.

    I don’t think that, a few years down the road, building a computer comparable to the one you might have bought in 2024 is going to cost more than it did in 2024. I think that people will have PCs.

    But those PCs might be running software that does an increasing amount of its parallel compute in the cloud as the years go by.


  • GitHub explicitly asked Homebrew to stop using shallow clones. Updating them was “an extremely expensive operation” due to the tree layout and traffic of homebrew-core and homebrew-cask.

    I’m not going through the PR to understand what’s breaking, since it’s not immediately apparent from a quick skim. But here are three possible problems, based on what people are mentioning there.

    The problem is the cost of the shallow clone

    Assuming that the workload here is always --depth=1, that they aren’t committing at a high rate relative to clones, and that serving the shallow clone is what’s expensive for git, I feel like a better solution for GitHub would be some patch to git that lets it cache a depth-1 shallow clone for a given hashref.
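
    Concretely, the operation in question looks something like this, using homebrew-core as the example repository:

        # a depth-1 shallow clone of homebrew-core, plus a later update of it;
        # updating shallow clones like this is what was described as an
        # extremely expensive operation to serve
        git clone --depth=1 https://github.com/Homebrew/homebrew-core.git
        cd homebrew-core
        git fetch origin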

    The problem is the cost of unshallowing the shallow clone

    If the actual problem isn’t the shallow clone itself (that is, a regular clone would be fine, but unshallowing is a problem), then a patch to git that allows more-efficient unshallowing would be a better solution. I mean, I’d think that unshallowing should only need a time-ordered index of the commits and referenced blobs up to a given point. That shouldn’t be that expensive an index for git to maintain, if it doesn’t already have one.
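
    For reference, the unshallowing operation itself is just:

        # convert an existing shallow clone into a full one by fetching the
        # rest of the history
        git fetch --unshallow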

    The problem is that Homebrew has users repeatedly unshallowing a clone off GitHub and then blowing it away and repeating

    If the problem is that people keep repeatedly doing a clone off GitHub — that is, a regular, non-shallow clone would also be problematic — I’d think that a better solution would be to have Homebrew do a local bare clone as a cache, then just fetch into that cache and use it as a reference to create the new clone. If Homebrew uses the fresh clone as read-only and the cache can be relied upon to remain, then they could use --reference alone. If not, then add --dissociate. I’d think that that’d lead to better performance anyway.
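
    A rough sketch of what that could look like; the cache path is just illustrative:

        # one-time: create a bare local cache of the repository
        git clone --bare https://github.com/Homebrew/homebrew-core.git ~/.cache/homebrew-core.git

        # on each update: refresh the cache's branches instead of re-cloning from GitHub
        git -C ~/.cache/homebrew-core.git fetch origin '+refs/heads/*:refs/heads/*'

        # create the working clone against the cache; --dissociate copies the
        # borrowed objects so the new clone doesn't depend on the cache sticking around
        git clone --reference ~/.cache/homebrew-core.git --dissociate \
            https://github.com/Homebrew/homebrew-core.git homebrew-core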

    I was about to say that I knew that COVID-19 caused video game sales to surge and then crash, and that there had been over-hiring in response to those sales, but a third seems like an insanely high number.

    Looking at WP, it sounds like the surge was actually that high…but for mobile OS games and console games, not PC, where the surge was much more muted. I also hadn’t realized that mobile OS video game spending had become that much larger than PC spending.

    https://en.wikipedia.org/wiki/2022–2025_video_game_industry_layoffs

    The COVID-19 pandemic led to an increase in interest in gaming globally, and was a period of dramatic expansion in the industry, with many mergers and acquisitions conducted. In many cases companies over-expanded, as this rapid COVID-era growth was unsustainable. The industry began to slow in 2022, and amid spiralling costs and a shift in consumer habits, layoffs began.

    The first few months of the COVID-19 pandemic brought about a sharp increase in revenue for the gaming sector worldwide as people looked for indoor entertainment.[56] According to IDC, in 2020, revenue from mobile games climbed by 32.8% to $99.9 billion, while expenditure on digital PC and Mac games increased by 7.4% to $35.6 billion.[57] The amount spent on home console games increased significantly as well, reaching $42.9 billion, up 33.9%.[58][59]

    In the ensuing years, this growing pattern abruptly stopped.[60] Revenue growth from mobile gaming fell by 15% in 2021, and then fell even further in 2022 and 2023, to -3.3% and -3.1%, respectively. Sales of PC and Mac games saw a brief rise of 8.7% in 2021, a drop of 1.4% in 2022, and a rebound of 2.1% in 2023.[61] Similarly, after a surge in 2020, console game spending plateaued in 2021 with growth at 0.7%, followed by a decline of 3.4% in 2022, before returning to growth at 5.9% in 2023.[59][62]

    EDIT: Based on those numbers, the surge in mobile and console sales combined was basically equivalent in value to the entirety of PC video game sales. It’s like the equivalent of the entire PC video gaming industry materialized, existed for a few years, and then disappeared.


  • I mean, human environments are intrinsically made for humanoids to navigate. Like, okay, we put stairs places, things like that. So in theory, yeah, a humanoid form makes sense if you want to stick robots in a human environment.

    But in practice, I think that there are all kinds of problems to be solved with humans and robots interacting in the same space and getting robots to do human things. Even just basic safety stuff, much less being able to reasonably do general interactions in a human environment. Tesla spent a long time on FSD for its vehicles, and that’s a much-more-limited-scope problem.

    Like, humanoid robots have been a thing in sci-fi for a long time, but I’m not sold that they’re a great near-term solution.

    If you ever look at those Boston Dynamics demos, you’ll note that they do them in a (rather-scuffed-up) lab with safety glass and barriers and all that.

    I’m not saying that it’s not possible to make a viable humanoid robot at some point. But I don’t think that the kind of thing that Musk has claimed it’ll be useful for:

    “It’ll do anything you want,” Musk said. “It can be a teacher, babysit your kids; it can walk your dog, mow your lawn, get the groceries; just be your friend, serve drinks. Whatever you can think of, it will do.”

    …a sort of Rosie The Robot from The Jetsons, is likely going to be at all reasonable for quite some time.




  • I just am not sold that there’s enough of a market, not with the current games and current prices.

    There are several different types of HMDs out there. I haven’t seen anyone really break them up into classes, but if I were to take a stab at it:

    • VR gaming goggles. These focus on providing an expansive image that fills the peripheral vision, and cut one off from the world. The Valve Index would be an example.

    • AR goggles. I personally don’t like the term. It’s not that augmented reality isn’t a real thing, but that we don’t really have the software out there to do AR things, and so while theoretically these could be used for augmented reality, that’s not their actual, 2026 use case. But, since the industry uses it, I will. These tend to display an image covering part of one’s visual field which one can see around and maybe through. Xreal’s offerings are an example.

    • HUD glasses. These have a much more limited display, or maybe none at all. These are aimed at letting one record what one is looking at less-obtrusively, maybe throw up notifications from a phone silently, things like the Ray-Ban Meta.

    • Movie-viewers. These things are designed around isolation, but don’t need head-tracking. They may be fine with relatively-low resolution or sharpness. A Royole Moon, for example.

    For me, the most-exciting prospect for HMDs is the idea of a monitor replacement. That is, I’d be most-interested in something that does basically what my existing displays do, but in a lower-power, more-portable, more-private form. If it can also do VR, that’d be frosting on the cake, but I’m really principally interested in something that can be a traditional monitor, but better.

    For me, at least, none of the use cases for the above classes of HMDs are super-compelling.

    For movie-viewing, it just isn’t that often that I feel that I need more isolation than I can already get to watch movies. A computer monitor in a dark room is just fine. I can also put things on a TV screen or a projector that I already have sitting around and generally don’t bother to turn on. If I want to block out outside sound more, I might put on headphones, but I just don’t need more than that. Maybe for someone who is required to be in noisy, bright environments or something, but it just isn’t a real need for me.

    For HUD glasses, I don’t really have a need for more notifications in my field of vision — I don’t need to give my phone a HUD.

    AR could be interesting if the augmented reality software library actually existed, but in 2026, it really doesn’t. Today, AR glasses are mostly used, as best I can tell, as an attempt at a monitor replacement, but the angular pixel density on them is poor compared to conventional displays. Like, in terms of the actual data that I can shove into my eyeballs in the center of my visual field, which is what matters for things like text, I’m better off with conventional monitors in 2026.

    VR gaming could be interesting, but the benefits just aren’t that massive for the games that I play. You get a wider field of view than a traditional display offers, and the ability to use your head as an input for camera control. There are some genres that I think it works well with today, like flight sims. If you were a really serious flight-simmer, I could see it making sense. But most genres just don’t benefit that much from it. Yeah, okay, you can play Tetris Effect: Connected in VR, but it doesn’t really change the game all that much.

    A lot of the VR-enabled titles out there are not (understandably, given the size of the market) really principally aimed at taking advantage of the goggles. You’re basically getting a port of a game probably designed around a keyboard and mouse, with some tradeoffs.

    And for VR, one has to deal with more setup time, software and hardware issues, and the cost. I’m not terribly price sensitive on gaming compared to most, but if I’m getting a peripheral for, oh, say, $1k, I have to ask how seriously I’m going to play any of the games that I’m buying this hardware for. I have a HOTAS system with flight pedals; it mostly just gathers dust, because I don’t play many WW2 flight sims these days, and the flight sims out there today are mostly designed around thumbsticks. I don’t need to accumulate more dust-collectors like that. And with VR the hardware ages out pretty quickly. I can buy a conventional monitor today and it’ll still be more-or-less competitive for most uses probably ten or twenty years down the line. VR goggles? Not so much.

    At least for me, the main things that I think that I’d actually get some good out of VR goggles on:

    • Vertical-orientation games. My current monitors are landscape aspect ratio and don’t support rotating (though I imagine that someone makes a rotating VESA mount pivot, and I could probably use wlr-randr to make Wayland change the display orientation manually; there’s a sketch of that after this list). Some arcade games in the past had something like a 3:4 portrait aspect ratio. If you’re playing one of those, you could maybe get some extra vertical space. But unless I need the resolution or portability, I can likely achieve something like that by just moving my monitor closer while playing such a game.

    • Pinball sims, for the same reason.

    • There are a couple of VR-only games that I’d probably like to play (none very new).

    • Flight sims. I’m not really a super-hardcore flight simmer. But, sure, for WW2 flight sims or something like Elite: Dangerous, it’s probably nice.

    • I’d get a little more immersiveness out of some games that are VR-optional.
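
    The display-rotation workaround mentioned above would look something like this under a wlroots-based Wayland compositor; “DP-1” is just a placeholder for whatever output name wlr-randr reports:

        # list outputs, then rotate one into portrait orientation
        wlr-randr
        wlr-randr --output DP-1 --transform 90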

    But…that’s just not that overwhelming a set of benefits to me.

    Now, I am not everyone. Maybe other people value other things. And I do think that it’s possible to have a “killer app” for VR, some new game that really takes advantage of VR and is so utterly compelling that people feel that they’d just have to get VR goggles so as to not miss out. Something like what World of Warcraft did for MMO gaming, say. But the VR gaming effort has been going on for something like a decade now, and nothing like that has really turned up.



  • Plus, I mean, unless you’re using a Threadiverse host as your home instance, how often are you typing its name?

    Having a hyphen is RFC-conformant:

    RFC 952:

    1. A "name" (Net, Host, Gateway, or Domain name) is a text string up
    to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
    sign (-), and period (.).  Note that periods are only allowed when
    they serve to delimit components of "domain style names". (See
    RFC-921, "Domain Name System Implementation Schedule", for
    background).  No blank or space characters are permitted as part of a
    name. No distinction is made between upper and lower case.  The first
    character must be an alpha character.  The last character must not be
    a minus sign or period.  A host which serves as a GATEWAY should have
    "-GATEWAY" or "-GW" as part of its name.  Hosts which do not serve as
    Internet gateways should not use "-GATEWAY" and "-GW" as part of
    their names. A host which is a TAC should have "-TAC" as the last
    part of its host name, if it is a DoD host.  Single character names
    or nicknames are not allowed.
    

    RFC 1123:

       The syntax of a legal Internet host name was specified in RFC-952
       [DNS:4].  One aspect of host name syntax is hereby changed: the
       restriction on the first character is relaxed to allow either a
       letter or a digit.  Host software MUST support this more liberal
       syntax.
    
       Host software MUST handle host names of up to 63 characters and
       SHOULD handle host names of up to 255 characters.
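
    So a hyphenated name is fine as far as the RFCs go. For a quick sanity check, here’s a rough sketch using an extended regex that approximates those label rules (letter or digit first, hyphens only in the middle, no trailing hyphen, labels up to 63 characters); “lemmy-today.example” is just a made-up name:

        # prints the name only if it matches the approximate RFC 1123 host name syntax
        echo "lemmy-today.example" | grep -E \
            '^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?(\.[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?)*$'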