Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 18 Posts
  • 1.11K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • I just am not sold that there’s enough of a market, not with the current games and current prices.

    There are several different types of HMDs out there. I haven’t seen anyone really break them up into classes, but if I were to take a stab at it:

    • VR gaming goggles. These focus on providing an expansive image that fills the peripheral vision, and cut one off from the world. The Valve Index would be an example.

    • AR goggles. I personally don’t like the term. It’s not that augmented reality isn’t a real thing, but that we don’t really have the software out there to do AR things, and so while theoretically these could be used for augmented reality, that’s not their actual, 2026 use case. But, since the industry uses it, I will. These tend to display an image covering part of one’s visual field which one can see around and maybe through. Xreal’s offerings are an example.

    • HUD glasses. These have a much more limited display, or maybe none at all. These are aimed at letting one record what one is looking at less-obtrusively, maybe throw up notifications from a phone silently, things like the Ray-Ban Meta.

    • Movie-viewers. These things are designed around isolation, but don’t need head-tracking. They may be fine with relatively-low resolution or sharpness. A Royole Moon, for example.

    For me, the most-exciting prospect for HMDs is the idea of a monitor replacement. That is, I’d be most-interested in something that does basically what my existing displays do, but in a lower-power, more-portable, more-private form. If it can also do VR, that’d be frosting on the cake, but I’m really principally interested in something that can be a traditional monitor, but better.

    For me, at least, none of the use cases for the above classes of HMDs are super-compelling.

    For movie-viewing, it just isn’t that often that I feel that I need more isolation than I can already get to watch movies. A computer monitor in a dark room is just fine. I can also put things on a TV screen or a projector that I already have sitting around and generally don’t bother to turn on. If I want to block out outside sound more, I might put on headphones, but I just don’t need more than that. Maybe for someone who is required to be in noisy, bright environments or something, but it just isn’t a real need for me.

    For HUD glasses, I don’t really have a need for more notifications in my field of vision — I don’t need to give my phone a HUD.

    AR could be interesting if the augmented reality software library actually existed, but in 2026, it really doesn’t. Today, AR glasses are mostly used, as best I can tell, as an attempt at a monitor replacement, but the angular pixel density on them is poor compared to conventional displays. Like, in terms of the actual data that I can shove into my eyeballs in the center of my visual field, which is what matters for things like text, I’m better off with conventional monitors in 2026.

    VR gaming could be interesting, but the benefits just aren’t that massive for the games that I play. You get a wider field of view than a traditional display offers, and the ability to use your head as an input for camera control. There are some genres that I think it works well with today, like flight sims. If you were a really serious flight-simmer, I could see it making sense. But most genres just don’t benefit that much from it. Yeah, okay, you can play Tetris Effect: Connected in VR, but it doesn’t really change the game all that much.

    A lot of the VR-enabled titles out there are not (understandably, given the size of the market) really principally aimed at taking advantage of the goggles. You’re basically getting a port of a game aimed at probably a keyboard and mouse, with some tradeoffs.

    And for VR, one has to deal with more setup time, software and hardware issues, and the cost. I’m not terribly price sensitive on gaming compared to most, but if I’m getting a peripheral for, oh, say, $1k, I have to ask how seriously I’m going to play any of the games that I’m buying this hardware for. I have a HOTAS system with flight pedals; it mostly just gathers dust, because I don’t play many WW2 flight sims these days, and the flight sims out there today are mostly designed around thumbsticks. I don’t need to accumulate more dust-collectors like that. And with VR the hardware ages out pretty quickly. I can buy a conventional monitor today and it’ll still be more-or-less competitive for most uses probably ten or twenty years down the line. VR goggles? Not so much.

    At least for me, these are the main things that I think I’d actually get some good out of VR goggles for:

    • Vertical-orientation games. My current monitors are landscape aspect ratio, and don’t support rotating (though I imagine that there might be someone who makes a rotating VESA mount pivot, and I could probably use wlr-randr to make Wayland change the display orientation manually; see the sketch after this list). Some arcade games in the past had something like a 3:4 portrait-mode aspect ratio. If you’re playing one of those, you could maybe get some extra vertical space. But unless I need the resolution or portability, I can likely achieve something like that by just moving my monitor closer while playing such a game.

    • Pinball sims, for the same reason.

    • There are a couple of VR-only games that I’d probably like to play (none very new).

    • Flight sims. I’m not really a super-hardcore flight simmer. But, sure, for WW2 flight sims or something like Elite: Dangerous, it’s probably nice.

    • I’d get a little more immersiveness out of some games that are VR-optional.
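
    For what it’s worth, the wlr-randr rotation mentioned in the first bullet is just a pair of one-liners. A sketch, assuming a wlroots-based compositor; “DP-1” is a placeholder for whatever your compositor actually calls the output:

        # Rotate a Wayland output into portrait for a vertically-oriented game,
        # then put it back afterwards.  Run wlr-randr with no arguments to list
        # the real output names on your system.
        wlr-randr --output DP-1 --transform 90
        # ...play the game...
        wlr-randr --output DP-1 --transform normal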

    But…that’s just not that overwhelming a set of benefits to me.

    Now, I am not everyone. Maybe other people value other things. And I do think that it’s possible to have a “killer app” for VR, some new game that really takes advantage of VR and is so utterly compelling that people feel that they’d just have to get VR goggles so as to not miss out. Something like what World of Warcraft did for MMO gaming, say. But the VR gaming effort has been going on for something like a decade now, and nothing like that has really turned up.



  • tal@lemmy.today to Games@lemmy.world · r/Silksong joins lemmy!

    Plus, I mean, unless you’re using a Threadiverse host as your home instance, how often are you typing its name?

    Having a hyphen is RFC-conformant:

    RFC 952:

    1. A "name" (Net, Host, Gateway, or Domain name) is a text string up
    to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
    sign (-), and period (.).  Note that periods are only allowed when
    they serve to delimit components of "domain style names". (See
    RFC-921, "Domain Name System Implementation Schedule", for
    background).  No blank or space characters are permitted as part of a
    name. No distinction is made between upper and lower case.  The first
    character must be an alpha character.  The last character must not be
    a minus sign or period.  A host which serves as a GATEWAY should have
    "-GATEWAY" or "-GW" as part of its name.  Hosts which do not serve as
    Internet gateways should not use "-GATEWAY" and "-GW" as part of
    their names. A host which is a TAC should have "-TAC" as the last
    part of its host name, if it is a DoD host.  Single character names
    or nicknames are not allowed.
    

    RFC 1123:

       The syntax of a legal Internet host name was specified in RFC-952
       [DNS:4].  One aspect of host name syntax is hereby changed: the
       restriction on the first character is relaxed to allow either a
       letter or a digit.  Host software MUST support this more liberal
       syntax.
    
       Host software MUST handle host names of up to 63 characters and
       SHOULD handle host names of up to 255 characters.
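
    So as far as the RFCs go, a label just needs to be letters, digits, and hyphens, starting and ending with a letter or digit. If you wanted to sanity-check one yourself, it boils down to a one-line regex. A quick bash sketch (the function name is mine, it checks a single label rather than a full dotted name, and the 63-character cap is the DNS label limit rather than anything in these two RFCs):

        # Validate one hostname label: letters, digits, and hyphens only;
        # must start and end with a letter or digit (RFC 1123 relaxes the
        # RFC 952 letters-only rule for the first character); at most 63
        # characters, which is the DNS label limit.
        is_valid_label() {
            local label=$1
            (( ${#label} >= 1 && ${#label} <= 63 )) || return 1
            [[ $label =~ ^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$ ]]
        }

        is_valid_label "lemmy-today" && echo "hyphen in the middle: fine"
        is_valid_label "-lemmy"      || echo "leading hyphen: rejected"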
    


  • Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up.

    Eh…I don’t know if you’d call it “really serious hardware”, but when I picked up my 128GB Framework Desktop, it was $2k (without storage), and that box is often described as being aimed at the hobbyist AI market. That’s pricier than most video cards, but an AMD Radeon RX 7900 XTX GPU was north of $1k, an Nvidia RTX 4090 was about $2k, and it looks like the Nvidia RTX 5090 is presently something over $3k (and rising) on eBay, well over MSRP. None of those GPUs are dedicated hardware aimed at doing AI compute, just high-end cards aimed at playing games that people have used to do AI stuff on.

    I think that the largest LLM I’ve run on the Framework Desktop was a 106 billion parameter GLM model at Q4_K_M quantization. It was certainly usable, and I wasn’t trying to squeeze as large a model as possible on the thing. I’m sure that one could run substantially-larger models.

    EDIT: Also, some of the newer LLMs are MoE-based, and for those, it’s not necessarily unreasonable to offload expert layers to main memory. If a particular expert isn’t being used, it doesn’t need to live in VRAM. That relaxes some of the hardware requirements, from needing a ton of VRAM to just needing a fair bit of VRAM plus a ton of main memory.
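
    To make the expert-offload idea concrete, with llama.cpp it looks roughly like the following. This is only a sketch: the model filename is a placeholder, and the exact flag spellings and the tensor-name pattern matching the expert layers vary by version and by model, so treat them as assumptions.

        # Keep the attention and shared layers on the GPU, but push the MoE
        # expert tensors out to system RAM, where capacity is cheaper than VRAM.
        llama-server \
            --model some-moe-model-Q4_K_M.gguf \
            --n-gpu-layers 999 \
            --override-tensor "ffn_.*_exps.*=CPU"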


  • That’s why they have the “Copilot PC” hardware requirement, because they’re using an NPU on the local machine.

    searches

    https://learn.microsoft.com/en-us/windows/ai/npu-devices/

    Copilot+ PCs are a new class of Windows 11 hardware powered by a high-performance Neural Processing Unit (NPU) — a specialized computer chip for AI-intensive processes like real-time translations and image generation—that can perform more than 40 trillion operations per second (TOPS).

    It’s not…terribly beefy. Like, I have a Framework Desktop with an APU and 128GB of memory that schlorps down 120W or something, and it substantially outdoes what you’re going to do on a laptop. And that in turn is weaker computationally than something like the big Nvidia hardware going into datacenters.

    But it is doing local computation.


  • I’m kind of more-sympathetic to Microsoft than to some of the other companies involved.

    Microsoft is trying to leverage the Windows platform that they control to do local LLM use. I’m not at all sure that there’s actually enough memory out there to do that, or that it’s cost-effective to put a ton of memory and compute capacity in everyone’s home rather than time-sharing hardware in datacenters. Nor am I sold that laptops — which many “Copilot PCs” are — are a fantastic place to be doing a lot of heavyweight parallel compute.

    But…from a privacy standpoint, I kind of would like local LLMs to be at least available, even if they aren’t as affordable as cloud-based stuff. And Microsoft is at least supporting that route. A lot of companies are going to be oriented towards just doing AI stuff in the cloud.


  • You only need one piece of (timeless) advice regarding what to look for, really: if it looks too good to be true, it almost certainly is. Caveat emptor.

    I mean…normally, yes, but because the situation has been changing so radically in such a short period of time, it probably is possible to get some bonkers deals in various niches, because the market hasn’t stabilized yet.

    Like, a month and a half back, in early December, when prices had only been going up like crazy for a little while, I was posting some tiny retailers that still had RAM in stock at pre-price-increase rates that I could find on Google Shopping. IIRC the University of Virginia bookstore was one, as they didn’t check that purchasers were actually students. I warned that they’d probably be cleaned out as soon as scalpers got to them, and that if someone wanted memory, they should probably get it ASAP. Some days prior to that, there was a small PC parts store in Hawaii that had some (though that was out of stock by the next time I was looking and mentioned the bookstore).

    That’s not to disagree with the point that @[email protected] is making, that this was awfully sketchy as a source, or your point that scavenging components off even a non-scam piece of secondhand non-functional hardware is risky. But in times of rapid change, it’s not impossible to find deals. In fact, it’s various parties doing so that cause prices to stabilize — anyone selling memory for way below market price is going to have scalpers grab it.


  • I’m not really a hardware person, but purely in terms of logic gates, making a memory circuit isn’t going to be hard. I mean, a lot of chips contain internal memory. I’m sure that anyone that can fabricate a chip can fabricate someone’s memory design that contains some amount of memory.

    For PC use, there’s also going to be some interface hardware. Dunno how much sophistication is present there.

    I’m assuming that the catch is that it’s not trivial to go out and make something competitive with what the PC memory manufacturers are making in price, density, and speed. Like, I don’t think that if you want to get a microcontroller with 32 kB of onboard memory, it’s going to be a problem. But that doesn’t really replace the kind of stuff that these guys are making.

    EDIT: The other big thing to keep in mind is that this is a short-term problem, even if it’s a big problem. I mean, the problem isn’t the supply of memory over the long term. The problem is the supply of memory over the next couple of years. You can’t just build a factory and hire a workforce and get production going the moment that someone decides that they want several times more memory than the world has been producing to date.

    So what’s interesting is really going to be solutions that can produce memory in the near term. Like, I have no doubt that given years of time, someone could set up a new memory manufacturer and facilities. But to get (scaled-up) production in a year, say? Fewer options there.





  • There might be some way to make use of it.

    Linux apparently can use VRAM as a swap target:

    https://wiki.archlinux.org/title/Swap_on_video_RAM

    So you could probably take an Nvidia H200 (141 GB memory) and set it up as a high-priority swap device, say.

    A typical desktop is liable to have problems powering an H200 (600W max TDP), but that’s with all the parallel compute hardware active, and I assume that if all you’re doing is moving stuff in and out of memory, it won’t use much power, same as a typical gaming-oriented GPU.

    That being said, it sounds like the route on the Arch Wiki above is using vramfs, which is a FUSE filesystem, which means that it’s running in userspace rather than kernelspace, which probably means that it will have more overhead than is really necessary.
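
    The recipe on that wiki page is roughly the following. This is a sketch, run as root: the sizes, paths, and priority are placeholders, and it assumes vramfs is built and there’s a working OpenCL driver for the card.

        # Expose some VRAM as a FUSE filesystem, put a file on it, wrap the file
        # in a loop device (swapon can't use a FUSE-backed file directly), and
        # enable it as higher-priority swap than the disk.  The swapfile is kept
        # a bit smaller than the filesystem to leave headroom.
        mkdir -p /tmp/vram
        vramfs /tmp/vram 8G &
        dd if=/dev/zero of=/tmp/vram/swapfile bs=1M count=7168
        loopdev=$(losetup -f --show /tmp/vram/swapfile)
        mkswap "$loopdev"
        swapon -p 100 "$loopdev"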

    EDIT: I think that a lot will come down to where research goes. If it turns out that someone figures out that changing the hardware (having a lot more memory, adding new operations, whatever) dramatically improves performance for AI stuff, I suspect that current hardware might get dumped sooner rather than later as datacenters shift to new hardware. Lot of unknowns there that nobody will really have the answers to yet.

    EDIT2: Apparently someone made a kernel-based implementation for Nvidia cards to use the stuff directly as CPU-addressable memory, not swap.

    https://github.com/magneato/pseudoscopic

    In holography, a pseudoscopic image reverses depth—what was near becomes far, what was far becomes near. This driver performs the same reversal in compute architecture: GPU memory, designed to serve massively parallel workloads, now serves the CPU as directly-addressable system RAM.

    Why? Because sometimes you have 16GB of HBM2 sitting idle while your neural network inference is memory-bound on the CPU side. Because sometimes constraints breed elegance. Because we can.

    Pseudoscopic exposes NVIDIA Tesla/Datacenter GPU VRAM as CPU-addressable memory through Linux’s Heterogeneous Memory Management (HMM) subsystem. Not swap. Not a block device. Actual memory with struct page backing, transparent page migration, and full kernel integration.

    I’d guess that that’ll probably perform substantially better.

    It looks like they presently only target older cards, though.


  • This world is getting dumber and dumber.

    Ehhh…I dunno.

    Go back 20 years and we had similar articles, just about the Web, because it was new to a lot of people then.

    searches

    https://www.belfasttelegraph.co.uk/news/internet-killed-my-daughter/28397087.html

    Internet killed my daughter

    https://archive.ph/pJ8Dw

    Were Simon and Natasha victims of the web?

    https://archive.ph/i9syP

    Predators tell children how to kill themselves

    And before that, I remember video games.

    It happens periodically — something new shows up, and then you’ll have people concerned about any potential harm associated with it.

    https://en.wikipedia.org/wiki/Moral_panic

    A moral panic, also called a social panic, is a widespread feeling of fear that some evil person or thing threatens the values, interests, or well-being of a community or society.[1][2][3] It is “the process of arousing social concern over an issue”,[4] usually elicited by moral entrepreneurs and sensational mass media coverage, and exacerbated by politicians and lawmakers.[1][4] Moral panic can give rise to new laws aimed at controlling the community.[5]

    Stanley Cohen, who developed the term, states that moral panic happens when “a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests”.[6] While the issues identified may be real, the claims “exaggerate the seriousness, extent, typicality and/or inevitability of harm”.[7] Moral panics are now studied in sociology and criminology, media studies, and cultural studies.[2][8] It is often academically considered irrational (see Cohen’s model of moral panic, below).

    Examples of moral panic include the belief in widespread abduction of children by predatory pedophiles[9][10][11] and belief in ritual abuse of women and children by Satanic cults.[12] Some moral panics can become embedded in standard political discourse,[2] which include concepts such as the Red Scare[13] and terrorism.[14]

    Media technologies

    Main article: Media panic

    The advent of any new medium of communication produces anxieties among those who deem themselves as protectors of childhood and culture. Their fears are often based on a lack of knowledge as to the actual capacities or usage of the medium. Moralizing organizations, such as those motivated by religion, commonly advocate censorship, while parents remain concerned.[8][40][41]

    According to media studies professor Kirsten Drotner:[42]

    [E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms.… In some cases, debate of a new medium brings about – indeed changes into – heated, emotional reactions … what may be defined as a media panic.

    Recent manifestations of this kind of development include cyberbullying and sexting.[8]

    I’m not sure that we’re doing better than people in the past did on this sort of thing, but I’m not sure that we’re doing worse, either.


  • tal@lemmy.today to Comic Strips@lemmy.world · *Permanently Deleted*

    https://en.wikipedia.org/wiki/We_Didn't_Start_the_Fire

    “We Didn’t Start the Fire” is a song written by American musician Billy Joel.

    Joel conceived the idea for the song when he had just turned 40. He was in a recording studio and met a 21-year-old friend of Sean Lennon who said “It’s a terrible time to be 21!”. Joel replied: “Yeah, I remember when I was 21 – I thought it was an awful time and we had Vietnam, and y’know, drug problems, and civil rights problems and everything seemed to be awful”. The friend replied: “Yeah, yeah, yeah, but it’s different for you. You were a kid in the fifties and everybody knows that nothing happened in the fifties”. Joel retorted: “Wait a minute, didn’t you hear of the Korean War or the Suez Canal Crisis?” Joel later said those headlines formed the basic framework for the song.[4]

    https://www.youtube.com/watch?v=eFTLKWw542g

    🎵 We didn’t start the fire 🎵
    🎵 It was always burning since the world’s been turning 🎵
    🎵 We didn’t start the fire 🎵
    🎵 No, we didn’t light it, but we tried to fight it 🎵






  • The point I’m making is that bash is optimized for quickly writing throwaway code. It doesn’t matter if the code written blows up in some case other than the one you’re using. You don’t need to handle edge cases that don’t apply to the one time that you will run the code. I write lots of bash code that doesn’t handle a bunch of edge cases, because for my one-off use, that edge case doesn’t arise. Similarly, if an LLM is generating code that misses some edge case, and it’s a situation that will never arise, that may not be a problem.
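
    To make that concrete, the flavor of one-off I have in mind is something like this (a made-up example; the path is a placeholder):

        # One-off: total up the sizes of some log files.  Unquoted expansions,
        # word-splitting on filenames, no error handling -- all "wrong" in
        # general, all perfectly fine for the one directory I'm pointing it at.
        total=0
        for f in $(ls /var/log/someapp/*.log); do
            size=$(stat -c %s $f)
            total=$((total + size))
        done
        echo "total: $total bytes"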

    EDIT: I think maybe that you’re misunderstanding me as saying “all bash code is throwaway”, which isn’t true. I’m just using it as an example where throwaway code is a very common, substantial use case.


  • I don’t know: it’s not just the outputs posing a risk, but also the tools themselves

    Yeah, that’s true. Poisoning the training corpus of models is at least a potential risk. There’s a whole field of AI security stuff out there now aimed at LLM security.

    it shouldn’t require additional tools, checking for such common flaws.

    Well, we are using them today for human programmers, so… :-)