Generative “AI” data centers are gobbling up trillions of dollars in capital, not to mention heating up the planet like a microwave. As a result, there’s a capacity crunch in memory production that has sent RAM prices sky-high: up more than 100 percent in the last few months alone. Some stores are so tired of adjusting prices day to day that they won’t even display them; you find out how much it costs at checkout.

  • SabinStargem@lemmy.today · +14 · 8 hours ago

    I think that in the long run, the RAM shortage will turn into a glut of much faster and larger DDR5 RAM sticks. Provided you can wait for the transition to AM6, an AM5 endgame system will have pretty good RAM.

    • BakerBagel@midwest.social · +12 · 8 hours ago

      They are going to pivot all that processing to the next snake oil scheme. Do you think it’s a coincidence that the AI hype came immediately after crypto crashed?

      • SabinStargem@lemmy.today · +4/−14 · 8 hours ago

        I view AI as being like the printing press: it is good for the everyman… if that everyman is willing to own and make use of it. By ceding AI to oligarchs, society would be giving the 1% more tools to do as they please, while denying the public any effective use of them.

        The answer isn’t to reject AI, but to fund publicly developed and owned AI. Every minority who has 95% of Disney’s legal acumen in their pocket will be able to resist Kavenaugh Stops in court more effectively. An AI can scour the web, spot discounted goods that a person actually wants, and create a shopping list that is cheap and convenient. People can have a competent teacher even if their rural household lacks a school. All these things lend a little extra agency to ordinary people.

        My point is that we shouldn’t refuse tools. Instead, we should adopt them on OUR terms, not the techbros’.

        • xcjs@programming.dev · +2 · 2 hours ago

          I have a similar perspective. I built my own in-home AI server because I figured that if the technology had any staying power, I had better learn how it works to some degree and see if I can run it myself.
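          For anyone curious what “running it myself” can look like, here is a minimal sketch of querying a self-hosted model over the Ollama HTTP API. It assumes an Ollama server listening on localhost:11434 with a model already pulled; the model name "llama3" and the helper names are illustrative choices, not anything the commenter described.

```python
# Sketch: query a locally hosted model via Ollama's /api/generate endpoint.
# Assumes an Ollama server on localhost:11434 and a pulled model ("llama3"
# here is just an example name).
import json
import urllib.request

def build_generate_request(prompt, model="llama3"):
    # Ollama's /api/generate takes a JSON body; stream=False asks for one
    # complete JSON response instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt, host="http://localhost:11434"):
    body = json.dumps(build_generate_request(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

          Nothing here depends on the cloud: the same two functions work against any box on your LAN that runs the server, which is the whole point of self-hosting.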

        • Trainguyrom@reddthat.com · +5 · 5 hours ago

          This assumes machine learning models are able to get better than they currently are. Newer models have been plateauing in output quality (improvements have been noticeable in video and image generation, but even those are slowing down).

          I don’t think we’re going to see machine learning models that perform well enough to create printing-press-level change in the world.

        • buddascrayon@lemmy.world · +19/−1 · edited · 7 hours ago

          LOL, LLMs aren’t capable of being “competent” at anything. Not law, not teaching, not even coding. They are pure garbage at nearly everything they are applied to. Yes, some of these things have had limited success at finding patterns in the noise, but those successes are grossly outweighed by their absolute failure at everything else.

          https://tech.co/news/list-ai-failures-mistakes-errors

          https://www.allaboutai.com/resources/ai-statistics/ai-bias/

          https://www.independent.co.uk/news/uk/home-news/ai-factual-errors-chatgpt-gemini-copilot-b2867620.html

          • SabinStargem@lemmy.today · +7/−12 · edited · 6 hours ago

            AI is a technology, and like any technology, it improves. The AI we had two years ago was something akin to the Wright Flyer; the ones we have now are the equivalent of a biplane. Those early machines weren’t very useful, but the planes that followed were far more capable and economical.

            Your assertion that AI is useless is merely burying your head in the sand and hoping things will turn out alright. The outright refusal of AI by people like you only ensures that the most evil people can use it. This is like only allowing Nazis to own guns, peasants not being allowed to own land, or newspapers being owned only by the wealthiest.

            It is power that you are giving up, and power doesn’t care about who has it.

            • drosophila@lemmy.blahaj.zone · +4 · edited · 1 hour ago

              Hallucinations are an intrinsic part of how LLMs work. OpenAI, literally the people with the most to lose if LLMs aren’t useful, has admitted that hallucinations are a mathematical inevitability, not something that can be engineered around. On top of that, it’s been shown that for things like mathematical proof finding, switching to more sophisticated models doesn’t make them more accurate; it just makes their arguments more convincing.

              Now, you might say “oh, but you can have a human in the loop to check the AI’s work,” but for programming tasks it’s already been found that using LLMs makes programmers less productive. If a human needs to go over everything an AI generates and reason about it anyway, that’s not really saving time or effort. Now consider that as you make the LLM more complex, having it generate longer and more complicated blocks of text, its errors also become harder to detect. Is that not just shuffling around the human brainpower a task needs, rather than reducing it?

              So, in what field is this sort of thing useful? At one point I was hopeful that LLMs could be used in text summarization, but if I have to read the original text anyway to make sure that I haven’t been fed some highly convincing falsehood then what is the point?

              Currently I’m of the opinion that we might be able to use specialized LLMs as a heuristic to narrow the search tree for things like SAT solvers and answer set generators, but I don’t have much optimism for other use cases.
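              The heuristic idea above can be sketched concretely: a toy DPLL-style SAT solver with a pluggable branching heuristic, which is the hook where a learned score could slot in. Everything here is illustrative (the frequency-count stand-in for the model, the integer-literal clause encoding); a real system would replace frequency_heuristic with a call into the specialized model, and correctness would still be guaranteed by the solver, not the heuristic.

```python
# Sketch: a tiny DPLL SAT solver with a pluggable branching heuristic.
# Clauses are lists of nonzero ints: 3 means x3, -3 means NOT x3.
# The heuristic only picks which literal to branch on; a bad heuristic
# makes the search slower, never wrong.

def simplify(clauses, lit):
    # Apply an assignment: drop satisfied clauses, shrink the rest.
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None                   # empty clause: conflict
        out.append(reduced)
    return out

def frequency_heuristic(clauses):
    # Stand-in for a learned score: branch on the most frequent literal.
    counts = {}
    for c in clauses:
        for l in c:
            counts[l] = counts.get(l, 0) + 1
    return max(counts, key=counts.get)

def dpll(clauses, assignment=None, heuristic=frequency_heuristic):
    assignment = assignment or {}
    if clauses is None:
        return None                       # conflict on the way in
    if not clauses:
        return assignment                 # all clauses satisfied
    for c in clauses:                     # unit propagation
        if len(c) == 1:
            lit = c[0]
            return dpll(simplify(clauses, lit),
                        {**assignment, abs(lit): lit > 0}, heuristic)
    lit = heuristic(clauses)              # branch: the heuristic hook
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice),
                      {**assignment, abs(choice): choice > 0}, heuristic)
        if result is not None:
            return result
    return None                           # unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
```

              Because backtracking rechecks every branch, the model’s hallucinations cost only wasted search time here, which is exactly why solver-guidance feels like a safer niche for LLMs than open-ended generation.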