• danhab99@programming.dev
    link
    fedilink
    English
    arrow-up
    51
    ·
    6 days ago

    I feel like literally everybody knew it was a bubble when it started expanding and everyone just kept pumping into it.

How many tech bubbles do we have to go through before we learn our lesson?

    • sibachian@lemmy.ml
      link
      fedilink
      English
      arrow-up
      43
      ·
      6 days ago

      what lesson? it’s a ponzi scheme and whoever is the last holding the bag is the only one losing.

      • 123@programming.dev
        link
        fedilink
        English
        arrow-up
        9
        ·
        6 days ago

Plus everyone else who pays taxes, as they will have to continue to pay for unemployment insurance, food stamps, rent assistance, etc. (not the CEOs and execs who caused it, that’s for sure).

      • squaresinger@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        6 days ago

        And that’s why it’s being done. Everyone hopes that they make it out at just the right time to make millions while the greater fools who join too late are left holding the bag.

Bubbles are great. For those who make it out in time. They suck for everyone else, including the taxpayers who might have to bail out companies and investors.

        Always following the doctrine of privatizing profits and socializing losses.

    • belit_deg@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      ·
      edit-2
      6 days ago

I get that people who sell AI services want to promote it. That part is obvious.

What I don’t get is how gullible the rest of society at large is. Take the Norwegian digitalization minister, who says that 80% of the public sector shall use AI. Whatever that means.

Or building a gigantic fuckoff OpenAI data centre, instead of new industry https://openai.com/nb-NO/index/introducing-stargate-norway/

Jared Diamond had a great take on this in “Collapse”: there are countless examples of societies making awful decisions because the decision-makers are insulated from the consequences. On the contrary, they get short-term gains.

      • Saleh@feddit.org
        link
        fedilink
        English
        arrow-up
        3
        ·
        6 days ago

        We know that our current way of economic growth and consistent new “inventions” is destroying the basis of our life. We know that the only way to stop is to fundamentally redesign the social system, moving away from capitalism, growth economics and ever new gadgets.

But facing this is difficult. Facing this and winning elections with it is even more difficult. Instead, claiming there is some wonder technology that will save us all and putting all the eggs in that basket is much easier. It will fail inevitably, but until then it is easier.

    • HugeNerd@lemmy.ca
      link
      fedilink
      English
      arrow-up
      2
      ·
      6 days ago

      Never. Some people think the universe owes us Star Trek and are just waiting for something new to happen.

    • Tollana1234567@lemmy.today
      link
      fedilink
      English
      arrow-up
      1
      ·
      6 days ago

The CEOs, C-suites, and some people trying to get into the CS field are the ones who believe in it. I know a person who already has a degree and still thinks it’s wise to pursue a grad degree in a field adjacent to or directly involving AI.

      • nagaram@startrek.website
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 days ago

A grad course in AI/LLM/ML might actually be useful. It’s where my old roommates learned about Google’s Transformers and got into LLMs before the hype bubble, back in 2018.

They might get ahead of the curve for the next over-inflated hype bubble, proceed to make garbage loads of unearned money, and come away having learned something other than how to put ChatGPT in a new wrapper.

  • Tattorack@lemmy.world
    link
    fedilink
    English
    arrow-up
    72
    ·
    edit-2
    7 days ago

    SSSSIIIIIIIGGGGGGHHHHHHHHHHH…

    Looks like I’ll have to prepare for yet another once-in-a-lifetime economic collapse.

  • HugeNerd@lemmy.ca
    link
    fedilink
    English
    arrow-up
    17
    ·
    6 days ago

    You don’t believe in the quantum block chain 3D printed AI cloud future mining asteroids for the private Mars colony (yet with no life extension)?

    Luddite.

    • vacuumflower@lemmy.sdf.org
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      4
      ·
      6 days ago

Quantum was popular as “oh god, our cryptography will die, what are we going to do”. Now post-quantum cryptography exists, and it isn’t clear what else quantum computers are useful for, other than PR.

Blockchain was popular when the supply of cryptocurrencies was kinda small; now there are too many of them. And its actually useful applications require having offline power to make decisions. Go on, tell politicians in any country that you want the electoral system exposed and blockchain-based to avoid falsification. LOL. They are not stupid. If you have a safe electoral system, you can do much more direct democracy. Except blockchain seems a bit of an overkill for it.

      3D printing is still kinda cool, except it’s just one tool among others. It’s widely used to prototype combat drones and their ammunition. The future is here, you just don’t see it.

Cloud - well, bandwidth allowed for it and it’s good for companies, so they advertised it. Except even in the richest countries Internet connectivity is not a given, and at some point the wow effect is defeated by convenience. It’s just less convenient to use cloud stuff, except for things that don’t make sense without it, like temporary collaboration on a shared document.

“AI” - they’ve run out of stupid things to do with computers, so now they are promising the ultimate stupid thing. They don’t want smart things; smart things are smart because they change the world, killing monopolies and oligopolies along the way.

      • HugeNerd@lemmy.ca
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        6 days ago

        No room-temperature superconductor fusion reactors, space-based solar, or private space mining? Luddite.

        • vacuumflower@lemmy.sdf.org
          link
          fedilink
          English
          arrow-up
          2
          ·
          6 days ago

#1 is like making tactical-nuke tech available to all civilians; #2 would make sense if the whole production line and the consumers were in space too; #3 would make sense as part of the same.

Earth’s gravity well is a bitch. We live in it. Sending stuff up is expensive, sending stuff down is stupid when it’s needed up there, and without some critical, complete piece of civilization to send up at once, you’ll have to keep sending stuff up all the time.

It’s too expensive, and the profits are transcendent, as in “ideological achievement” and “because we can”. Also, they may eventually start sending nukes down.

Thus it all makes sense only when we can build and equip an autonomous colony and send it at once: self-reliant, on the condition that it can get the materials it needs wherever it is sent.

I suggest somewhere with gravity, though: Europa, Ganymede, or Enceladus. Something like that.

          • HugeNerd@lemmy.ca
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            3
            ·
            6 days ago

            Are you a Space Nutter?

            It’s not going to happen. No one is going to move to space or send nukes down or mine asteroids.

            Ever.

            • BananaIsABerry@lemmy.zip
              link
              fedilink
              English
              arrow-up
              2
              ·
              6 days ago

              Are you a round earth nutter?

              It’s not going to happen. No one is going to get past the edge of the world or sail the whole world or find new land.

              Ever.

              • HugeNerd@lemmy.ca
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                2
                ·
                6 days ago

                If you don’t see how that’s a completely dumb comparison, this is hopeless. I’m reality-based, you are not.

                • BananaIsABerry@lemmy.zip
                  link
                  fedilink
                  English
                  arrow-up
                  4
                  ·
                  6 days ago

                  Sure, friend. You can see reality thousands of years into the future and know exactly what happens.

                  My bad.

                • vacuumflower@lemmy.sdf.org
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  6 days ago

                  I disagree. It just won’t be fancy. It has to be an enormous project with existential risks. And you have to really send many people at once with no return ticket. “At once” is important, you can’t ramp it up, that’s far more expensive. It has to be a mission very deeply planned in detail with plenty of failsafe paths, aimed at building a colony that can be maintained with Earth’s teaching resources, technologies and expertise, and locally produced and processed materials for everything. So - something like that won’t happen anytime soon, but at some point it will happen.

                  The technologies necessary have to be perfected first, computing should stop being the main tool for hype, and the societies should adapt culturally for computing and worldwide connectivity.

                  These take centuries. In those centuries we’ll be busy with plenty of things existential, like avoiding the planet turning into one big 70s Cambodia.

      • HereIAm@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 days ago

Quantum computing has incredible value as a scientific tool. What are you talking about?

  • yarr@feddit.nl
    link
    fedilink
    English
    arrow-up
    32
    arrow-down
    2
    ·
    7 days ago

    Everyone knows a bubble is a firm foundation to build upon. Now that Trump is back in office and all our American factories are busy cranking out domestic products I can finally be excited about the future again!

    I predict that in a year this bubble will be at least twice as big!

      • Dogiedog64@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        7 days ago

        Yup. If you have money you can AFFORD TO BURN, go ahead and short to your heart’s content. Otherwise, stay clear and hedge your bets.

    • whyrat@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      7 days ago

The question is when, not if. But guessing the “when” wrong is expensive. I believe the famous idiom is: the market can stay irrational longer than you can stay solvent.

      Best of luck!

  • Vinstaal0@feddit.nl
    link
    fedilink
    English
    arrow-up
    10
    ·
    7 days ago

Not only the tech bubble is doing that.

The pyramid scheme of the US housing sector will cause more financial issues as well, and so will the whole credit card system.

  • Dr. Moose@lemmy.world
    link
    fedilink
    English
    arrow-up
    15
    arrow-down
    15
    ·
    edit-2
    7 days ago

Willing to take a real-money bet that the bubble is not going to pop, despite Lemmy’s obsession here. The value is absolutely inflated, but it’s definitely real value, and LLMs are not going to disappear unless we create a better AI technology.

In general, we’re way past the point of tech bubbles popping. Software markets move incredibly fast and are incredibly resilient to this. There literally hasn’t been a software bubble pop since the dotcom boom. Prove me wrong.

Even if you see problems with LLMs and AI in general, this hopeful doomerism is really not helping anyone. Now, instead of spending effort on improving things, people are these angry, passive, delusional accelerationists without any self-awareness.

    • SwingingTheLamp@midwest.social
      link
      fedilink
      English
      arrow-up
      19
      arrow-down
      1
      ·
      7 days ago

      I get the thinking here, but past bubbles (dot com, housing) were also based on things that have real value, and the bubble still popped. A bubble, definitionally, is when something is priced far above its value, and the “pop” is when prices quickly fall. It’s the fall that hurts; the asset/technology doesn’t lose its underlying value.

    • Encrypt-Keeper@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      7 days ago

I mean, we haven’t figured out how to make AI profitable yet, and though it’s a cool technology with real-world use cases, nobody has proven that the juice is worth the squeeze. There’s an unimaginable amount of money tied up in this technology on the hope that one day someone will find a way to make it profitable, and though AI as a technology “improves”, its journey toward providing more value than it costs to run is not progressing.

If I roleplayed as somebody who desperately wanted AI to succeed, my first question would be “What is the plan to have AI make money?” And so far nobody, not even the technology’s biggest sycophants, has an answer.

        • Encrypt-Keeper@lemmy.world
          link
          fedilink
          English
          arrow-up
          11
          ·
          edit-2
          7 days ago

AI as a technology is so far not profitable for anybody. The hardware AI runs on is profitable, as might be some startups that are heavily leveraging AI, but actually operating AI is not. And because increasingly small improvements in AI use exponentially more power, there’s no real path visible to any of us today that suggests anyone has found a route to profitability. Aside from some kind of miracle out of left field that no one today has even conceived, the long-term outlook isn’t great.

          If AI as a technology busts, so does the insane profits behind the hardware it runs on. And without that left field technological breakthrough, the only option to pursue to avoid AI going completely bust is to raise prices astronomically, which would bust any companies currently dependent on all the AI currently being provided to them for basically next to nothing.

          The entire industry is operating at a loss, but is being propped up by the currently abstract idea that AI will some day make money. This isn’t the “AI Hater” viewpoint, it’s just the spot AI is currently in. If you think AI is here to stay, you’re placing a bet on a promise that nobody as of today can actually make.

            • Encrypt-Keeper@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              ·
              edit-2
              6 days ago

Delusion? Ok, let’s get it straight from the horse’s mouth then. I asked ChatGPT whether OpenAI is profitable and to explain its financial outlook. What you see below, emphasis and emojis included, was generated by ChatGPT:

              —ChatGPT—

              OpenAI is not currently profitable. Despite its rapid growth, the company continues to operate at a substantial loss.

              📊 Financial Snapshot

              • Annual recurring revenue (ARR) was reported at approximately $12 billion as of July 2025, implying around $1 billion per month in revenue.

              • Projected total revenue for 2025 is $12.7 billion, up from roughly $3.7 billion in 2024.

• However, OpenAI’s cash burn has increased, with projected operational losses of around $8 billion in 2025 alone.

              —end ChatGPT—

The most favorable projections are that OpenAI will not be cash positive (that means making a single dollar in profit) until it reaches $129 billion in revenue. That means OpenAI has to more than 10x its annual revenue to finally be profitable. And its current strategy to make more money is to expand its infrastructure to take on more customers and run more powerful systems. The problem is, the models require substantially more power to make moderate gains in accuracy and capability, and every new AI datacenter means more land cost, engineers, water, and electricity.

Compounding the issue, the more electricity they use, the more it costs. NJ has paved the way for a number of huge new AI datacenters in the past few years, and the cost of electricity in the state has skyrocketed. People have seen their monthly electric bills rise 50-150% in the last couple of months alone. That’s forcing people out of their homes, and it eats substantially into revenue growth for data centers. It’s quite literally a race for AI companies to reach profitability before hitting the natural limits of the resources they require to expand. And I haven’t heard a peep about how they expect to do so.
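Those projections can be sanity-checked with back-of-the-envelope arithmetic. This is a rough sketch using the figures quoted above (reported projections, not audited financials):

```python
# Back-of-the-envelope check of the projections quoted above.
# All figures are reported/projected estimates, not audited numbers.
revenue_2025 = 12.7e9       # projected 2025 revenue, USD
loss_2025 = 8.0e9           # projected 2025 operational loss, USD
breakeven_revenue = 129e9   # reported revenue needed to turn cash positive, USD

growth_multiple = breakeven_revenue / revenue_2025
total_spend = revenue_2025 + loss_2025

print(f"Revenue must grow roughly {growth_multiple:.1f}x to hit the projected break-even point")
print(f"Implied 2025 spend: ~${total_spend / 1e9:.1f}B against ${revenue_2025 / 1e9:.1f}B of revenue")
```

That works out to a bit over 10x revenue growth needed, with 2025 spending running well above revenue, which is the gap described above.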

              • Dr. Moose@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                5
                ·
                6 days ago

You use one company that is spearheading the entire industry as your example that no AI company is profitable. Either you are arguing in extremely bad faith or you’re incredibly stupid, I’m sorry.

                • Encrypt-Keeper@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  arrow-down
                  1
                  ·
                  edit-2
                  6 days ago

                  Of course I used the company that is the market leader in AI as an example that AI companies are not profitable you donut, that’s how that works.

                  They’re not the only AI company that’s not profitable, like I said none of them are. You can take your pick if you don’t like OpenAI as an example.

        • Frezik@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          5
          ·
          6 days ago

          Who is it profitable for right now? The only ones I see are the ones selling shovels in a gold rush, like Nvidia.

          • Dr. Moose@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            3
            ·
            6 days ago

Every AI software company? So much ignorance in this thread, it’s almost impossible to respond to. LLM queries are super cheap already and very much profitable.

    • WhirlpoolBrewer@lemmings.world
      link
      fedilink
      English
      arrow-up
      10
      ·
      7 days ago

In a capitalist society, what is good or best is irrelevant. All that matters is whether it makes money, and AI makes no money. The $200 and $300/month plans put in rate limits because at those prices they’re losing too much money. Let’s say the break-even cost for a single request is somewhere between $1-$5 depending on the request, just for the electricity, and people can barely afford food, housing, and transportation as it is. What is the business model for these LLMs going to be? A person could get a coffee today, or send a single request to an LLM? Now consider that they’ll need newer GPUs next year. And the year after that. And after that. And the data centers will need maintenance. They’re paying literally millions of dollars to individual programmers.
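To make the coffee comparison concrete, here is a rough sketch of what a subscription would have to cost under that assumed $1-$5 break-even figure. Both the per-request cost and the usage level are the assumptions stated above, not published data:

```python
# Rough subscription cost model under the assumed $1-$5 break-even per request.
# Both the per-request cost and the usage level are illustrative assumptions.
cost_low, cost_high = 1.0, 5.0   # assumed break-even cost per request, USD
requests_per_day = 20            # a fairly modest daily usage level
days_per_month = 30

monthly_low = cost_low * requests_per_day * days_per_month
monthly_high = cost_high * requests_per_day * days_per_month

# Even the low end exceeds the $200-$300/month plans mentioned above,
# which would explain why those plans need rate limits.
print(f"Break-even subscription: ${monthly_low:,.0f}-${monthly_high:,.0f}/month")
```

Under those assumptions, even modest daily use would need a $600-$3,000/month subscription just to cover electricity.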

      Maybe there is a niche market for mega corporations like Google who can afford to spend thousands of dollars a day on LLMs, but most companies won’t be able to afford these tools. Then there is the problem where if the company can afford these tools, do they even need them?

The only business model that makes sense to me is the one BMW uses for its car seat warmers: BMW requires a monthly subscription to use the seat warmers in its cars. LLM makers could charge a monthly subscription to run a micro model on your own device. The free assistant in your Google phone would then be paywalled. That way businesses don’t carry the cost of the electricity, but the LLM will be fairly low-functioning compared to what we get for free today. Still, the business model could work, as long as people don’t just install a free version.

      I don’t buy the idea that “LLMs are good so they are going to be a success”. Not as long as investors want to make money on their investments.

      • bridgeenjoyer@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        3
        ·
        edit-2
        6 days ago

I imagine a dystopia where the main internet has been destroyed and watered down so you can only access it through a giant corpo LLM (ISPs will become LLMSPs). So you choose between watching an AI-generated movie for entertainment or getting a coffee. Because they will destroy the internet any way they can.

Also, they’ll charge more for prompts related to things you like. It’s all there for the plundering, and consumers want it.

      • General_Effort@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 days ago

Let’s say the break-even cost for a single request is somewhere between $1-$5 depending on the request just for the electricity,

        Are you baiting the fine people here?

      • lacaio da inquisição@mander.xyz
        link
        fedilink
        English
        arrow-up
        2
        ·
        7 days ago

I believe that if something has enough value, people are willing to pay for it. And by people here I mean primarily executives. The problem is that AI doesn’t have enough value to sustain the hype.

      • Dr. Moose@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        10
        ·
        7 days ago

people can barely afford food, housing, and transportation as it is.

        Citation needed. The doomerism in this thread is so cringe.

    • GamingChairModel@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      6 days ago

      The value a thing creates is only part of whether the investment into it is worth it.

      It’s entirely possible that all of the money that is going into the AI bubble will create value that will ultimately benefit someone else, and that those who initially invested in it will have nothing to show for it.

      In the late 90’s, U.S. regulatory reform around telecom prepared everyone for an explosion of investment in hard infrastructure assets around telecommunications: cell phones were starting to become a thing, consumer internet held a ton of promise. So telecom companies started digging trenches and laying fiber, at enormous expense to themselves. Most ended up in bankruptcy, and the actual assets eventually became owned by those who later bought those assets for pennies on the dollar, in bankruptcy auctions.

      Some companies owned fiber routes that they didn’t even bother using, and in the early 2000’s there was a shitload of dark fiber scattered throughout the United States. Eventually the bandwidth needs of near universal broadband gave that old fiber some use. But the companies that built it had already collapsed.

If today’s AI companies can’t actually turn a profit, they’re going to be forced to sell off their expensive assets at some point. Maybe someone else can make money with them. But the life cycle of this tech is much shorter than that of the telecom infrastructure I was describing earlier, so a stale LLM might very well become worthless within years. Or it’s only a stepping stone toward a distilled model that costs a fraction to run.

      So as an investment case, I’m not seeing a compelling case for investing in AI today. Even if you agree that it will provide value, it doesn’t make sense to invest $10 to get $1 of value.

      • Tollana1234567@lemmy.today
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        6 days ago

Didn’t Microsoft already admit their AI isn’t profitable? I suspect that’s why they have been laying off in waves. They are hoping government contracts will stem the bleeding or hold them over, and they found the sucker who will just do it: Trump. I wonder if Palantir is suffering too; surely their AI isn’t as useful to the military as they claim.

    • chobeat@lemmy.mlOP
      link
      fedilink
      English
      arrow-up
      6
      ·
      7 days ago

there’s an argument that this is just the targeted-ads bubble that keeps inflating using different technologies. That’s where the money is coming from. It’s a game of smoke and mirrors, but this time they seem to be betting big on a single technology for a longer time, which is different from what we have seen in the past 10 years.

    • Frezik@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      2
      ·
      6 days ago

      LLMs can absolutely disappear as a mass market technology. They will always exist in some sense as long as there are computers to run them and people who care to try, but the way our economy has incorporated them is completely unsustainable. No business model has emerged that can support them, and at this point, I’m willing to say that there is no such business model without orders of magnitude gains in efficiency that may not ever happen with LLMs.

    • squaresinger@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      6 days ago

Dotcom was a bubble too, and it popped hard with huge fallout, even though the internet didn’t disappear and still was, and is, a revolutionary thing that changed how we live our lives.

      Overvalued doesn’t mean the thing has no value.

    • shalafi@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      1
      ·
      6 days ago

      Sort of agreed. I disagree with the people around here acting like AI will crash and burn, never to be seen again. It’s here to stay.

      I do think this is a bubble and will pop hard. Too many players in the game, most are going to lose, but the survivors will be rich and powerful beyond imagining.

  • Xulai@mander.xyz
    link
    fedilink
    English
    arrow-up
    132
    arrow-down
    2
    ·
    7 days ago

    As someone who works with integrating AI- it’s failing badly.

At best, it’s good for transcription, at least until it hallucinates and adds things to your medical record that don’t exist. Which it does. And when providers don’t check for errors, which few do regularly, congrats: you now have a medical record of whatever it hallucinated today.

    And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.

    They can’t consistently do anything more complex without making errors- and most people are frankly too dumb or lazy to properly verify outputs. And that’s why this bubble is so huge.

    It is going to pop, messily.

    • Laser@feddit.org
      link
      fedilink
      English
      arrow-up
      60
      ·
      7 days ago

      and most people are frankly too dumb or lazy to properly verify outputs.

This is my main argument: I need to check the output for correctness anyway, so I might as well just do the work myself in the first place.

      • GhostTheToast@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        6 days ago

        Honestly I mostly use it as a jumping off point for my code or to help me sound more coherent when writing emails.

      • mrvictory1@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        arrow-down
        1
        ·
        6 days ago

This is exactly why I love DuckDuckGo’s AI results built into search. It appears when it is relevant (and yes, you can nuke it from orbit so it never ever appears), and it always gives citations (two websites), so I can go check whether it is right. Sometimes it works wonders when regular search results are not relevant. Sometimes it fails hard. I can distinguish one from the other because I can always check the sources.

    • rhombus@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      33
      ·
      7 days ago

      And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.

This is what drives me nuts the most about it. We had so many incredibly efficient, purpose-built tools using the same technologies (machine learning and neural networks), and we threw them away in favor of wildly inefficient, general-purpose LLMs that can’t do a single thing right. All because marketing hype convinced billionaires they won’t need to pay people anymore.

    • hansolo@lemmy.today
      link
      fedilink
      English
      arrow-up
      27
      ·
      7 days ago

      This 1 million%.

      The fact that coding is a big corner of the use cases means that the tech sector is essentially high on their own supply.

Summarizing and aggregating data alone isn’t a substitute for the smoke-and-mirrors confidence of a consulting firm. It just lets the ones that can lean on branding charge more hours for the same output, and adds “integrating AI” as another bucket of vomit to fling.

    • OctopusNemeses@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      ·
      7 days ago

      I tried having it identify an unknown integrated circuit. It hallucinated a chip. It kept giving me non-existent datasheets and 404 links to digikey/mouser/etc.

    • vacuumflower@lemmy.sdf.org
      link
      fedilink
      English
      arrow-up
      1
      ·
      7 days ago

Well, from this description it’s still usable for problems too complex to just brute-force with Monte Carlo, as long as the results can be verified. It may even be efficient. But that seems narrow.

BTW, even ethical automated combat drones. I know one word there seems out of place, but if we have an “AI” for target/trajectory/action suggestion, but something more complex/expensive for verification, ultimately with a human in charge, then it’s possible to both increase the efficiency of combat machines and not increase the chances of civilian casualties and friendly fire (when somebody is at least trying to avoid those).

      • pinball_wizard@lemmy.zip
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        6 days ago

        it’s possible to both increase efficiency of combat machines and not increase the chances of civilian casualties and friendly fire (when somebody is at least trying to not have those).

        But how does this work help next quarter’s profits?

        • vacuumflower@lemmy.sdf.org
          link
          fedilink
          English
          arrow-up
          2
          ·
          6 days ago

If each unplanned death that wasn’t the result of an operator’s mistake led to confiscation of one month’s profit (not margin), then I’d think it would help very much.

    • frog_brawler@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      1
      ·
      edit-2
      6 days ago

      If you want to define “failing” as unable to do everything correctly, then sure, I’d concur.

      However, if you want to define “failing” as replacing people in their jobs, I’d disagree. It’s doing that, even though it’s not meeting the criteria to pass the first test.

    • Dr. Moose@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      9
      ·
      edit-2
      7 days ago

      As someone who is actually an AI tool developer (I just use existing models) - it’s absolutely NOT failing.

      Lemmy is ironically incredibly tech illiterate.

      It can be working and good and still be a bubble - you know that right? A lot of AI is overvalued but to say it’s “failing badly” is absurd and really helps absolutely no one.

      • pinball_wizard@lemmy.zip
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        1
        ·
        6 days ago

        Lemmy is ironically incredibly tech illiterate

        I disagree with all these self-hosting, Linux-running, passionate open source advocates, so they must be technology illiterate.

        • Dr. Moose@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          3
          ·
          edit-2
          6 days ago

          According to whom? No one’s running their own instance here. I’m a software dev with over 20 years of FOSS experience, and imo Lemmy’s user base is a somewhat illiterate bunch of contrarians when it comes to popular tech discussions.

          We’re clearly not going to agree here without objective data so unless you’re willing to provide that have a good day, bye.

  • belit_deg@lemmy.world
    link
    fedilink
    English
    arrow-up
    61
    ·
    7 days ago

    If I were China, I would be thrilled to hear that the West is building data centres for LLMs, sucking power from the grid, and spending all its attention and money on AI, rather than building better universities and industry. Just sit back and enjoy, while I get ahead in those areas.

    • disco@lemdro.id
      link
      fedilink
      English
      arrow-up
      37
      arrow-down
      1
      ·
      edit-2
      7 days ago

      They’ve been ahead for the past 2 decades. Government is robbing us blind because it only serves multinational corporations or foreign governments. It does not serve the people.

      • vacuumflower@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        18
        arrow-down
        3
        ·
        7 days ago

        They have a demographic pit in front of them, which they themselves created with the one-child policy.

        Also, the CCP doesn’t exactly serve the people either. It’s a hierarchy of (possibly benevolent) bureaucrats.

        • disco@lemdro.id
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          1
          ·
          7 days ago

          I never said they were ahead on social issues. They aren’t and have never been. Their infrastructure shits on ours. Hell look at their healthcare system.

          • TheGrandNagus@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            5 days ago

            The one child policy and the nightmare that will cause is not just a social policy.

            And yes, China’s infrastructure is very, very impressive. However, it’s also true that when everything has been built in the past 30 years, it’s inevitably going to be a lot more efficient and modern than in a country with a lot of legacy baggage. A prime example of that is probably the UK, who are still trying to keep Victorian-era rail infrastructure working. Tearing out old stuff and replacing it is time-consuming, complex, and expensive.