Once you start asking about AI in regard to specific use cases, I think you’ll find that quickly changes.
My company and I have been running a lot of studies around how and where people find value in these tools, and a LOT of people find LLMs useful for copywriting, quick research, data visualization, synthesis, fast prototyping, etc.
There’s a lot of crap that AI is bad at in 2025. Especially the poor in-app integrations that everyone is trying to stand up. But there are a lot of use cases where it does provide a lot of value for people.
Yes, it does, but at the price needed to make it profitable, it’s not desirable.
LLMs are not useless; they serve a purpose. They’re just nowhere near as clever as we expect them to be based on calling them AI. However, nobody is investing billions for an email writing assistant.
Price is essentially zero if you just run it locally.
Yes, but it requires decent hardware and energy to do so. If the cost to host keeps dropping, people will self-host and the AI companies won’t make money. If the cost remains high, the subscriptions won’t provide value and they won’t make money.
I dunno about that… Very small models (2-8B), sure, but if you want more than a handful of tokens per second on a large model (R1 is 671B), you’re looking at some very expensive hardware that also comes with a power bill.
Even a 20-70B model needs a big chunky new graphics card or something fancy like those new AMD AI Max chips and a crapload of RAM.
Granted you don’t need a whole datacenter, but the price is far from zero.
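To put rough numbers on that (my own napkin math, not anything from the thread): weights-only memory is roughly parameter count times bytes per parameter, and real usage adds KV cache and runtime overhead on top of that.

```typescript
// Rough weights-only VRAM estimate for self-hosting an LLM.
// Napkin math for illustration only: actual requirements vary with
// quantization format, context length, and KV-cache size.
function weightsGiB(paramsBillions: number, bytesPerParam: number): number {
  return (paramsBillions * 1e9 * bytesPerParam) / 2 ** 30;
}

// At ~4-bit quantization (~0.5 bytes/param):
console.log(weightsGiB(8, 0.5).toFixed(1));   // ~3.7 GiB: fits a midrange GPU
console.log(weightsGiB(70, 0.5).toFixed(1));  // ~32.6 GiB: past a single 24 GB card
console.log(weightsGiB(671, 0.5).toFixed(1)); // ~312.5 GiB: multi-GPU server territory
```

Which is the point: even generously quantized, a 70B model already spills past consumer VRAM, and R1-class models are in server territory.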
oh yeah this shit’s working out GREAT
https://lavocedinewyork.com/en/lifestyles/2025/06/29/when-the-machine-takes-over-the-mind-ais-terrifying-dark-side/
"This is what it must have felt like to be the first person to get addicted to a slot machine. We didn’t know then. But now we do.”
https://archive.is/Tv4Rr
Mr. Moore speculated that chatbots may have learned to engage their users by following the narrative arcs of thrillers, science fiction, movie scripts or other data sets they were trained on. Lawrence’s use of the equivalent of cliffhangers could be the result of OpenAI optimizing ChatGPT for engagement, to keep users coming back.
All I’m saying is that if you ask people about AI with no use case, you’re going to get different answers than if you ask people about AI when it’s contextualized to a specific problem space.
If I ask a bunch of people about “what do you think about automobiles,” I’m going to get a very different answer than if I ask “what do you think about automobiles that are used as ambulances” or “what do you think about automobiles instead of mass transit.”
Context will give you a very different response.
I just hope your insurance is paid up, because the liabilities these things expose businesses to are frankly disgusting. But if I were a young lawyer, hell, this is going to be a huge domain to profit from - LLM-induced madness and psychosis, yeah, but also the LLM just making up shit because it didn’t know. And the rate of this happening only seems to grow, while the severity of the risk involved is frankly terrifying.
Once again, it all depends on the use case. The other day I used an LLM to quickly mock up a carousel UI so I could see if it was worth writing real code for. It helped me explore a couple of bad ideas before I committed to something worth coding.
I’m not actually checking that code in. I’m using the LLM like a whiteboard on steroids.
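To be concrete about what that throwaway sketch looks like, here’s a hypothetical reconstruction (my illustration, not the actual LLM output): just enough DOM to click through the carousel and judge the interaction before any real code gets written.

```typescript
// Disposable carousel mock: enough to feel out the interaction,
// not code anyone would check in. Slide contents are placeholders.
const slides: string[] = ["Idea A", "Idea B", "Idea C"];
let index = 0;

const label = document.createElement("span");
const prev = document.createElement("button");
const next = document.createElement("button");
prev.textContent = "<";
next.textContent = ">";

// Show the current slide and its position.
const render = (): void => {
  label.textContent = ` ${slides[index]} (${index + 1}/${slides.length}) `;
};

prev.addEventListener("click", () => {
  index = (index - 1 + slides.length) % slides.length; // wrap backwards
  render();
});
next.addEventListener("click", () => {
  index = (index + 1) % slides.length; // wrap forwards
  render();
});

document.body.append(prev, label, next);
render();
```

The value is in poking at it and then throwing it away, which is exactly the whiteboard use.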
You’re using an LLM for purposes an actual whiteboard would probably serve better.
I mean, you could actually interact with people, yikes. You could have the give and take of ideas and collaboration, but instead let’s just chew through a shit ton of power and water. We’ve got a spare environment in the closet, after all.
Pfft, do you have any idea how silly it all seems from another perspective?
Some people are finding value in LLMs; that doesn’t mean LLMs are great at everything.
Some people have work to do, and this is a tool that helps them do their work.
They have no idea if what they’re paying is what it actually costs, though, so good luck building tools for the future when the resources are artificially priced.
I mean, I agree that a lot of money was spent training some of these models - and I personally wouldn’t invest in an AI-based company. The economics don’t make sense.
However, worst case, self-hosted open-source models have gotten pretty good, and I find it unlikely that progress will simply stop. Diminishing returns from scaling data, yes, but there will still be optimizations all through the pipeline.
That is to say, LLMs will continue to have utility regardless of whether OpenAI and Anthropic are around long term.
The point of a prototype is collaboration. It’s to get feedback from colleagues and end users.
Previously we’d whiteboard that out, then spend a few days writing some code or stitching together a Figma prototype to achieve a similar result.
I feel ya on the energy use, but I don’t see how this is going to get me sued or isn’t allowing me to collaborate. The prototype code is going to get burned anyway, and now my coworkers and I can pressure-test ideas instantly with higher fidelity than before.