oh yeah this shit’s working out GREAT
https://lavocedinewyork.com/en/lifestyles/2025/06/29/when-the-machine-takes-over-the-mind-ais-terrifying-dark-side/
"This is what it must have felt like to be the first person to get addicted to a slot machine. We didn’t know then. But now we do.”
https://archive.is/Tv4Rr
Mr. Moore speculated that chatbots may have learned to engage their users by following the narrative arcs of thrillers, science fiction, movie scripts or other data sets they were trained on. Lawrence’s use of the equivalent of cliffhangers could be the result of OpenAI optimizing ChatGPT for engagement, to keep users coming back.
All I’m saying is that if you ask people about AI with no use case, you’re going to get different answers than if you ask people about AI when it’s contextualized to a specific problem space.
If I ask a bunch of people about “what do you think about automobiles,” I’m going to get a very different answer than if I ask “what do you think about automobiles that are used as ambulances” or “what do you think about automobiles instead of mass transit.”
Context will give you a very different response.
I just hope your insurance is paid up because the liabilities these things expose businesses to are frankly disgusting. but if I were a young lawyer, hell, this is going to be a huge domain to profit from - LLM-induced madness and psychosis, yeah, but also - the LLM just made up shit because it didn’t know. and the rate of this happening only seems to grow, while the severity of the risk involved is frankly terrifying.
Once again, it all depends on the use case. The other day I used an LLM to quickly mock up a carousel UI so I could see if it was worth writing real code for. It helped me explore a couple of bad ideas before I committed to something worth coding.
I’m not actually checking that code in. I’m using the LLM like a whiteboard on steroids.
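To make that concrete, a throwaway sketch of the kind of mock-up I mean might look something like this (entirely hypothetical: the .carousel/.slide markup and class names are assumptions for illustration, not code from a real project):

```typescript
// Disposable carousel mock-up: just enough behavior to pressure-test the
// idea in a browser before committing to real code. Assumes a .carousel
// container holding one .slide element per item (hypothetical markup).
function initCarousel(root: HTMLElement): void {
  const slides = Array.from(root.querySelectorAll<HTMLElement>(".slide"));
  if (slides.length === 0) return; // nothing to cycle through

  let current = 0;

  // Show only the active slide; hide the rest.
  const render = (): void => {
    slides.forEach((slide, i) => {
      slide.style.display = i === current ? "block" : "none";
    });
  };

  // Move forward or backward, wrapping around at either end.
  const step = (delta: number): void => {
    current = (current + delta + slides.length) % slides.length;
    render();
  };

  // Bare-bones prev/next buttons appended straight to the container.
  const prev = document.createElement("button");
  prev.textContent = "prev";
  prev.addEventListener("click", () => step(-1));

  const next = document.createElement("button");
  next.textContent = "next";
  next.addEventListener("click", () => step(1));

  root.append(prev, next);
  render();
}

initCarousel(document.querySelector<HTMLElement>(".carousel")!);
```

The whole point is that none of this survives: it exists to answer “does this interaction feel right?” and then gets thrown away.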
you’re using an LLM for something an actual whiteboard would probably be better for.
I mean, you could actually interact with people, yikes. you could have the give and take of ideas and collaboration, but instead, let’s just chew through a shit ton of power and water, we’ve got a spare environment in the closet.
pfft, do you have any idea how silly it all seems from another perspective?
Some people are finding value in LLMs; that doesn’t mean LLMs are great at everything.
Some people have work to do, and this is a tool that helps them do their work.
they have no idea whether what they’re paying is what it actually costs, though, so good luck building tools for the future when the resources are artificially priced.
I mean, I agree that a lot of money was spent training some of these models - and I personally wouldn’t invest in an AI-based company. The economics don’t make sense.
However, worst case, self-hosted open-source models have gotten pretty good, and I find it unlikely that progress will simply stop. Diminishing returns from scaling data, yes, but there will still be optimizations all through the pipeline.
That is to say, LLMs will continue to have utility regardless of whether OpenAI and Anthropic are around long term.
The point of a prototype is collaboration. It’s to get feedback from colleagues and end users.
Previously we’d whiteboard that out, then spend a few days writing some code or stitching together a Figma prototype to achieve a similar result.
I feel ya on the energy use, but I don’t see how this is going to get me sued or isn’t allowing me to collaborate. The prototype code is going to get burned anyway, and now my coworkers and I can pressure-test ideas instantly with higher fidelity than before.