A reflective essay exploring how classic LLM failure modes---limited context, overgeneration, poor generalization, and hallucination---are increasingly recognizable in everyday human conversation.
I know I can put together a prompt, hand it to any of today’s leading models, and be essentially guaranteed a fresh perspective on the topic of interest.
I’ll never again ask a human to write a computer program shorter than about a thousand lines, since an LLM will do it better.
I can agree with some of the parts about how some humans can be really annoying but this mostly reads like AI propaganda from someone who has deluded themselves into believing an LLM is actually any good at critical thought and context awareness.
This article was written by someone who apparently can’t switch from the “Fast” to the “Thinking” mode.