Everyone is like “oh look, they made ChatGPT say something stupid, what a stupid article and writer”. Not “ChatGPT will say stupid stuff as fact, what a stupid and underdeveloped tool”.
I mean, tools are tools. Their value, good or bad, is in how they’re used. If you do something like hit your own hand with a hammer, it’s really not the hammer’s fault. LLMs are 95% gizmos, with a few actually useful cases accounting for the other 5%, at least while they’re still priced way under cost.
If we’re using the hammer analogy, let’s assume the user isn’t already an expert with hammers and is just trying to hammer in a nail to secure something to a wall. They start hammering and the nail bends and looks terrible and isn’t secure at all, but the hammer pipes up “don’t worry, that’s exactly what it should look like. Just hammer it again, even harder, to make sure”.
They literally want us to trust their models to be the foundation of modern society…