

… a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
Turing Completeness maybe?
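If “Turing Completeness” is pointing at undecidability, the underlying obstacle is the halting problem: a checker built from the same class of machine as the thing it checks cannot be total and always correct about arbitrary programs. Here is a minimal sketch of the classic diagonalization in Python; `halts` and `diagonal` are hypothetical names assumed for the argument, not real functions:

    def halts(program, program_input):
        """Hypothetical perfect checker: True iff program(program_input) halts."""
        raise NotImplementedError  # assumed to exist, for the sake of argument

    def diagonal(program):
        # Do the opposite of whatever the checker predicts about
        # running `program` on itself.
        if halts(program, program):
            while True:  # checker said "halts", so loop forever
                pass
        # checker said "loops forever", so halt immediately

    # diagonal(diagonal) halts exactly when halts(diagonal, diagonal) says
    # it doesn't, a contradiction, so no such perfect checker can exist.

None of this rules out useful partial self-checks; it only rules out a flow that catches every error in full generality.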
Useless for us, but not for them. They want us to use them as personalised confidante-bots so they can harvest our most intimate data.
Absolutely! He simply has a very original take on “freedom”, but we all know that’s a tricky word to pin down, so don’t think about it too much, and leave it to the big dogs to tell you when your freedom is being protected.
Right, and that goes for the things it gets “correct” as well. I think “bullshitting” can give the wrong impression that LLMs are somehow aware of when they don’t know something and can choose to turn on some sort of “bullshitting mode”, when it’s really all just statistical guesswork (plus some preprogrammed algorithms, probably).
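For what “statistical guesswork” means mechanically, here is a toy sketch of one generation step, with invented numbers: the model turns scores into a probability distribution and samples from it, and nothing in the loop marks a draw as “known” rather than “guessed”.

    import math
    import random

    # Hypothetical next-token scores; the values are made up for illustration.
    logits = {"Paris": 4.1, "Lyon": 1.2, "Berlin": 0.9}

    def softmax(scores):
        exps = {tok: math.exp(s) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits)

    # One draw from the distribution. A wrong token comes out of the exact
    # same sampling step as a right one; there is no "bullshit mode" flag.
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, "->", token)

The same step that usually produces “Paris” will occasionally produce “Lyon”, and the code path is identical either way.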