LLMs are absolute garbage for knowledge retrieval.
Generalized LLMs like ChatGPT are. If you train a model on your own documentation, then all it “knows” is what is in the docs, and it can perform very well at finding relevant results. It’s just kind of a context-aware search engine at that point.
The problem, again, is that companies mostly aren’t doing that; they’re trying to replace humans with ChatGPT.
Except that your context-aware search engine would tell you when there is no result, while an LLM will just make shit up and distort the results it did find.
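To make the “tells you when there is no result” point concrete, here’s a rough sketch of that kind of doc-similarity search with an explicit relevance threshold. The model name, the sample docs, and the 0.5 cutoff are just placeholder choices, not anyone’s production setup:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Placeholder embedding model; any sentence-embedding model works the same way.
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Refunds are processed within 5 business days.",
    "Support hours are Monday to Friday, 9am to 5pm.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def search(query: str, threshold: float = 0.5, k: int = 3):
    """Cosine-similarity search that reports 'no result' instead of guessing."""
    q = model.encode(query, normalize_embeddings=True)
    scores = doc_vecs @ q                    # unit vectors, so dot product = cosine
    top = np.argsort(scores)[::-1][:k]
    hits = [(docs[i], float(scores[i])) for i in top if scores[i] >= threshold]
    return hits or None                      # None means "nothing relevant found"

print(search("How long do refunds take?"))       # matches the refund doc
print(search("What's the office dress code?"))   # None: no relevant doc, no made-up answer
```

The point of the threshold is exactly the behavior described above: below it, you get an honest “no result” instead of the nearest-sounding chunk dressed up as an answer.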
That’s not true.
Vector DBs and LLMs are really powerful at knowledge retrieval.
See NotebookLM and its open-source alternatives.
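For the vector DB + LLM combo, the usual pattern is retrieve-then-ground: pull the closest chunks from the DB and only let the model answer from them. Here’s a rough sketch using chromadb; the collection name, sample docs, and the “say you don’t know” wording are just illustrative:

```python
import chromadb

# In-memory vector DB; chromadb embeds documents with its default embedding model.
client = chromadb.Client()
docs = client.create_collection("internal_docs")
docs.add(
    ids=["faq-1", "faq-2"],
    documents=[
        "Refunds are processed within 5 business days.",
        "Support is available Monday through Friday, 9am to 5pm CET.",
    ],
)

question = "How long do refunds take?"
results = docs.query(query_texts=[question], n_results=2)
context = "\n".join(results["documents"][0])

# Grounded prompt: the LLM is only asked to answer from what was retrieved,
# and is told to admit when the context doesn't contain an answer.
prompt = (
    "Answer using only the context below. "
    "If the answer is not in the context, say you don't know.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
# `prompt` can now be sent to whichever LLM you prefer.
```

That’s essentially what NotebookLM-style tools do under the hood: the vector DB handles the retrieval, and the LLM is constrained to the retrieved text instead of its general training data.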