

I read that as “if you do the thinking for them, LLMs are quite good”
it was first invented by Tigger, too!
open core isn’t open source, imo.
I haven’t got this one yet. I want one with the new picture! I feel left out
Hello from tiny Malta!
edit: we’re not even one pixel on that map :D
Just because something is used for a task doesn’t mean it should be. Example: certain AI companies prohibit applicants from using AI when applying.
Lots of things have had tons of money poured into them, only to end up worthless once the hype ended. Remember NFTs? Remember the metaverse? String theory has never made a testable prediction either, but a lot of physicists have wasted a ton of time on it.
We already knew what X was. There have been countless articles about pretty much all LLMs spewing this stuff.
The model does X.
The fine-tuned model also does X.
It is not news.
Again: hype train, FOMO, bubble.
So? The original model would have spat out that BS anyway.
well yeah, I tend to read things before I form an opinion about them.
Ever heard of hype trains, FOMO, and bubbles?
Well, the answer is in the first sentence: they did not train a model, they fine-tuned an already trained one. Why the hell is any of this surprising to anyone? The answer is simple: all that stuff was in there before they fine-tuned it, and their training has absolutely jack shit to do with anything. This is just someone looking to put their name on a paper.
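For anyone unfamiliar with the distinction this leans on: fine-tuning just continues gradient updates from an already trained checkpoint, so everything the base model learned rides along into the fine-tuned one. A minimal sketch with a toy model and fake data (all names hypothetical, nothing to do with the paper’s actual setup):

```python
import torch
import torch.nn as nn

# Toy "language model": the point is only the workflow, not the
# architecture. Pretraining and fine-tuning share one training loop.
model = nn.Sequential(nn.Embedding(1000, 32), nn.Flatten(), nn.Linear(32 * 8, 1000))

def train(model, batches, steps, lr):
    """One generic training loop, used for both phases."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x, y = next(batches)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

def random_batches():
    # Stand-in data: batches of 8-token sequences with next-token targets.
    while True:
        yield torch.randint(0, 1000, (4, 8)), torch.randint(0, 1000, (4,))

batches = random_batches()

# "Pretraining": many steps from random initialization.
train(model, batches, steps=100, lr=0.1)
torch.save(model.state_dict(), "base.pt")  # the released weights blob

# "Fine-tuning": load the trained weights and keep going on new data.
# Nothing from the base run is removed; it is all still in the weights.
model.load_state_dict(torch.load("base.pt"))
train(model, batches, steps=10, lr=0.01)
```

The two phases are the same loop; the only difference is the starting weights, which is why whatever the base model would emit is still in there after fine-tuning.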
AlphaFold is not an LLM, so no, not really.
how to secure your phone: leave it at home. Done
If you have an active and an inactive process, you’re already incomparable to an LLM.
Not remove, I’d say replace.
Also, stop calling the release of binary blobs of weights “open source”.
yes, exactly. You lose your critical thinking skills
I view it this way: the source code of the model is the training data. The code supplied is a bespoke compiler for it, which emits a binary blob (the weights). A compiler is itself written in code, just like any other program. So what they released is the equivalent of the compiler’s source code, plus the binary blob it output when fed the training data (the source code), which they did NOT release.
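To make the analogy concrete, here is its shape as two toy functions (hypothetical names, not any real toolchain; the hash is just a stand-in for the build step):

```python
import hashlib

def compile_program(program_source: str) -> bytes:
    """Ordinary software: source code in, binary out."""
    return hashlib.sha256(program_source.encode()).digest()

def train_model(training_data: list[str]) -> bytes:
    """ML: training data in, weights blob out. Same shape of function."""
    return hashlib.sha256("\n".join(training_data).encode()).digest()

# Open source means you get the *input* to the build step, so you can
# reproduce (and modify) the artifact yourself:
binary = compile_program("int main() { return 0; }")

# The releases criticized above hand you only the output:
weights = bytes.fromhex("ab" * 32)  # an opaque blob; its "source"
                                    # (the training data) is withheld,
                                    # so train_model() cannot be rerun.
```

In both cases, “open source” would mean shipping the input to the build step, not just its output.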
improved, but still bullshit