Important context and a good decision
4chan at least had a consistent brand of being the anti-social network and being full of Nazis, weirdos, pedophiles and people who are just anti-social for the lulz. You couldn’t ruin 4chan.
Twitter’s image was being the “internet town square for serious thinkers”, with politicians, scientists, journalists and a small but healthy measure of standard shitposters. Losing that brand diminishes its value massively. Unfortunately, neither Bluesky nor Mastodon has been able to capture that clientele yet.
It’s the famous “as long as you’re not Google, Amazon or Apple” licence.
Typical BMW driver: Forget flashing their headlights to move people out of the way on the highway—now they’ve got a 122mm rocket launcher for that
Least impractical tuned BMW
How do I know this is not the real base64enc of Mr. Bean eating pizzaAAAAAAAAAAAAAAAAAAA
For a user without much technical experience, a ready-made GUI like Jan.ai is probably a good start: it offers automatic model downloads and the ability to run models via the GGML library on consumer-grade hardware like Mac M-series chips or cheap GPUs from either Nvidia or AMD.
For slightly more technically proficient users, Ollama is probably a great starting point for hosting your own OpenAI-like API for local models. I mostly run Gemma 2 or small Llama 3.1 models with it.
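To make the “OpenAI-like API” point concrete, here is a small sketch of what a request against Ollama’s OpenAI-compatible chat endpoint looks like. The default port (11434) and the endpoint path are Ollama’s documented defaults; the model name is just an example, and the actual HTTP call is left to whatever client you prefer.

```python
import json

# Ollama serves an OpenAI-compatible API on localhost:11434 by default.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> str:
    """Build the JSON body for an OpenAI-style chat completion call."""
    body = {
        "model": model,  # e.g. "gemma2" or "llama3.1", pulled via `ollama pull`
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for a single response instead of a token stream
    }
    return json.dumps(body)

payload = build_chat_request("llama3.1", "Why is the sky blue?")
# Send `payload` with any HTTP client, e.g.:
#   curl http://localhost:11434/v1/chat/completions -d @- <<< "$payload"
```

Because the request shape matches OpenAI’s, most existing OpenAI client libraries work against a local Ollama instance by just swapping the base URL.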
Depends on what you do with it. Synthetic data seems to be really powerful if it’s human-controlled and well built. Stuff like TinyStories (simple LLM-generated stories that only use the vocabulary of a three-year-old) can be used to make tiny language models produce sensible English output. My favourite newer example is the base data for AlphaProof (LLM-generated translations of proofs from math papers into the proof-validation system Lean), used to teach an LLM the basic structure of mathematical proofs. The validation in Lean itself can be used to keep only high-quality (i.e. correct) proofs. AlphaProof is basically a reinforcement learning routine that uses an LLM to generate good candidate proof steps, shrinking the search space; applying it yields new correct proofs that can be fed back to further improve its internal training data.
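The “keep only correct proofs” filter works because Lean’s kernel either accepts a proof term or rejects it outright. A trivial Lean 4 example of the kind of statement such a pipeline would validate (the theorem and proof here are my own illustration, not from the AlphaProof dataset):

```lean
-- The kernel accepts this only because every step checks out; an
-- LLM-generated "proof" with a gap or a wrong lemma name is rejected,
-- which is exactly the correctness filter for synthetic training data.
theorem add_comm_nat (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Anything that type-checks is correct by construction, so the validator needs no human in the loop.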
Nah, SpaceX would just use his Neuralink chip to automate, with a simple AI, the team that keeps Musk distracted from messing with important things in the company.
If they used Stable Diffusion as a tool to generate training data, the resulting battle management system model would only work if the adversaries were Asian women with really big boobs.
That is something that some tech-savvy Lemmy users could already easily do. I repost stuff from all over the web. But systematic preservation of good old subreddits ought to be automated.
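A minimal sketch of what that automation could start from: Reddit exposes public listings as JSON (e.g. `https://www.reddit.com/r/<sub>.json`), so an archiver mainly needs to walk that structure and store the interesting fields. The field names below mirror Reddit’s listing format; fetching, pagination and storage are left out, and the sample data is hand-written for illustration.

```python
# Extract archivable metadata from a Reddit-style listing dict.
def extract_posts(listing: dict) -> list[dict]:
    """Pull title, author and permalink out of a Reddit JSON listing."""
    return [
        {
            "title": child["data"]["title"],
            "author": child["data"]["author"],
            "permalink": child["data"]["permalink"],
        }
        for child in listing["data"]["children"]
    ]

# Tiny hand-written sample in the same shape, standing in for a real fetch:
sample = {
    "data": {
        "children": [
            {"data": {"title": "Old but gold thread",
                      "author": "someone",
                      "permalink": "/r/example/comments/abc123/old_but_gold/"}}
        ]
    }
}
posts = extract_posts(sample)
```

From there, a cron job plus a reposting bot against a Lemmy instance’s API would cover the “systematic” part.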