


Code completion is probably a gray area.
Those models generally have much smaller context windows, so the energy concern isn’t quite as extreme.
You could also reasonably claim the model is legally in the clear licensing-wise, if the training data was entirely permissively licensed open-source code (non-attribution, non-share-alike, commercial use allowed). A big “if”.
All of that to say: I don’t think I would label code-completion-using anti-AI devs as hypocrites. I think the general sentiment is less “what the technology does” and more “who it does it to”. Code completion, for the most part, isn’t deskilling labor, or turning experts into chatbot-wrangling accountability sinks.
Like, I don’t think the Luddites would’ve had a problem with an artisan using a knitting frame in their own home. They were too busy fighting the factories that locked children inside for 18-hour shifts, where they were maimed by the machines or died trapped in fires. It was never the technology itself, but the social order imposed through the technology.

And unlike on a regular Linux distro, you’ll have zero leftover systemd units or config files floating around in your FHS directories. (The GNOME binaries will sit in /nix/store until you run a garbage collection, so you can still switch back quickly if you want to.)
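As a sketch of what that looks like in practice: the whole desktop is a single declarative option, and the GC is a separate, explicit step. (Option path assumes a recent NixOS release; check your version’s option search.)

```nix
# configuration.nix — flip the desktop off declaratively, then:
#   sudo nixos-rebuild switch   # GNOME vanishes from the system profile,
#                               # but its store paths remain for rollback
#   nix-collect-garbage -d      # (later) actually delete unreferenced paths
{
  services.xserver.desktopManager.gnome.enable = false;
}
```

Until that `nix-collect-garbage` run, re-enabling GNOME is just flipping the option back and rebuilding, with no re-download.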