Yeah, fair enough, I was referring to posts and comments, not other metadata, because that isn't publicly available via a simple GET request (as far as I'm aware).
Everything on the Fediverse is almost certainly scraped, and will be repeatedly. You can't "protect" content that is freely available on a public website.
So if I modify an LLM to have true randomness embedded within it (e.g. using a true random number generator based on radioactive decay), does that then have free will?
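For what it's worth, the swap itself is trivial. A hypothetical sketch, using the OS entropy pool via `secrets` as a stand-in for a radioactive-decay RNG (the sampler and logits here are made up for illustration):

```python
import math
import secrets

# secrets.SystemRandom draws from the OS entropy pool -- a stand-in here
# for a true hardware RNG (e.g. one driven by radioactive decay).
_rng = secrets.SystemRandom()

def sample_token(logits):
    """Draw one token index with probability proportional to softmax(logits)."""
    m = max(logits)
    weights = [math.exp(x - m) for x in logits]
    total = sum(weights)
    r = _rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(logits) - 1

# Every call is now genuinely nondeterministic rather than pseudo-random.
token = sample_token([2.0, 1.0, 0.1])
print(token)
```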
If viruses have free will, when they are machines made out of RNA that just inject code into other cells to make copies of themselves, then the concept is meaningless (and it also applies to computer programs far simpler than LLMs).
So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will on one side, just mechanics and random chance on the other?
I just don't find it a particularly useful concept.
There’s a vast gulf between automated moderation systems deleting posts and calling the cops on someone.
Look, Reddit bad, AI bad. Engaging with anything more than the most surface-level reactions is hard, so why bother?
At a recent conference in Qatar, he said AI could even “unlock” a system where people use “sliders” to “choose their level of tolerance” about certain topics on social media.
That, combined with a level of human review for people who feel they have been unfairly auto-moderated, seems entirely reasonable to me.
OK, but then you run into: why do billions of variables create free will in a human but not a computer? Do they create free will in a pig? A slug? A bacterium?
Eh, the entirety of training GPT-4 plus the whole world using it for a year turns out to be about 1% of the gasoline burnt just by the USA every single day. It's barely a rounding error when it comes to energy usage.
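The back-of-envelope looks roughly like this. Every figure below is an order-of-magnitude assumption on my part, not a sourced number:

```python
# Rough sanity check -- all inputs are assumed estimates.
US_GASOLINE_GAL_PER_DAY = 370e6   # assumed US daily gasoline consumption
KWH_PER_GALLON = 33.4             # approx. energy content of a gallon of gasoline
TRAINING_GWH = 55                 # assumed GPT-4 training energy
YEAR_INFERENCE_GWH = 70           # assumed worldwide inference energy for a year

gasoline_gwh_per_day = US_GASOLINE_GAL_PER_DAY * KWH_PER_GALLON / 1e6
llm_total_gwh = TRAINING_GWH + YEAR_INFERENCE_GWH
ratio = llm_total_gwh / gasoline_gwh_per_day

print(f"~{gasoline_gwh_per_day:.0f} GWh/day of gasoline; LLM total ~{ratio:.1%} of one day")
```

With those assumptions you land around 1%, which is where the "rounding error" framing comes from.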
The article's point was that markdown (or other similar UTF-8 text-based documents) is the best guarantee you have for the files being usable into the indefinite future. The more complicated the format, as with word processors, the less likely it is to be meaningfully usable in 10, 20, or 50 years; good luck reading an obsolete word-processor file from the 80s today.
That’s interesting, though their own map benefits from their definition of B (the number of boundary cuts an arbitrary line segment needs to cross), because this metric does not take into account how far away the elements on the map are from each other. E.g. the cuts going from northern to southern Africa count as much of a “distortion” as the ones separating Indonesia and South America.
Ultimately, the “objective” best depends on the metric you choose, and that is a subjective decision.
noun
la·bor ˈlā-bər
plural labors
So by going harder on blocking content than China? Because that’s what they do, but most of the big providers get back through after a day or two of downtime each time the government makes a change to block them.
It would be more productive if you said how you think I’m wrong. Just saying “you’re wrong” doesn’t really add anything to the discussion.
It produces about the same power per cubic metre as compost does, which is pretty crazy when you think about it.
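Assuming this is the usual comparison to the Sun's fusing core, the arithmetic checks out. A quick sketch (the 0.2-solar-radius core boundary is a rough conventional figure, not a precise one):

```python
import math

# Average power density of the Sun's core, back-of-envelope.
L_SUN = 3.8e26          # solar luminosity, W
R_SUN = 6.96e8          # solar radius, m
core_r = 0.2 * R_SUN    # assumed core radius

core_volume = 4 / 3 * math.pi * core_r ** 3
density = L_SUN / core_volume   # W per cubic metre

print(f"~{density:.0f} W/m^3")  # a few tens of W/m^3 -- compost-heap territory
```

A few tens of watts per cubic metre: fusion in the Sun is only sustained because there is an absurd amount of core, not because it's intense per unit volume.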
Inertial confinement doesn’t produce a “stable reaction”; it is pulsed by its nature. Think of it in the same way as a single-cylinder internal combustion engine: periodic explosions which are harnessed to do useful work. So no, the laser energy is required every single time to detonate the fuel pellet.
NIF isn’t really interested in fusion for power production; it’s a weapons research facility that occasionally puts out puff pieces to make it seem like it has civilian applications.
Let me try with another example that can get round your blind AI hatred.
If people were all using a calculator to compute the value of an integral, they would have significantly less diversity of results, because they would all be using the same tool. Less diversity of results has nothing to do with how good the tool is; it might be 100% right or 100% wrong, but if everyone is using it then they will all get the same results (or similar ones, if the tool has a random element, as LLMs do).
That snark doesn’t help anyone.
Imagine the AI was 100% perfect and gave the correct answer every time: people using it would have a significantly reduced diversity of results, as they would always be using the same tool to get the same correct answer.
That people using an AI get a smaller diversity of results is neither good nor bad; it’s just the way things are, the same way people sharing one pack of pens use a smaller variety of colours than people using whatever pens they happen to have.
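The pens point can be shown with a toy simulation (all numbers here are made up): a shared tool collapses the spread of answers regardless of whether the tool is actually right.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 10.0

# Group A: everyone works it out by hand, each with their own error.
by_hand = [TRUE_VALUE + random.gauss(0, 2.0) for _ in range(1000)]

# Group B: everyone uses the same shared tool, which is biased but consistent.
SHARED_TOOL_ANSWER = 7.0  # the tool can be flat-out wrong...
with_tool = [SHARED_TOOL_ANSWER + random.gauss(0, 0.1) for _ in range(1000)]

# ...yet its users' answers still cluster far more tightly.
print(statistics.stdev(by_hand), statistics.stdev(with_tool))
```

The tool's users are tightly clustered around a wrong answer; the by-hand group is spread around the right one. Diversity and correctness are separate axes.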
They in fact often have word and page limits, and most journal articles I’ve been a part of have had a period of cutting and trimming at the end in order to fit into those limits.
Not the parent, but LLMs don’t solve anything; they allow more work with less effort expended in some spaces, just as the horse-drawn plough didn’t solve any problem that couldn’t be solved by people tilling the earth by hand.
As an example, my partner is an academic, and the first step in working on a project is often a literature search of existing publications. This can be a long process, even more so if you are moving outside your typical field into something adjacent (you have to learn what exactly you are looking for). I tried setting up a locally hosted, LLM-powered research tool: you ask it a question and it goes away, searches arXiv for relevant papers, refines its search query based on the abstracts it got back, and iterates. At the end you get summaries of what it thinks is the current SotA for the asked question, along with a list of links to papers it thinks are relevant.
It’s not perfect, as you’d expect, but it turns a minute spent typing out a well-thought-out question into hours’ worth of head start on the research surrounding that question (and does it all without sending any data to OpenAI et al.). Getting you over the initial hump of not knowing exactly where to start is where I see a lot of the value of LLMs.
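The loop at the heart of a tool like that looks something like the sketch below. `search_arxiv`, `refine_query`, and the summarisation step are hypothetical stubs standing in for real arXiv-API and local-LLM calls, not any actual library:

```python
# Sketch of the iterate-and-refine literature search loop.
def search_arxiv(query):
    # Stub: a real version would hit the arXiv API and return
    # (title, abstract, link) tuples for the top matches.
    return [(f"Paper about {query}",
             f"Abstract mentioning {query}",
             "http://example.invalid")]

def refine_query(query, abstracts):
    # Stub: a real version would ask the local LLM to rewrite the
    # query using terminology it spotted in the abstracts.
    return query + " (refined)"

def literature_search(question, iterations=3):
    query, papers = question, []
    for _ in range(iterations):
        results = search_arxiv(query)
        papers.extend(results)
        query = refine_query(query, [abstract for _, abstract, _ in results])
    return papers  # in the real tool, the LLM then summarises these

papers = literature_search("sparse attention for long contexts")
print(len(papers), "candidate papers collected")
```

The interesting part is the refinement step: each pass the query picks up field-specific vocabulary from the abstracts, which is exactly the "learning what you're looking for" work that eats time when you search by hand.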