I don’t think it’s irresponsible to suggest to readers that they can use an AI chatbot to examine any given image to see if it was AI-generated. Even the lowest-performing multimodal chatbots (e.g. Grok and ChatGPT) can do that pretty effectively.
Also: Why stop at one? Try a whole bunch! Especially if you’re a reporter working for the BBC!
It’s not like they give a flat answer, “yes, definitely fake” or “no, definitely real.” They analyze the image and give you information about it, such as tell-tale signs that it could have been faked.
But why speculate? Try it right fucking now: ask ChatGPT or Gemini (the current king at such things, BTW… for the next month at least, hahaha) whether any given image is fake. It only takes a minute or two to test it out with a bunch of images!
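If you’d rather script it than paste images into a chat window, here’s a minimal sketch using the OpenAI Python SDK — the model name, prompt wording, and image URL are just placeholders, and any multimodal model that accepts images would work the same way:

```python
# Rough sketch: ask a multimodal model whether an image looks AI-generated.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# set in the environment; the image URL below is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any multimodal model that accepts images would do
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    # Keep the wording neutral so you don't lead the model.
                    "text": (
                        "Examine this image. Does it show tell-tale signs of "
                        "being AI-generated? List anything you notice and say "
                        "how confident you are."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/suspect-image.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Run it over a handful of images whose provenance you already know and you’ll get a feel for how consistent (or not) the answers are.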
Then come back and tell us that’s irresponsible with some screenshots demonstrating why.
I don’t need to do that. And what’s more, it wouldn’t be any kind of proof, because I can bias the results just by how I phrase the query. I’ve been using AI for six years and use it on a near-daily basis. I’m very familiar with what it can do and what it can’t.
Between bias and randomness, the same image will be evaluated as fake at one time and real at another, or differently for different people. What use is that?