I find this kind of work very important when talking about AI adoption.
I’ve been generating the boring parts of work documents with AI, and even though I put a lot of thought into my prompts and reviewed and adjusted the output each time, I kept wondering whether people would notice the AI parts, and whether that made me look more efficient and ‘complete’ (we’re talking about a template document where some parts seem designed to be repetitive), or lazy and disrespectful.
Because my own trust in the content, and in the person, definitely drops when I notice auto-generated parts, which in turn pushes me to use AI myself and ask it to summarise all that verbose AI-generated content.
I’m not sure that’s how encoder-decoders are meant to work :)