- cross-posted to:
- [email protected]
Significance
As AI tools become increasingly prevalent in workplaces, understanding the social dynamics of AI adoption is crucial. Through four experiments with over 4,400 participants, we reveal a social penalty for AI use: Individuals who use AI tools face negative judgments about their competence and motivation from others. These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation. Our findings identify a potential barrier to AI adoption and highlight how social perceptions may reduce the acceptance of helpful technologies in the workplace.
Abstract
Despite the rapid proliferation of AI tools, we know little about how people who use them are perceived by others. Drawing on theories of attribution and impression management, we propose that people believe they will be evaluated negatively by others for using AI tools and that this belief is justified. We examine these predictions in four preregistered experiments (N = 4,439) and find that people who use AI at work anticipate and receive negative evaluations regarding their competence and motivation. Further, we find evidence that these social evaluations affect assessments of job candidates. Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs.
I don’t think that people who use AI tools are idiots. I think that some of my coworkers are idiots, and their use of AI has just solidified that belief. They keep pasting AI-generated answers to nuanced questions without validating the responses themselves.
I’ve seen lazy developers take solutions from Stack Overflow, and paste them directly into code with no scrutiny, no testing, no validation. I’ve also seen talented developers take solutions from Stack Overflow, verify them, scrutinize them, simplify or expand on them. The difference wasn’t the source of information, but what the developer did with it.
AI is a crutch for the shameless, careless developers who create more problems than they solve; it has just made them more efficient at it. Which only creates problems faster than the talented developers can solve them; it’s easy to destroy, but difficult to build. I know talented developers who use AI, but it hasn’t made them faster or more efficient, because their strength is also their weakness: they take their time, they evaluate their options, they scrutinize AI output because they know it’s prone to mistakes.
My greatest worry is the folks in the middle - they’re neither experts nor novices, just average. I want to see more engineers develop the skills needed to make them experts, but I worry that AI will just make them lazy.
I find this kind of work very important when talking about AI adoption.
I’ve been generating the boring parts of work documents via AI, and even though I put a lot of thought into my prompts and reviewed and adjusted the output each time, I kept wondering whether people would notice the AI parts, and whether that made me look more efficient and ‘complete’ (we are talking about a template document where some parts seem designed to be repetitive), or lazy and disrespectful. Because my own trust in content, and in a person, certainly drops when I notice auto-generated parts, which in turn triggers me to use AI myself and ask it to summarise all that verbose AI-generated content. I’m not sure that’s how encoder-decoders are meant to work :)
This apparent tension between AI’s documented benefits
That is one hell of an assumption to make, that AI is actually a benefit at work, or even a documented one, especially compared to a professional in the same job doing the work themselves.
It’s nice for hints while programming. But that’s mostly because search engines suck.
I think it’s honestly pretty undeniable that AI can be a massive help in the workplace. Not all jobs, sure, but using it to automate toil is incredibly useful.
That sounds like treating the symptom rather than the disease. Why automate the toil, when we could remove it instead? The other commenters brought up examples:
generating (the boring) parts of work documents
when I notice auto-generated parts, which triggers that I use AI in turn, and I ask it to summarise all that verbose AI generated content.
The AI wrote a document a human didn’t want to read, so AI then read the document AI wrote. The incentive thereafter is to save, and use, the shorter AI doc over the longer one.
Was any value created by this cycle? We just watered down the information with more automation. In the process, we probably lost nuance, detail. Alternatively, if we all agreed the document wasn’t worth a human’s eyes or keystrokes in the first place… why have the AI do anything? Sounds like we would all be happier to not have the document in the first place.
I’m specifically talking about toil when it comes to my job as a software developer. I already know I need an if statement and a for loop all wrapped in a try/catch. Rather than spending a couple of minutes coding that, I have Cursor do it for me instantly, then fill in the actual code.
Or, I’ve written something in Python and it needs to be converted to JavaScript. I can ask Claude to convert it one-to-one for me and test it, which comes back with either no errors or a very simple error I need to fix. It takes a minute. Instead I could have taken 15 minutes to rewrite it myself and maybe make more mistakes that take longer.
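For concreteness, here’s a minimal sketch (my own illustration, not the commenter’s actual code) of the kind of scaffold described above: an if check and a for loop wrapped in a try/except, where the boilerplate is trivial to generate and only the body logic needs human attention:

```python
def process(items):
    """Double every non-None item; the try/except and loop are the 'toil'."""
    results = []
    try:
        for item in items:
            if item is not None:        # boilerplate condition
                results.append(item * 2)  # the part a human actually fills in
    except TypeError:
        # e.g. an item that can't be doubled; swallow and return what we have
        pass
    return results

print(process([1, 2, None, 3]))
```

The scaffold itself carries no design decisions, which is why generating it instantly and hand-writing only the loop body saves the couple of minutes the commenter describes.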
A benefit of AI is that it’s faster than a human. On the other hand, it can be wrong.
A rudimentary quick Internet search will turn up a good bit of the “AI benefits at work” documentation you seek. 🤷♂️