‘AI is just amplifying that weakness’: The dangers of having AI draft difficult conversations for you

America post Staff



“No more reading emails, OK?” says tech founder and content creator Jason Yeager’s satirical boss character MyTechCeo in a recent TikTok skit. “I want your AI reading my AI-generated email—and answering my email.”

It’s a parody, but only just. 

AI emails are proliferating across industries. In October, LinkedIn’s CEO Ryan Roslansky said he uses AI for almost every “super high-stakes” email he sends. And a recent survey from the email verification software company ZeroBounce found that one in four respondents admit to using AI daily for drafting or editing their emails. 

On Reddit, employees swap stories about bosses who use AI “to answer every email at work and thinks no one notices” or who “only communicate through AI-generated emails and it’s giving me anxiety.” For many, the path of least resistance is to respond in kind: plug the message into a chatbot, tweak what comes out, and send it back.

But if you receive a message that was likely written by AI, especially in the midst of a disagreement, you can tell—something’s off.

It sounds a little too well drafted. The tone is reasonable and balanced. And while the problems are addressed, there’s something missing: the voice of the person you’re communicating with. (A dead giveaway, of course, is when the prompt is left in.)

Emails may sound smoother this way, but experts worry that outsourcing difficult conversations also bypasses the relationship-building that makes workplaces function. When you ask a chatbot to rewrite your message to be more “concise” or “professional,” it can also strip away the emotional substance of the exchange—an act that may be shaping the future of work for the worse, incubating a generation of professionals who can’t talk to one another.

The great social offloading

There is some reported benefit to “dry-chatting” with AI—practicing tricky topics with a bot first so you can tackle the issue directly and clearly with someone afterward. Used as rehearsal, AI can be an effective tool in building confidence. 

But when used as a substitute, it does the opposite. Filling the gap entirely, with one person’s ChatGPT effectively talking to another person’s Claude, can create distance. This runs counter to what companies say they want when bringing colleagues back into the office: creativity, collaboration, and stronger working relationships.

“When it handles the hard conversation, the human never builds the muscle of doing that,” Leena Rinne, vice president of leadership, business, and coaching at the workplace skills management platform Skillsoft, tells Fast Company. “It’s not just that the interaction risks feeling like AI—because it does—but you’re actually compromising trust with the person.”

Rinne calls this outsourcing of difficult conversations “social offloading.” It’s particularly problematic when leaders resort to it, Rinne says, because it “almost regresses their ability to have the hard conversations.”

“Now you’re less in the moment and less able to do this thing that leaders need to be able to do,” she says. It’s a problem for everyone involved: The boss isn’t developing the skill of communicating more clearly, and the employee isn’t figuring out how to effectively push back and ask for clarity. 

Carla Bevins, associate teaching professor of business management communication at Carnegie Mellon University’s Tepper School of Business, tells Fast Company she’s increasingly seeing people rely on AI-generated language in high-stakes moments.

“In some cases, both parties are doing this, which means the exchange is technically happening, but the relational work is not,” she says. From a business communication perspective, this distinction matters because difficult conversations are about so much more than just clarity or tone. 

“They are where leaders signal judgment, accountability, and intent in real time,” Bevins says. 

The temptation makes sense

The appeal is understandable. Sarah Wittman, an assistant professor of management at George Mason University’s School of Business, tells Fast Company that a lot of people have never been formally trained in how to have difficult conversations or resolve conflict constructively.

She points to social media and short-form content shrinking attention spans, along with the perfunctory exchanges that are familiar in many workplaces. At the same time, employees are busy and often anxious about getting laid off.

“We’re on the clock, messaging on Slack or Teams, or in meetings where, in the best of cases, there might be some social chit-chat,” Wittman says. “In this world, it seems logical that people are turning to a tool that can give them quick answers to solve problems that they may not know how to solve.”

For people navigating power imbalances or tense workplaces, AI can also feel like a way to protect themselves from saying the wrong thing or escalating a conflict.

Caitlin Collins, an organizational psychologist at the performance management software platform BetterWorks, tells Fast Company this signals that a workplace isn’t providing psychological safety for its workers. “AI is just amplifying that weakness,” she says.

Over time, the concern is that mounting conflict avoidance will reshape workplace culture for the worse.

Send the messy draft

Communication skills are especially important to build early in a career. Those who spent their university years, and even their first few professional years, on a laptop are in particular need of strengthening this muscle.

In organizations that are flattening and removing middle managers, leaders already have less time to dedicate to mentoring and nurturing early-career employees.

“When this layer is compressed and AI fills the gap, employees at both levels lose the chance to observe and practice,” Bevins says. 

Instead, Rinne argues, leaders should set the tone by sending the messy first draft. It’s more honest, and conveys what they really mean.

“There is an element of authenticity that shows up when I make a mistake—when I flub the conversation,” she says.

“Me going back and saying, ‘Hey, I’m really sorry,’ or ‘I wish I would’ve handled that differently,’ builds trust,” she adds. “It can’t be my AI apologizing for me.”




