Here’s the leadership skill AI can’t replace

America post Staff



A journalist is assigned a profile of a prominent politician on a tight turnaround. With the interview just hours away, she asks ChatGPT to generate a list of questions. Satisfied with the 30 questions churned out in under a minute, she shares them with her editor to make sure no stone is left unturned. The editor nearly rewrites the list entirely. It's missing questions about pivotal early-life experiences, why the senator dropped out of college, her parting of ways with her first campaign manager, and more.

All of these missing questions stem from understanding the larger context and years of honing editorial judgment—the kinds of things AI can’t replace.

Just as generative AI tools like ChatGPT are truly becoming household names, with over 800 million weekly active users, per Reuters, we're starting to understand their limitations. There's a limit to how much gen AI can help people perform tasks outside their area of expertise—researchers call it the "AI wall." It underscores the need for professionals to keep developing the human skills that truly matter, like good judgment and curiosity. In today's AI-driven workplace, leaders who ask better questions unlock better decisions, stronger teams, and more meaningful use of AI. Here are three leadership practices that make the difference between using AI and using it well.

Contextualize every AI task in the bigger picture

As the journalist example illustrates, one thing that remains firmly in the realm of human intelligence is understanding the bigger picture. That means grasping not just the task at hand, but its purpose, and how it fits into broader individual or organizational goals. If an editor wants a profile to illuminate a shifting political landscape, for instance, that context should inform the tone and direction of every question.

Leaders are uniquely positioned to help teams frame questions with those larger priorities in mind, rather than chasing every available insight. This matters even more when using AI tools, which make it remarkably easy to passively execute task after task without considering the "why" of it all, resulting in AI-generated work slop.

The most effective leaders pause to decide how much focus a subject or task deserves, not just how fast it can be completed, and guide their teams accordingly.

Treat outputs as jumping-off points

In the early days of generative AI, prompt engineering was a critical skill. Crafting the right prompt often determined the usefulness of an LLM session. Precision was key.

As generative AI tools like ChatGPT become more sophisticated and conversational, prompt chaining is gradually replacing prompt engineering. Prompt chaining breaks a task into smaller, more manageable steps that flow logically—typically from broader questions to more refined ones. For example, if you’re using ChatGPT to develop a competitive analysis, your questions might progress as follows:

  • What is the current market landscape for [industry/product category]?
  • Who are the primary competitors in this market?
  • How does each competitor position itself in terms of value proposition, target customer, pricing, and core strengths?
  • What are the key strengths and weaknesses of these competitors?

Every output guides the next prompt, requiring you to continually refine your questions. For the sake of efficiency, it’s still important to think strategically—but the pressure is no longer on getting it right the first time.
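The flow described above can be sketched in a few lines of code. This is a minimal illustration, not a production pattern: `ask_llm` is a hypothetical stand-in for a real chat-completion call (e.g., an OpenAI or similar API client), stubbed here so the chaining logic runs on its own.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM API call."""
    return f"[model answer to: {prompt}]"


def prompt_chain(steps, placeholder="{previous}"):
    """Run prompts in order, splicing each answer into the next prompt."""
    previous = ""
    answers = []
    for template in steps:
        # Each step's prompt can reference the previous output,
        # moving from broad questions to more refined ones.
        prompt = template.replace(placeholder, previous)
        previous = ask_llm(prompt)
        answers.append(previous)
    return answers


# Example chain for a competitive analysis (questions from the list above):
steps = [
    "What is the current market landscape for project-management software?",
    "Given this landscape: {previous} — who are the primary competitors?",
    "For these competitors: {previous} — what are their key strengths and weaknesses?",
]

for answer in prompt_chain(steps):
    print(answer)
```

The key design point is that each output is reviewed and folded into the next prompt, rather than asking one monolithic question and accepting whatever comes back.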

In a nutshell, the most effective leaders treat AI outputs as conversation starters, not final answers.

Develop the judgment that AI can’t replace

Despite their undeniable potential, generative AI tools don’t necessarily level the playing field for professionals. Consider this: only 26% of employees who use generative AI report improvements in their creativity, according to Gallup—not exactly the innovation boost you might expect. It’s not a problem of access to the technology, but rather, how it’s used. 

Recent research sheds light on why AI lifts performance for some people and not for others. It comes down to metacognition—the ability to plan, evaluate, and refine one’s thinking. Employees with stronger metacognitive skills stand to gain more from AI, researchers explain in Harvard Business Review. In practice, this means thinking about your thinking as you work: identifying knowledge gaps, incorporating new information into existing mental models, and adjusting your approach along the way. It’s the difference between passively skimming a story and truly comprehending it—which approach leads to learning?

To ensure leaders and employees get the most out of AI tools, it’s essential to take this more active approach. Question assumptions, explore trade-offs, and think critically alongside AI, rather than deferring to it.

At Jotform, I encourage my team never to accept an output at face value. We play devil's advocate, look for blind spots, and consider how each output fits into the bigger picture. A solution might work in the short term yet still be a disservice to your or your organization's long-term goals. Even if AI tools make our lives easier, we resist the urge to settle for "good enough."

Practicing critical thinking enables leaders to fully leverage AI’s benefits while helping junior employees develop the judgment to overcome its limitations and vault over any AI walls.
