You’re Using AI Without Control — And It’s Already a Governance Failure

America post Staff


Opinions expressed by Entrepreneur contributors are their own.

Key Takeaways

  • Most organizations deploy AI without aligning governance, leaving critical risks misunderstood and unaddressed
  • Without clear ownership, AI decisions lack accountability, increasing exposure across legal, operational, and reputational fronts
  • AI doesn’t create new problems; it exposes existing governance gaps at unprecedented speed and scale

Back in 2013, Target made headlines globally when a cyberattack exposed the payment card information of 40 million of its customers, along with the personal data of 70 million others.

At the time, the breach was widely described as a cybersecurity failure, but it was more than that. It was also, at its core, a governance failure, one that mirrors what we’re seeing today as organizations look to scale through AI.

With no federal framework in place to guide how AI is governed in practice, organizations are defining their own guardrails to support responsible implementation and build trust. But the absence of regulation doesn’t mean the absence of risk. Organizations deploying AI today are still operating within existing legal structures that govern areas like data privacy, consumer protection, and employment practices, to name a few. If an AI-assisted decision exposes personal data or introduces a material error, the organization remains accountable.

AI governance can’t afford to wait for regulation to catch up. The Target breach and the years that followed marked a watershed period that elevated cybersecurity to a board-level risk. During that time, I was brought in to lead information security for an operator of critical internet infrastructure. Like many in that moment, I was forced to examine where governance hadn’t kept pace with operations.

As someone who’s spent her entire career in technology, I’ve come to know one constant: technology moves, and governance rarely keeps up until it has to. Consider enterprise resource planning (ERP) implementations. Widely adopted for decades, they rarely fail because of the technology itself. The challenge is getting an organization to align on a single version of the truth across data, processes, and systems.

AI is that same forcing function, one generation later. Organizations that haven’t resolved those underlying issues are about to encounter them again with AI adoption, but at a much higher speed.

Here are three considerations every organization should weigh before deploying AI at scale.

If your organization can’t translate risk, it can’t govern it

One of the greatest challenges in governance isn’t access to information; it’s a lack of shared understanding of its impact.

Over the course of my career, I’ve learned to translate information across legal, security, and operations, and have experienced how differently each function interprets risk. A technical risk assessment may resonate clearly within a security team, for example, but it doesn’t always translate effectively in a boardroom or in an operational review.

In the months following the Target breach, the risks associated with third-party vendor access weren’t broadly understood at the executive level. Making the case for investing in the right security protocols to manage that risk required translating a technical issue into business terms that leaders could evaluate and act on.

That same dynamic is playing out with AI. According to IBM’s 2025 CEO Study, 61 percent of CEOs say they aren’t fully prepared to manage the complexity they face. The challenge isn’t awareness; it’s alignment. Different parts of the organization understand different pieces of the risk, but often no one is translating how those risks connect.

Effective governance depends on that translation. When it’s missing, risks are more likely to be acknowledged than acted on, and governance becomes something the organization observes rather than something it actively practices.

AI oversight fails without named ownership

Not long ago, I served as a data protection officer, personally accountable for the organization’s data protection posture. That kind of accountability changes the questions you ask, the risks you surface, and the decisions you’re willing to stand behind.

In that role, I learned that monitoring tells you what a system is doing, but responsible oversight is the organizational ability to understand it, evaluate it, and change it when necessary. Many organizations are still trying to move AI from pilot to production. Far fewer have established clear ownership over who is accountable for how those systems behave.

According to McKinsey’s 2025 State of AI report, while most organizations are investing in AI, clear ownership and governance structures are still developing. Every organization implementing AI should be able to answer who is accountable for how each system behaves. If that answer isn’t clear, the governance structure isn’t complete.

When curiosity disappears, risk becomes invisible

Over the course of my career, I’ve led teams with a wide range of technical abilities, but what consistently sets the strongest ones apart is their level of curiosity combined with their ability to think critically.

In the context of AI, preventing flawed or biased data from influencing outcomes begins at the point of data collection, in the decisions about what to collect, what to measure, and what to count. Curiosity, combined with the confidence to question those decisions when something seems off, is often what allows organizations to identify and close governance gaps before they scale into larger issues.

Having worked in highly regulated environments, I’m acutely aware that governance frameworks provide structure, but they only work when they’re supported by behaviors that reinforce them. Human curiosity remains one of the most powerful assets a strong governance system has, and it should never be underestimated.

The lesson from 2013 wasn’t simply about a breach; it was about visibility. Target had contracts, relationships and controls in place, but its governance model hadn’t kept pace with how the business actually operated.

For those of us who’ve spent our careers in technology, this pattern is familiar. Technology rarely fails. What it reveals are the inconsistencies, assumptions, and governance gaps that were already there. The real question isn’t whether your AI works; it’s whether your organization is prepared for what it exposes.
