Here’s how to jump-start your company’s responsible AI governance in 90 days

This month, Anthropic announced that it had built an AI model so powerful it couldn’t be released to the public. Claude Mythos had autonomously discovered thousands of critical security vulnerabilities across all major operating systems and web browsers. Anthropic chose to make the model available only to a consortium of technology companies, giving them an opportunity to patch vulnerabilities and strengthen defenses before models with similar capabilities inevitably fall into the hands of those who would exploit them.

This development shines a light on the potential future dangers that the rapid evolution of AI models brings with it. These kinds of powerful models will proliferate, and their spread will create an escalating need for governance policies rooted in the principles of responsible AI. The practice of responsible AI aims to ensure that as AI systems grow more powerful, they remain fair, explainable, and subject to human oversight—governed by ethical principles and accountable structures that protect the people those systems affect.

Responsible AI is not something businesses can set aside for the moment and hope to implement in the future. Every AI system deployed without an adequate governance framework creates reputational, legal, and operational risk right now. Those risks will only compound over time. And the dangers are not only technical. A recent survey of 750 CFOs projects roughly 500,000 AI-related job losses in 2026 alone. Responsible AI must account for the societal impact of these systems, not just the operational risks they pose to the organizations that deploy them.


Three pillars of responsible AI

Ethical foundations. An AI use policy—a list of what people can and cannot do with AI tools—feels concrete and actionable. But a use policy sits downstream from the values it formalizes. Before you develop specific policies, you need clarity about what your organization stands for: the principles that will both guide those policies and shape immediate decisions when technological advances blow past current guidelines.

Accountability and oversight. Responsible AI fails when nobody owns it. You need clear answers to key governance questions: Who can approve an AI deployment? Who can halt one? And who is accountable to the board when something goes wrong? Organizational accountability is a vital starting point, but it is not enough on its own. You’ll also need frontline safeguards that keep humans meaningfully in the decision-making loop, especially for decisions that affect safety or carry lasting consequences.

Human impact. Every AI deployment affects real people—people whose work changes, who may lose their jobs, whose options are shaped by algorithmic decisions, and whose opportunities expand or contract as these systems take on a larger role. A responsible AI approach means being thoughtful and deliberate about the human effects of deployment, and actively designing for fairness, dignity, and human augmentation rather than replacement.

The 90-day plan that follows is built on these three pillars.

Days 1-30: Map

The temptation with any governance initiative is to start building immediately. Resist that impulse. The first 30 days of this plan focus on mapping your AI landscape. In most organizations, the AI footprint is significantly larger, more fragmented, and less governed than leadership believes.

1. Map your AI landscape. Inventory every AI system the organization uses or that touches it in a significant way, including “shadow use” of unsanctioned AI systems by employees. For each use case, document what the AI does, what data it uses, who it affects, and who is responsible for its governance.
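To keep the inventory consistent, it helps to capture every use case as a structured record rather than freeform notes. Here is a minimal sketch in Python; the fields and example entries are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in the organization's AI inventory."""
    name: str                   # the system or use case
    purpose: str                # what the AI does
    data_sources: list[str]     # what data it uses
    affected_groups: list[str]  # who it affects
    owner: str                  # who is responsible for its governance
    sanctioned: bool = True     # False marks "shadow use" found in the audit

inventory = [
    AIUseCase(
        name="resume screening assistant",
        purpose="ranks inbound job applications",
        data_sources=["applicant CVs", "historical hiring decisions"],
        affected_groups=["job applicants", "recruiting team"],
        owner="VP, People Operations",
    ),
    AIUseCase(
        name="personal chatbot accounts",
        purpose="ad hoc drafting and analysis by employees",
        data_sources=["unknown, possibly confidential documents"],
        affected_groups=["clients", "employees"],
        owner="unassigned",
        sanctioned=False,
    ),
]

# Ownerless or unsanctioned systems surface immediately as governance gaps.
gaps = [u.name for u in inventory if not u.sanctioned or u.owner == "unassigned"]
print(gaps)  # -> ['personal chatbot accounts']
```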

2. Force the worst-case conversations. For every AI use case you identify, ask your leadership team: What’s the worst-case scenario here? This approach is based on the catastrophize step of the CARE framework for AI risk management; the step is deliberately named to provoke the right mindset. The disciplined practice of imagining catastrophic failure is designed to surface risks that would otherwise go unnoticed.

3. Triage. Some of the risks you uncover won’t wait for a polished governance infrastructure. If the mapping and catastrophizing processes reveal that an AI system is making consequential decisions with no oversight, no explainability, and no clear owner—escalate the problem immediately. Pause the system or place it under close human review. You don’t need a complete governance framework to act on an obvious risk.

4. Diagnose your culture. None of the governance structures you are about to build will work if your organizational culture isn’t actively engaged with them. You need to answer one fundamental question: Does your organization treat responsible AI as a business priority or as a compliance box to be checked? If the answer is the latter, a comprehensive culture change initiative will be required.   

5. Map your decision rights. You need clear answers to four questions:

a. Who can approve a new AI deployment?

b. Who decides when a system requires governance review?

c. Who can halt a deployment?

d. Who can reallocate resources to address a newly identified risk?

If the answers are ambiguous, your governance framework will have no teeth—decisions will default to whoever speaks the loudest or moves fastest. In this situation, responsible AI will lose every time.
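One way to remove the ambiguity is to write the answers down in a form anyone can consult and no one can talk around. Here is a minimal sketch of a decision-rights map, with placeholder roles standing in for your own:

```python
# Each governance decision names exactly one accountable role, so nothing
# defaults to whoever speaks loudest or moves fastest.
DECISION_RIGHTS = {
    "approve_deployment": "Chief AI Officer",
    "trigger_governance_review": "AI Governance Lead",
    "halt_deployment": "Chief AI Officer",
    "reallocate_risk_resources": "CFO",
}

def who_decides(decision: str) -> str:
    """Look up the accountable role; fail loudly if none is defined."""
    owner = DECISION_RIGHTS.get(decision)
    if owner is None:
        raise ValueError(f"No owner defined for decision: {decision!r}")
    return owner

print(who_decides("halt_deployment"))  # -> Chief AI Officer
```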

Days 31-60: Build

In the second phase, the plan’s focus shifts to building the governance infrastructure that will sustain responsible AI over the long term.

1. Develop your ethical framework. Your ethical framework is the set of foundational principles that will guide every AI decision your organization makes, including the ones the policy hasn’t anticipated yet. It should address your commitments around fairness and nondiscrimination, your position on human oversight and the circumstances under which autonomous AI decision-making is and is not acceptable, your approach to employee impact and workforce augmentation, and your stance on the broader societal effects of AI.

2. Begin building the technical architecture. Governance policies without technical infrastructure are just words. Start putting in place the monitoring and data collection processes that your ethical framework needs to become an operational reality: the ability to track what your AI systems are doing, to detect drift and bias, and to produce the evidence your governance reviews will rely on. This work will not be complete by day 60, but the foundations need to be laid.
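As one illustration of what that monitoring involves, a common drift check is the population stability index (PSI), which compares the distribution a model sees in production against the distribution it saw at deployment. Here is a minimal sketch for numeric model scores; the 0.25 alert threshold is a widely used heuristic, not a rule from any framework cited here:

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """Measure distribution shift between a baseline and a current sample.

    Common heuristics: PSI < 0.1 is stable, 0.1-0.25 is worth watching,
    and > 0.25 signals significant drift.
    """
    # Bin edges come from the baseline so both samples are bucketed identically.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the percentages to avoid division by zero and log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # model scores at deployment
current = rng.normal(0.4, 1.2, 10_000)   # model scores this week, shifted
psi = population_stability_index(baseline, current)
if psi > 0.25:
    print(f"PSI = {psi:.2f}: drift detected, flag for governance review")
```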

3. Establish ownership and structure. If responsible AI is a side responsibility bolted onto someone’s existing role, it will always lose out to the parts of the job they are actually evaluated on. Responsible AI governance has to be owned as an intrinsic part of someone’s actual job: your organization needs a dedicated person or team with both an enterprise-wide view and the authority to enforce the relevant policies. You’ll also need people in each business unit with the responsibility and authority to turn principles into practical governance on the ground.

4. Design your assessment process. Build a structured, repeatable process for evaluating AI systems against your ethical framework. The assessment should produce a clear risk profile for each system, with defined thresholds that trigger different levels of governance review. Not every AI system needs board-level oversight, but you need a mechanism for determining which ones do, and that mechanism needs to be consistent, documented, and enforceable.
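Here is a sketch of what such a mechanism can look like. The risk factors, scores, and thresholds are placeholders; the point is that the mapping from risk profile to review tier is explicit, documented, and the same for every system:

```python
from enum import Enum

class ReviewLevel(Enum):
    STANDARD = "business-unit review"
    ENHANCED = "governance-team review"
    BOARD = "board-level oversight"

def review_level(autonomy: int, human_impact: int, reversibility: int) -> ReviewLevel:
    """Map an assessed risk profile to a review tier.

    Each factor is rated 0 (low risk) to 2 (high risk) by the assessor;
    the thresholds below are illustrative, not prescriptive.
    """
    score = autonomy + human_impact + reversibility
    if score >= 5:
        return ReviewLevel.BOARD
    if score >= 3:
        return ReviewLevel.ENHANCED
    return ReviewLevel.STANDARD

# A fully autonomous system making hard-to-reverse decisions about people:
print(review_level(autonomy=2, human_impact=2, reversibility=2).value)
# -> board-level oversight
```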

5. Realign incentives. People do what they’re rewarded for. If every incentive in your organization points to the importance of speed and cost reduction above all else, responsible AI will be treated as a source of friction—something to route around rather than a necessary part of the work. Tie a portion of leadership evaluation to responsible AI metrics: risk incidents identified and addressed, governance reviews completed, willingness to halt or modify deployments that don’t meet standards.

6. Begin reviews on your highest-risk systems. As soon as you have your ethical framework and assessment process in workable shape, run your first reviews on the systems that your risk inventory identified as the most exposed. You get two things out of this: real findings about your most urgent risks and an early read on whether the governance infrastructure actually works under pressure.

7. Build your skill development plan. Responsible AI requires capabilities most organizations do not yet have. Your leadership needs to understand AI risk well enough to govern it. Your technical teams need bias detection and human-centered design skills. Your frontline managers need to understand how AI is changing the work their teams do. Your legal and compliance teams need to understand the rapidly evolving regulatory landscape. Design a targeted development program that addresses the most critical gaps and then build its implementation into the governance cadence.

Days 61-90: Embed

In the last 30-day stretch, the focus shifts to ensuring the system survives contact with the day-to-day pressures of running an organization.

1. Build exit plans. Every AI system in your portfolio should have a defined exit pathway, documented and owned, that shows how to safely shut it down. These are the exit protocols of the CARE framework, and they must be put in place before you need them. The time to design a shutdown procedure is not in the middle of a crisis.
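Here is a minimal sketch of what a documented, owned exit pathway might look like when it lives somewhere executable rather than in a binder; the system, roles, and steps are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ExitPlan:
    """A documented exit pathway for one AI system."""
    system: str
    owner: str        # who is authorized to execute the shutdown
    kill_switch: str  # how to disable the system
    fallback: str     # the manual or legacy process that takes over
    data_steps: str   # what happens to in-flight and stored data

PLANS = {
    "loan-pre-screening": ExitPlan(
        system="loan-pre-screening",
        owner="Head of Credit Risk",
        kill_switch="set feature flag AI_PRESCREEN=off in the config service",
        fallback="route applications to the manual underwriting queue",
        data_steps="freeze model outputs; retain decision logs for audit",
    ),
}

def execute_exit(system: str) -> None:
    """Walk the documented shutdown steps; fail loudly if no plan exists."""
    plan = PLANS.get(system)
    if plan is None:
        raise RuntimeError(f"No exit plan on file for {system!r}")
    print(f"[{plan.owner}] 1) {plan.kill_switch} 2) {plan.fallback} "
          f"3) {plan.data_steps}")

execute_exit("loan-pre-screening")
```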

2. Establish the governance rhythm. Set up a regular meeting with an outline agenda for monitoring and responding to responsible AI issues. This creates a protected space on the calendar for reviewing the risk landscape, surfacing emerging issues, and assessing the health of your governance processes.

3. Embed governance into operations. Responsible AI cannot live as a separate process that runs alongside normal operations—it needs to be woven into them. Every new AI system above a defined risk threshold requires a governance review before deployment. Every existing system requires periodic reassessment. No exceptions. This is where responsible AI stops being a project and starts becoming part of how you operate.
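One way to make “no exceptions” real is to put the check in the deployment pipeline itself rather than in a policy document. A minimal sketch of such a gate, with an illustrative threshold; a real version would pull the risk score and review status from your assessment records and run in CI:

```python
def deployment_gate(system: str, risk_score: int, review_completed: bool,
                    threshold: int = 3) -> bool:
    """Block any above-threshold system that lacks a completed governance review."""
    if risk_score >= threshold and not review_completed:
        print(f"BLOCKED: {system} requires a governance review before deployment")
        return False
    print(f"OK: {system} cleared for deployment")
    return True

deployment_gate("churn-prediction", risk_score=4, review_completed=False)
# -> BLOCKED: churn-prediction requires a governance review before deployment
```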

4. Iterate. By day 90, you have live data—use it. Where are the bottlenecks? What’s working well and what isn’t? Is the culture shifting or is it stuck in place? Learn from everything you’ve done so far and feed those lessons into the next version of your governance engine.

Conclusion

Claude Mythos is not an anomaly. It’s a preview of the kind of dangerous capabilities AI models will bring with them in the future. The question is not whether your organization will be affected by AI systems of this power. It will. Rather, the question is whether you will have the governance infrastructure in place when they arrive. Any organization can take significant steps toward putting this infrastructure in place in a single quarter. There’s no excuse for not starting today.



