Shift governance left or slow AI down
AI pilots move fast until governance shows up. Teams want to accelerate AI adoption, but governance gaps often surface just as systems move toward production. In new research from Modus Create, 550 product and technology leaders share how AI is reshaping product development as it moves from experimentation to deeper integration. The findings show that data-privacy compliance remains one of the leading blockers to AI integration. Read the full report here.
AI initiatives across industries tend to follow two patterns.
First, they have strong executive urgency. AI maturity has become the ultimate innovation flex. If you aren’t doing something with AI, you’re perceived as falling behind.
Second, they stall.
In our recent research, we found that 76% of organizations had their AI deployments delayed by regulatory or ethical considerations.
It’s not that organizations don’t understand the importance of governance. Regulatory concerns often surface because governance enters the product lifecycle too late. Compliance reviews, data-privacy questions, and explainability requirements tend to arise when AI projects are already close to production.
Security teams faced a similar challenge years ago. Instead of treating security as a final checkpoint, they began embedding it directly into development through DevSecOps. In other words, they shifted security left. AI governance needs the same approach.
What it means to shift governance left
Shifting governance left means embedding oversight into product development from the very beginning rather than reviewing solutions when they are nearly ready to launch.
In many organizations, governance still enters the conversation late. Teams select foundation models, fine-tune prompts, connect internal data sources, and move toward production. Only then do compliance and legal teams review the system. Naturally, this raises questions around training data, consent, explainability, and bias. Each question introduces friction, and each point of friction introduces delay. This helps explain why ensuring compliance with data privacy regulations has emerged as the biggest obstacle organizations face when integrating AI into product development.

In practice, shifting left starts at the proof-of-concept (PoC) stage. While working on the PoC, ask these five questions:
1. Which data platforms is your company using?
The choice of data infrastructure impacts data accessibility, processing capabilities, and scalability for AI workloads. By understanding the rationale behind the architecture, you can be more confident in its ability to support AI needs.
2. What access controls have you implemented?
Granular access controls ensure that only authorized personnel can access specific data, safeguarding privacy and preventing misuse.
3. Is your data stored securely?
Protecting sensitive data is essential, especially when AI models could inadvertently expose patterns or insights derived from it.
4. Is your data infrastructure compliant with relevant regulatory frameworks?
Compliance with regulations is non-negotiable when dealing with sensitive data, and it ensures that AI development and deployment adhere to legal and ethical standards.
5. Do you have clear owners of datasets and data pipelines?
Defined ownership makes a specific team or team member accountable for your data's quality, maintenance, and accessibility.
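One lightweight way to operationalize these five questions is to encode them as a structured checklist that gates promotion beyond the PoC. The sketch below is illustrative only; the field names are our own shorthand for the questions above, not part of any Modus Create tooling.

```python
from dataclasses import dataclass

# Hypothetical sketch: the five PoC governance questions as a checklist,
# where any unanswered item blocks promotion toward production.
@dataclass
class GovernanceChecklist:
    data_platforms_documented: bool = False  # Q1: platforms and rationale known
    access_controls_in_place: bool = False   # Q2: granular access controls
    storage_encrypted: bool = False          # Q3: data stored securely
    regulations_reviewed: bool = False       # Q4: relevant frameworks reviewed
    owners_assigned: bool = False            # Q5: dataset/pipeline owners named

    def open_items(self) -> list[str]:
        """Return the questions still unanswered for this PoC."""
        return [name for name, done in vars(self).items() if not done]

    def ready_for_production(self) -> bool:
        return not self.open_items()

checklist = GovernanceChecklist(access_controls_in_place=True, owners_assigned=True)
print(checklist.ready_for_production())  # False: three items remain open
print(checklist.open_items())
```

Making the checklist an explicit artifact gives compliance and legal teams something concrete to review while the system is still cheap to change.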
When these questions are answered during the PoC, governance becomes part of system design. That early clarity prevents many of the delays that surface later during compliance and regulatory reviews.
“AI creates new opportunities to trigger actions at speed and scale, which means oversight matters more than ever. Before you deploy, be clear about data access, human decision review, exception handling, and how you’ll evaluate and monitor the quality of AI decisions over time.” — Greg Strendale, VP, Product Engineering Services, Modus Create
The 3 layers of governance
There are three key layers of AI governance, and each addresses a different kind of risk. While different teams may own different parts of this work, anyone involved in building or deploying AI systems needs to understand how all three layers shape the system.
First layer: Compliance
The first layer focuses on legal obligations. It determines what data can be used, how it can be stored, and under what conditions it can be processed. Privacy laws, consent requirements, data residency rules, and intellectual property restrictions all fall into this category.
These constraints shape AI systems from the beginning, which is why compliance questions should surface right at the start of AI initiatives.
Second layer: Regulation
The second layer concerns how you manage and control AI systems once they exist. Frameworks such as the EU AI Act, the NIST AI Risk Management Framework, and emerging ISO standards require organizations to document how models work, classify risk levels, and maintain accountability.
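A documentation record for this layer can be as simple as a structured entry per model. The sketch below follows the spirit of the EU AI Act's four-tier risk model; the example system, its assigned tier, and all names are illustrative assumptions, not legal determinations.

```python
from dataclasses import dataclass
from enum import Enum

# Four-tier risk model in the spirit of the EU AI Act; obligation
# summaries are paraphrased, not quoted from the regulation.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict documentation, testing, and human oversight"
    LIMITED = "transparency obligations, e.g. disclosing AI interaction"
    MINIMAL = "no additional obligations"

@dataclass
class ModelRecord:
    name: str
    intended_use: str
    tier: RiskTier
    owner: str  # accountable team, satisfying the accountability requirement

# Hypothetical example entry
record = ModelRecord(
    name="support-triage-v2",
    intended_use="route customer tickets to the right queue",
    tier=RiskTier.MINIMAL,
    owner="Platform ML team",
)
print(f"{record.name}: {record.tier.name} risk ({record.tier.value})")
```

Keeping records like this current is what turns a one-time compliance review into ongoing accountability.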
Third layer: Ethics
The final layer addresses risks that extend beyond legal requirements. A system may comply with privacy law and satisfy regulatory frameworks while still producing biased outputs, opaque decisions, or outcomes that undermine trust. Ethical governance focuses on issues such as fairness, explainability, responsible use, and human oversight. These concerns shape how AI systems interact with users and how decisions are presented or reviewed.
Organizations that scale AI successfully do not treat these layers separately. Compliance, regulation, and ethics operate together as part of how AI systems are designed and governed.
The hardest regulatory hurdle
61% of product leaders agree that data privacy mandates represent the most difficult regulatory hurdle in AI deployment. In other words, governance tends to break down right at the foundation.

The difficulty stems from how AI systems handle data. Models pull information from multiple internal sources such as product logs, customer records, and support transcripts, while also interacting with external models and APIs. Once connected, sensitive information can surface in prompts, logs, or model outputs.
This is why data access decisions need to happen early. Teams must define which systems AI can query, prevent sensitive information from entering prompts, and understand how external models handle the inputs they receive.
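One common early control is redacting sensitive values before a prompt ever leaves your boundary. The sketch below is a minimal illustration; the patterns and function names are our own, and a production system would use a vetted PII-detection library rather than a handful of regexes.

```python
import re

# Illustrative PII patterns; real deployments need far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII value with a labeled placeholder
    before the text is sent to an external model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Customer jane.doe@example.com (555-867-5309) reported a billing issue."
print(redact(prompt))  # Customer [EMAIL] ([PHONE]) reported a billing issue.
```

Running redaction at the boundary means the decision about what external models may see is enforced in code, not left to each prompt author.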
Governance maturity determines AI velocity
Product teams are used to moving quickly—launching MVPs, iterating based on feedback, and improving products through rapid release cycles.
That approach works well for most traditional features, but AI changes the equation. If teams launch AI capabilities without addressing data quality and governance, they often run into compliance, data-privacy, and explainability questions that slow deployment before the product has a chance to scale. In practice, governance maturity determines AI velocity. It shapes how quickly experiments become production-ready features that organizations can confidently deploy at scale.
This blog features findings from our latest report, AI in product development: A reality check, a comprehensive study of how 550 product and technology leaders are actually deploying AI in their organizations. Access the full report here.

Modus Create is a digital product engineering partner for forward-thinking businesses. Our global teams work side-by-side with clients to design, build, and scale custom solutions that achieve real results and lasting change.