When AI Mandates Meet Reality

Why mandating AI usage without structure creates more problems than it solves


A friend and I were chatting on Discord earlier, and he mentioned that his company now mandates AI usage. Not suggests. Not encourages. Mandates. Employees must log a set number of hours per week with AI tools (specifically GitHub Copilot and Microsoft 365 Copilot), and usage is monitored.

My first question was “What happens if someone doesn’t have anything to optimize?”

Blank stare. No ideas. After a few moments, he replied, “We’re supposed to find something.”

They’re not alone. Microsoft announced in 2025 that AI usage is “no longer optional—it’s core to every role and every level,” tying it directly to performance reviews. Meanwhile, over half of US companies now encourage or mandate AI adoption. The pressure is real, the timelines are aggressive, and the structure is often nonexistent.

When you mandate tool usage without clear purpose or governance, you don’t get innovation. You get 100 well-intentioned people running in 100 different directions, each optimizing whatever they stumbled across first.

 

Let me be clear: this isn’t a knock on using AI for daily tasks. AI can be a powerful productivity booster when applied thoughtfully. The problem arises when usage is mandated without context, leading to wasted effort and real risk.

I use various tools throughout my day to streamline tasks. The key is knowing when and how to use them effectively, not using them for the sake of usage.

The Uncomfortable Gap

The AI Adoption Reality

74% of companies struggle to achieve tangible AI value
42% gap between expected and actual AI deployment
26% of workers actually used AI at work in late 2024

Sources: BCG 2024, Battery Ventures 2024, St. Louis Fed 2024

BCG’s 2024 research found that 74% of companies fail to achieve tangible AI value, while Battery Ventures reported a 42% gap between what executives expected and what actually got deployed.

That gap isn’t about technology. It’s about the messy space between “use AI” and “here’s how AI fits our work.”

When failure has real consequences, you can’t just mandate new technology. You need frameworks for when to use it, how to validate outputs, and what to do when it’s wrong. The same principle applies whether you’re processing flight data or processing expense reports.

 
Mandating AI usage without clear purpose creates the illusion of progress while teams waste time optimizing the wrong things. The real cost isn’t the hours spent using AI—it’s the opportunity cost of not solving actual problems.

What Gets Optimized (And What Doesn’t)

AI delivers value today in specific contexts: data entry and document processing, code generation for prototyping, research synthesis, content drafting that requires human refinement, meeting transcription and summarization.

Areas requiring caution include strategic decision-making (context matters more than pattern matching), complex stakeholder communication, novel problem-solving where AI excels at patterns rather than paradigm shifts, and quality assessment requiring judgment beyond rule-checking.

In tax software development, AI-assisted testing works because precision requirements are clear and rules are well-defined. But strategic decisions about which features to build? That requires understanding customer pain we can’t always articulate clearly enough for AI to process.

AI accelerates work where the problem is well-defined and the solution space is understood. It struggles when we’re still figuring out what question to ask.

The Real Risk Nobody Discusses

My friend’s company mandates AI usage; here’s what they don’t mandate: governance.

No clear policies about what can be processed through external AI services. No discussion of data residency or intellectual property exposure. No framework for when local models matter versus cloud APIs. Just a requirement to use AI and a vague notion that innovation will follow.

The Governance Gap

Legal Risk: Allowing proprietary code or customer information into training data could expose IP and violate privacy regulations.
Compliance Risk: Processing sensitive data through third-party AI services without proper controls triggers regulatory violations.
Technical Debt: 100 different approaches to the same problem create maintenance nightmares and integration failures.

Colorado, Illinois, and New York have passed laws requiring employers to notify workers when AI makes employment decisions and to audit those systems for bias. The EU AI Act creates comprehensive requirements for high-risk AI systems. Over 45 US states introduced AI-related legislation in 2024.

Yet Deloitte’s 2024 research found that 62% of leaders cite data-related challenges as their top barrier to AI adoption, particularly around access and integration. The regulatory environment is tightening while organizational readiness lags behind.

Every external integration needs abstraction layers and monitoring. When partner APIs change without warning, systems that depend directly on them fail. The same principle applies to AI services. If you’re processing customer data through Claude or Copilot without clear policies about data handling, you’re building technical debt and legal exposure simultaneously.
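To make the abstraction-layer idea concrete, here’s a minimal sketch of one way to put a thin gateway between your code and external AI services. Everything in it is illustrative: `AIGateway`, the provider names, and the `DataClass` labels are hypothetical, not any vendor’s actual API.

```python
from dataclasses import dataclass
from enum import Enum
import logging

logger = logging.getLogger("ai_gateway")

class DataClass(Enum):
    PUBLIC = "public"          # marketing copy, public docs
    INTERNAL = "internal"      # internal docs, non-sensitive code
    RESTRICTED = "restricted"  # customer PII, proprietary algorithms

# Which data classes each provider is approved to receive.
# These mappings are illustrative; a real policy comes from legal/compliance.
APPROVED = {
    "cloud_llm": {DataClass.PUBLIC, DataClass.INTERNAL},
    "local_llm": {DataClass.PUBLIC, DataClass.INTERNAL, DataClass.RESTRICTED},
}

@dataclass
class CompletionRequest:
    prompt: str
    data_class: DataClass

class AIGateway:
    """Single choke point for AI calls: policy check, logging, provider swap."""

    def __init__(self, providers: dict):
        # providers maps a name to any object exposing complete(prompt) -> str
        self.providers = providers

    def complete(self, provider_name: str, request: CompletionRequest) -> str:
        allowed = APPROVED.get(provider_name, set())
        if request.data_class not in allowed:
            raise PermissionError(
                f"{request.data_class.value} data may not be sent to {provider_name}"
            )
        logger.info("AI call: provider=%s data_class=%s",
                    provider_name, request.data_class.value)
        return self.providers[provider_name].complete(request.prompt)
```

The point isn’t this particular shape. It’s that every call routes through one place where policy, logging, and provider choice live, so swapping vendors or tightening rules doesn’t mean hunting through 100 different call sites.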

 
Before mandating AI usage, ask: What problems are we solving? Which data can be processed externally? Who owns the governance? How do we measure actual value versus activity?

What Structured Adoption Looks Like

Companies that succeed with AI at scale share common patterns. They’re not mandating hours of usage—they’re creating frameworks for value.

BCG found that AI leaders follow a 10-20-70 rule: 10% of resources into algorithms, 20% into technology and data, 70% into people and processes. They pursue roughly half as many AI opportunities as less advanced peers, but they successfully scale more than twice as many products. The focus isn’t breadth—it’s depth with clear business outcomes.

Specific problems with measurable targets. Not “use AI to improve productivity” but “reduce invoice processing time from 3 days to 4 hours by automating data extraction and validation.”

Clear governance before scaling. What data can go where? Who approves new tools? How do we measure success? What’s our approach to model selection and vendor dependencies?

Centers of excellence sharing learnings and building reusable patterns. Not everyone doing their own thing simultaneously.

Standard approaches to common problems beat 100 creative solutions that can’t integrate. Build for sustainability from the start.

Measure outcomes, not activity. Hours spent using AI means nothing. Time saved, errors reduced, decisions accelerated—those matter.
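As a rough illustration of that difference, here’s a small sketch of what tracking outcomes for the invoice example above might look like. The class, field names, and numbers are all hypothetical; the only point is that usage hours appear nowhere in it.

```python
from dataclasses import dataclass

@dataclass
class InitiativeMetrics:
    """Outcome metrics for one AI initiative; all figures are illustrative."""
    name: str
    baseline_hours_per_task: float   # before AI assistance
    current_hours_per_task: float    # after AI assistance
    baseline_error_rate: float       # defects per 100 tasks, before
    current_error_rate: float        # defects per 100 tasks, after

    def hours_saved_per_task(self) -> float:
        return self.baseline_hours_per_task - self.current_hours_per_task

    def error_reduction(self) -> float:
        return self.baseline_error_rate - self.current_error_rate

invoice_extraction = InitiativeMetrics(
    name="invoice data extraction",
    baseline_hours_per_task=2.5,
    current_hours_per_task=0.5,
    baseline_error_rate=4.0,
    current_error_rate=1.5,
)
print(invoice_extraction.hours_saved_per_task())  # 2.0
print(invoice_extraction.error_reduction())       # 2.5
```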

Mandating Value, Not Activity

AI has tremendous potential for automation, brainstorming, research, prototyping, and testing. But potential isn’t value. Value comes from applying AI to well-defined problems with clear success criteria and proper governance.

Mandating AI usage without that structure is like mandating that everyone learn to code without explaining what software to build. You get activity. You get experimentation. You might even get some useful innovations. But you also get legal exposure from ungoverned data processing, technical debt from 100 incompatible approaches, wasted effort optimizing things that don’t need optimizing, and the illusion of progress while actual problems remain unsolved.

I’m still figuring out the right balance between enabling AI experimentation and preventing chaos. But I’m confident about this: individual AI literacy matters, but organizational AI governance matters more. 100 people running in different directions doesn’t become innovation just because they’re all using AI to get there faster.

If you’re working through similar challenges with AI adoption and governance, I’ve written about building frameworks that enable rather than constrain and when to build versus buy AI capabilities.

What’s on your team’s backlog right now that you’re building because AI makes it possible rather than because customers need it?