The missing middle: Why your AI strategy is failing
In most enterprises, AI investment follows two distinct paths. Leadership funds big, governed projects – a customer service bot, a contract review assistant – complete with evaluation sets and change management. Meanwhile, employees get ChatGPT or Copilot access to help with writing and questions. Both are useful.
But they represent two extremes, and between them sits the biggest opportunity: empowering people across the organisation to build AI agents that connect to real systems and automate real work, without turning every idea into a six-month central IT project.
That's what's missing.
The shift to action-taking AI
What makes this urgent is that AI assistants are rapidly moving beyond simply reading information. ChatGPT recently added support for custom MCP connectors, enabling agents to automate multi-step workflows across systems. Microsoft is turning Copilot Studio into a platform for building agent flows.
This isn't just better search anymore. These tools are starting to act on your behalf – creating tickets, updating records, triggering workflows. It's easy to imagine someone saying: "At 9am every weekday, read my inbox. For emails from engineering, create a JIRA ticket. For anything mentioning a client, add a note to our CRM and send me a summary". And here's the crucial shift: that person can say exactly that – in plain language – and the AI can actually do it. No Python required. No ticket filed with IT.
The capabilities are arriving fast. But is your organisation ready to capture the value safely?
The two extremes
The centralised approach means leadership identifies strategic use cases: customer service bot, contract review assistant, compliance checker. These get proper funding, evaluation sets, risk reviews, change management. The work is important and carefully executed. But it's slow, selective, and misses most of the value hiding in the organisation.
The lightweight enablement approach means rolling out ChatGPT or Copilot access so people can paste content, ask questions, get help drafting. It's simple, safe, useful. But the investment stops there – no connectors to internal systems, no support for building agents that actually do things. Most organisations haven't invested in anything between these extremes.
There's no path for teams to build system-connected agents without requiring them to become full-scale enterprise projects.
The missing middle
This third path means deliberate effort to make it easy for individuals and teams to build agents that talk to real systems, to do this safely and observably, and to avoid the false choice between a central IT project and a personal toy.
The highest-value ideas often live with people closest to the work: the sales ops person who knows which spreadsheets break, the onboarding coordinator who hates manually copying data between systems, the finance manager clearing critical exceptions by hand.
These people have specific, high-ROI ideas for automations. They're not waiting for the CIO to discover them in a whiteboard session.
If you only focus on centrally selected agents, you miss most of that value. If you invest in the middle – making systems legible to AI assistants, providing safe tools, and supporting teams as they design their own solutions with AI – you turn the whole organisation into a distributed R&D lab for AI workflows.
Building the foundation
What does 'making systems legible to AI assistants' actually mean? It means building a tool fabric: the layer connecting your data and applications to assistants and agents. This is where MCP servers come in. MCP (Model Context Protocol) is an open standard that lets you wrap your systems – CRM, ticketing, ERP, databases, legacy apps – and expose them as clear, task-level actions: 'Create support ticket', 'Get customer balance', 'Submit time off request'.
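To make the idea of task-level actions concrete, here is a minimal, stdlib-only Python sketch. A real deployment would register these functions with an MCP server via the official MCP SDK; the plain registry below stands in for that so the example runs on its own, and all system and function names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# Registry standing in for an MCP server's tool list.
TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function as a named, task-level action an assistant can call."""
    TOOLS[fn.__name__] = fn
    return fn

@dataclass
class FakeTicketSystem:
    """Stand-in for a legacy ticketing system."""
    tickets: list = field(default_factory=list)

legacy = FakeTicketSystem()

@tool
def create_support_ticket(summary: str, priority: str = "normal") -> dict:
    """Create a support ticket and return it, including its new id."""
    ticket = {"id": len(legacy.tickets) + 1, "summary": summary, "priority": priority}
    legacy.tickets.append(ticket)
    return ticket

@tool
def get_customer_balance(customer_id: str) -> float:
    """Look up a customer's balance (stubbed with fixed data here)."""
    return {"acme": 1250.0}.get(customer_id, 0.0)

# An assistant discovers tools by name and calls them with plain arguments:
result = TOOLS["create_support_ticket"](summary="Printer offline", priority="high")
```

The point of the pattern is the granularity: the assistant never sees raw database tables or legacy APIs, only clearly named actions with obvious parameters.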
The valuable thing about MCP servers is that they work everywhere. Once you've built an MCP server for your ticketing system, it's available to general assistants like ChatGPT, Copilot, and Claude. But the same server is also available when you're building more formal agents in platforms like Copilot Studio or other agent builders. You build the integration once, and it becomes reusable infrastructure.
For older systems that aren't going anywhere, this offers a clean path forward without replacing them. You're putting a modern interface in front of legacy tools that any assistant or agent can use. For this fabric to be useful, you need reasonably clean data, consistent concepts across tools, understandable errors, and a simple directory showing which tools exist and what they do.
Guardrails that don't kill experimentation
Once assistants can act in real systems, security and governance become critical. The challenge is implementing them without shutting down experimentation.
Start generous in low-risk environments: sandboxes, read-only tools, test data. Define simple, clear rules: personal agents may only touch these systems with these actions, and actions that delete data, move money, or send external messages always require human confirmation. Prefer visibility over micromanagement: give teams the ability to see what agents did rather than pre-approving every action. When an individual automation proves useful, there should be a lightweight path to attach test cases and promote it to production systems.
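The confirmation-plus-audit-log pattern described above can be sketched in a few lines. This is an illustration, not a reference implementation: the action names and risk tiers are assumptions, and a real system would enforce this at the tool-fabric layer rather than in application code.

```python
from datetime import datetime, timezone

# Actions that always require a human in the loop (illustrative names).
CONFIRM_REQUIRED = {"delete_record", "send_external_email", "transfer_funds"}

# Append-only log so teams can see what agents did, after the fact.
AUDIT_LOG: list[dict] = []

def run_action(name: str, confirmed: bool = False, **kwargs) -> dict:
    """Execute an agent action, enforcing the human-confirmation rule."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if name in CONFIRM_REQUIRED and not confirmed:
        AUDIT_LOG.append({"action": name, "status": "blocked", "at": timestamp})
        return {"status": "needs_human_confirmation", "action": name}
    AUDIT_LOG.append({"action": name, "status": "executed",
                      "args": kwargs, "at": timestamp})
    return {"status": "executed", "action": name}

# A read-only lookup runs freely; a delete is held for a human:
safe = run_action("get_customer_balance", customer_id="acme")     # executed
risky = run_action("delete_record", record_id="42")               # held
```

Note the design choice: the gate sits in one chokepoint (`run_action`), so adding a new tool doesn't require re-implementing the policy.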
Practical moves to get ready
1. Choose your main assistants - Focus on which general assistants (ChatGPT, Copilot, Claude) you'll support first so your tooling work isn't spread thin.
2. Identify high-value systems - Pick three to five systems where better automation would help: CRM, ticketing, finance, internal knowledge.
3. Start building MCP servers - Wrap your key systems with MCP servers that expose clear, task-level actions your teams actually need.
4. Design a simple permission model - Decide which tools are read-only versus write, which actions always require human confirmation.
5. Run a 'team agent pilot' - Invite teams hungry for change. Give them access to new tools through AI assistants. Pair them with an AI coach (not a developer) who can help them articulate clear instructions and spot patterns worth scaling. In doing so, you're empowering those employees to create their own AI solutions.
6. Create a lightweight internal catalogue - List your tools with human-friendly descriptions and example prompts.
7. Define how experiments become shared - When something adds value, there should be a known path to add tests, assign an owner, and move it into a shared catalogue.

This isn't a multi-year transformation. It's small, concrete steps that together build the fabric your future agents will rely on.
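As a concrete illustration of steps 6 and 7, a catalogue entry can be as simple as a record with a description, example prompts, an owner, and a maturity status. The field names and statuses below are assumptions, not a prescribed schema.

```python
# Hypothetical internal tool catalogue: human-friendly descriptions,
# example prompts, and a status that tracks graduation from experiment
# to shared infrastructure.
CATALOGUE = [
    {
        "tool": "create_support_ticket",
        "description": "Open a ticket in the ticketing system.",
        "example_prompts": ["Raise a high-priority ticket: the printer is offline."],
        "owner": "it-service-desk",
        "status": "shared",       # experiment -> piloted -> shared
        "has_tests": True,
    },
    {
        "tool": "summarise_client_emails",
        "description": "Summarise this week's client emails into one note.",
        "example_prompts": ["Summarise client emails from this week."],
        "owner": None,            # still a personal experiment
        "status": "experiment",
        "has_tests": False,
    },
]

def shared_tools(catalogue: list[dict]) -> list[str]:
    """Tools that have graduated: owned, tested, and shared."""
    return [entry["tool"] for entry in catalogue
            if entry["status"] == "shared" and entry["has_tests"]]
```

The graduation rule in `shared_tools` is the lightweight path in step 7: an experiment becomes shared only once it has tests and an owner.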
You can't predict where the value will come from. But you can build the conditions for people to find it.