Deploying AI Governance at Scale: Lessons from 700+ Franchise Locations

June 1, 2025 · 10 min read

When I joined Right at Home as AI Product & Solutions Lead, the organization had no AI strategy, no governance framework, and no standardized approach to evaluating or deploying AI tools. What it did have was 700+ franchise locations, a healthcare-adjacent operating environment with strict compliance requirements, and a growing wave of franchisees already experimenting with ChatGPT on their own.

The gap between "people are using AI" and "people are using AI safely and effectively" was the entire job. Here's how I built a governance framework from zero — and what I learned about scaling AI literacy across a franchise network.

Starting with Strategy, Not Tools

The first instinct in any AI initiative is to start building. Pick a use case, spin up an API key, ship a prototype. I've done that — and it works for startups. But in a franchise network serving vulnerable populations, starting with tools before strategy is how you end up with a compliance incident and a board-level conversation you don't want to have.

I began by authoring the organization's inaugural AI strategy document. This wasn't a deck with buzzwords — it was a structured narrative memo covering vision, principles, a phased roadmap, and an explicit execution model that separated corporate-track initiatives from franchise-track enablement. The strategy went through multiple iterations with executive stakeholders before becoming the operating blueprint.

The key insight: strategy documents aren't artifacts to file away. They're alignment tools. Every decision I made over the following months — which tools to vet, which products to build, which training to prioritize — traced back to the principles established in that document. When stakeholders questioned priorities, the strategy was the reference point, not my opinion.

Establishing the AI Governance Council

Governance without authority is theater. I established an AI Governance Council — a cross-functional body with representatives from IT, legal, compliance, operations, and franchise support. The council's mandate was simple: no AI tool touches production data or franchisee workflows without council review.

This wasn't bureaucracy for its own sake. In healthcare-adjacent environments, the risk surface for AI is genuinely different. A hallucinating chatbot in e-commerce means a bad product recommendation. A hallucinating chatbot advising on care scheduling or employee management could create liability exposure across hundreds of locations.

The council met on a regular cadence, reviewed submissions from the AI opportunity pipeline, and made go/no-go decisions based on a standardized scoring framework. This created a forcing function: teams couldn't just adopt tools ad hoc, but they also had a clear, predictable path to getting tools approved.

The Tool Vetting SOP

The centerpiece of the governance framework was a standardized tool-vetting SOP (Standard Operating Procedure). Every AI tool — whether a vendor product, an internal prototype, or a franchisee request — went through the same evaluation process.

The SOP covered five dimensions:

Security Posture: Where does the data go? Is it encrypted in transit and at rest? Does the vendor have SOC 2 Type II? What's their incident response process? For healthcare-adjacent use cases, these aren't nice-to-haves — they're table stakes.

Business Associate Agreements (BAAs): Any tool that might touch protected health information or sensitive employee data required a BAA review. This eliminated a surprising number of "just use this AI tool" requests early in the funnel — many vendors either couldn't provide a BAA or their terms were incompatible with our compliance posture.

Data Flow Mapping: We required a complete data flow diagram for every tool showing exactly what data enters the system, where it's processed, whether it's used for model training, and how it's retained or deleted. This was the single most effective filter. Vendors who couldn't articulate their data flow clearly were typically vendors we didn't want processing our data.

Integration Assessment: How does the tool connect to existing systems? Does it require API access to production databases? Can it operate in a sandboxed environment? Integration complexity was a proxy for risk surface — the more deeply a tool needed to integrate, the more scrutiny it received.

Cost Analysis: Total cost of ownership including licensing, integration labor, ongoing maintenance, and the opportunity cost of the team time required to support it. I brought my finance background to bear here — every tool needed a simple ROI model before it could advance past evaluation.
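To make the shape of the SOP concrete, here is a minimal sketch of the five dimensions as a scored checklist with a hard compliance gate. The dimension names come from the process above; the dataclass, 1–5 scale, weights, and threshold are illustrative assumptions, not the actual internal tooling.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the five-dimension vetting SOP.
# Dimension names mirror the SOP; scores and thresholds are invented.

DIMENSIONS = [
    "security_posture",
    "baa_review",
    "data_flow_mapping",
    "integration_assessment",
    "cost_analysis",
]

@dataclass
class VettingSubmission:
    tool_name: str
    scores: dict = field(default_factory=dict)  # dimension -> 1..5 score
    touches_phi: bool = False                   # PHI or sensitive employee data in scope?
    baa_available: bool = False                 # hard gate when touches_phi is True

def evaluate(sub: VettingSubmission, threshold: float = 3.5) -> tuple[str, float]:
    """Return (decision, average score). BAA is a hard gate, not a score."""
    if sub.touches_phi and not sub.baa_available:
        return ("reject", 0.0)
    if any(d not in sub.scores for d in DIMENSIONS):
        return ("needs_info", 0.0)
    avg = sum(sub.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return ("advance" if avg >= threshold else "reject", avg)
```

The key design point is that the BAA check short-circuits before any scoring happens — matching the observation that BAA review eliminated many requests early in the funnel.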

Vendor Risk in Healthcare-Adjacent AI

Working in a healthcare-adjacent environment taught me that vendor risk assessment for AI tools is fundamentally different from traditional SaaS procurement. Three patterns emerged that shaped our approach:

Training data opt-out is non-negotiable. Many AI vendors default to using customer data for model improvement. In our environment, this was an automatic disqualifier unless the vendor provided contractual guarantees of data isolation. We developed specific contract language that went beyond standard terms of service.

Model versioning matters for compliance. When a vendor updates their underlying model, the behavior of your deployed tool changes. We required vendors to provide advance notice of model changes and maintain the ability to pin to specific model versions during compliance review periods.

Explainability requirements scale with risk. For low-risk use cases like content drafting, we accepted black-box models. For anything touching scheduling, resource allocation, or decision support, we required the vendor to demonstrate how outputs were generated and what guardrails prevented harmful recommendations.
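The three patterns above can be collapsed into a single risk-tiering table. This is an illustrative sketch — the tier names, notice periods, and requirement flags are assumptions chosen to match the examples in the text, not the contract language we actually used.

```python
# Hypothetical risk tiers: explainability, version pinning, and change-notice
# requirements scale with the use case, per the patterns described above.

RISK_TIERS = {
    "low": {      # e.g. content drafting
        "black_box_ok": True,
        "model_pinning_required": False,
        "advance_notice_days": 0,
    },
    "high": {     # e.g. scheduling, resource allocation, decision support
        "black_box_ok": False,
        "model_pinning_required": True,
        "advance_notice_days": 30,
    },
}

def requirements_for(tier: str) -> dict:
    """Look up vendor requirements for a given use-case risk tier."""
    return RISK_TIERS[tier]
```

Encoding the tiers as data rather than prose made it easier to hand the same requirements to legal (for contract language) and to the council (for review checklists).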

Three Enablement Programs

Governance without enablement creates a culture of "AI is banned." That's the opposite of what we wanted. The goal was to make franchisees and corporate staff competent and confident AI users — within the guardrails we'd established.

I designed and led three parallel enablement programs:

AI Bytes: Short-Form Training Modules

Bite-sized training content — 5 to 10 minutes each — covering specific AI concepts and approved tool workflows. These were designed for franchise owners and operators who didn't have time for deep technical training but needed to understand what AI could and couldn't do. Topics ranged from "What is a Large Language Model?" to "How to Write Effective Prompts for Care Plan Summaries."

The key design principle was immediacy of application. Every module ended with a specific action the learner could take that day with an approved tool. Theory without practice doesn't stick, especially in an operational environment where people are managing caregivers, clients, and compliance simultaneously.

Power Hours: Deep-Dive Workshops

Monthly interactive sessions that went deeper on specific AI topics. These were structured as workshops, not lectures — attendees worked through real scenarios with real tools. Topics included prompt engineering for operational use cases, understanding AI output quality, and recognizing when AI-generated content needs human review.

Power Hours served a dual purpose: education and feedback collection. Every session surfaced new use cases, new concerns, and new questions that fed back into the governance framework and the AI opportunity pipeline.

AI Champions Cohort

The most intensive program: a cross-functional cohort of high-potential team members who received advanced AI training and became the distributed AI expertise layer across the organization. Champions were embedded in different departments and franchise regions, serving as the first point of contact for AI questions and the primary conduit for surfacing new AI opportunities.

This cohort model solved a scaling problem. I couldn't personally support 700+ franchise locations. But I could train 20 Champions who collectively covered the organization's footprint and could translate between corporate AI strategy and local operational reality.

The AI Opportunity Pipeline

One of the most impactful artifacts wasn't a product — it was a process. I created a standardized AI opportunity pipeline with structured intake, scoring, lifecycle tracking, and executive-ready prioritization materials.

Every AI idea — whether from a franchise owner, a corporate team member, or the governance council itself — entered the pipeline as a structured submission. Each opportunity was scored across dimensions including business impact, technical feasibility, compliance risk, and alignment with strategic priorities.

The pipeline served three purposes:

  1. Demand management: It gave everyone a clear place to submit ideas, which reduced ad hoc requests and "can we just try this tool?" conversations.
  2. Portfolio governance: Leadership could see the full landscape of AI initiatives at any time — what was in evaluation, what was approved, what was in development, what was deployed.
  3. Prioritization transparency: When someone asked "why aren't we doing X?", the pipeline provided a data-driven answer rooted in the scoring framework, not politics.
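A weighted-average score is one simple way to implement the scoring framework described above. The four dimensions come from the text; the specific weights and the 1–5 scale are assumptions for illustration.

```python
# Illustrative pipeline scoring. Note compliance_risk is scored so that
# LOWER risk earns a HIGHER score, keeping all dimensions "bigger is better".

WEIGHTS = {
    "business_impact": 0.35,
    "technical_feasibility": 0.25,
    "compliance_risk": 0.25,
    "strategic_alignment": 0.15,
}

def priority_score(scores: dict[str, float]) -> float:
    """Weighted average on a 1-5 scale; KeyError if a dimension is missing."""
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

# A high-impact, low-risk, moderately aligned idea:
# priority_score({"business_impact": 5, "technical_feasibility": 4,
#                 "compliance_risk": 4, "strategic_alignment": 3})  # -> 4.2
```

Because the weights are explicit, "why aren't we doing X?" has a numeric answer that anyone can recompute — which is what made the prioritization feel transparent rather than political.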

Architecture Decisions: RAG on Azure

For the products we built internally, I produced enterprise AI architecture blueprints using Retrieval-Augmented Generation patterns on Azure. The architecture decisions were driven by three constraints:

Data residency: All data had to remain within our Azure tenant. This eliminated most third-party vector database options and led us to implement controlled indexing within Azure AI Search.

Grounding over generation: In a healthcare-adjacent environment, we couldn't afford hallucination. Every AI-generated response had to be grounded in verified source documents. We implemented strict retrieval boundaries — the model could only reference documents that had been explicitly indexed and approved through the governance process.

Auditability: Every query, every retrieval, and every generated response was logged. This wasn't just for compliance — it was the foundation for continuous quality improvement. We could identify patterns in queries that returned poor results and iteratively improve the knowledge base.
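The three constraints combine into one pattern: retrieve only from an approved index, refuse when nothing is retrieved, and log everything. The sketch below is a toy stand-in — the in-memory index and keyword retriever substitute for Azure AI Search, and the model call is stubbed out entirely.

```python
import datetime

# Toy sketch of grounding + auditability. APPROVED_INDEX stands in for the
# governance-approved Azure AI Search index; the "answer" is a stub for the
# grounded model call. Document IDs and contents are invented.

APPROVED_INDEX = {
    "policy-101": "PTO requests must be submitted 14 days in advance.",
}

def retrieve(query: str) -> list[tuple[str, str]]:
    """Return matches ONLY from the approved index (strict retrieval boundary)."""
    words = query.lower().split()
    return [(doc_id, text) for doc_id, text in APPROVED_INDEX.items()
            if any(w in text.lower() for w in words)]

def answer(query: str, audit_log: list) -> str:
    hits = retrieve(query)
    if not hits:
        # No grounding available -> refuse rather than generate.
        response = "No approved source found; routing to a human."
    else:
        # A real system would pass `hits` to the model as grounding context.
        response = f"Per {hits[0][0]}: {hits[0][1]}"
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "query": query,
        "retrieved": [doc_id for doc_id, _ in hits],
        "response": response,
    })
    return response
```

The audit record captures the query, the exact documents retrieved, and the response — which is what lets you later find query patterns that returned poor results and fix the knowledge base rather than the model.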

The first product — a self-service data analytics platform using natural language to SQL — went from pilot to corporate rollout. The second — a corporate knowledge retrieval agent deployed via Microsoft 365 Copilot — gave staff instant access to policy documents, procedures, and operational guidelines without digging through SharePoint.
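For a product like the NL-to-SQL platform, one essential guardrail is validating that generated SQL is read-only before it ever touches a database. This is a hedged sketch of that idea, not the actual production validator; the keyword list and single-statement rule are simplifying assumptions.

```python
import re

# Hypothetical read-only gate for model-generated SQL: accept a single
# SELECT statement, reject anything that could mutate schema or data.

FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|truncate|grant)\b", re.I)

def validate_sql(sql: str) -> bool:
    """Return True only for a single, non-mutating SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                          # reject stacked statements
        return False
    if not stripped.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(stripped)
```

A real deployment would also run the query under a read-only database role, so the validator is defense in depth rather than the only line of protection.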

Lessons Learned

Governance is a product, not a policy. Treat your governance framework like a product with users (the people submitting tools for review), a backlog (process improvements), and metrics (time to decision, approval rates, incident rates). If governance feels like a blocker instead of a service, you've built it wrong.

Start with the franchise owner's reality. Every governance decision and enablement program was designed around the constraint that franchise owners are running businesses, not studying AI. If a policy required more than two steps to comply with, it wouldn't be followed. If training took more than 10 minutes, it wouldn't be completed. Design for the user, not the ideal.

The strategy document is alive. I iterated the AI strategy through multiple versions as we learned what worked and what didn't. The version that existed six months in looked materially different from the launch version — not because the vision changed, but because the execution model adapted to reality.

Vendor risk is ongoing, not one-time. AI vendors change their models, their terms of service, and their data practices regularly. We built vendor monitoring into the governance cadence — not just initial assessment, but ongoing review of approved tools.

Outcomes

Over the course of my tenure, the governance framework enabled:

  • Two AI products shipped to production — NL-to-SQL analytics platform and M365 Copilot knowledge agent
  • A complete governance framework — strategy document, governance council, tool vetting SOP, vendor risk process, and opportunity pipeline
  • Three enablement programs running concurrently — AI Bytes, Power Hours, and AI Champions — scaling AI literacy across 700+ franchise locations
  • Zero compliance incidents from AI tool adoption — every deployed tool passed through the full vetting process

The biggest lesson: in enterprise AI, the governance framework is the product. The individual AI tools come and go. The models improve. The vendors consolidate. But the organizational capability to evaluate, adopt, and govern AI safely — that's the durable competitive advantage. Building that capability from scratch at a 700+ location franchise network was the hardest and most rewarding product work I've done.