Most enterprise AI strategies die in one of two places: the boardroom deck that never becomes a roadmap, or the pilot project that never becomes a product. The first failure mode is all vision with no execution. The second is all execution with no vision — shipping tools that don't connect to organizational goals, can't survive a compliance review, and don't scale past the initial team that built them.
I built Right at Home's inaugural AI strategy from scratch, and I want to share what I actually learned — not the framing that sounds good in retrospect, but the sequencing decisions that made the difference between getting products shipped and getting stuck.
Phase 1: Executive Alignment Before Everything
The most common mistake technology leaders make when launching an AI initiative is starting with the technology. You pick a model, build a demo, and then try to get buy-in. The demo impresses people for twenty minutes, and then the questions start: What's the data governance story? Who owns this? What happens when it's wrong?
I reversed that sequence. Before writing a line of code, I spent weeks developing the AI strategy document itself — and critically, I wrote it as a living artifact, not a one-time deliverable. The strategy went through multiple versions as we learned more about the organization's risk tolerance, regulatory environment, and actual user needs. Each version reflected new information: a conversation with legal, feedback from franchise owners, a shift in what leadership was willing to fund.
The framing that unlocked executive buy-in wasn't "AI is the future." It was operational leverage. Right at Home operates across 700+ franchise locations with a lean corporate team. Any technology that lets a small team support that footprint more effectively — better, faster, with less variation — is directly valuable to the business. AI wasn't interesting as technology. It was interesting as a multiplier on organizational capacity.
That reframe changed the conversation from "what's our AI strategy" (a technology question) to "where are we losing leverage today" (a business question). The second question had much better answers.
Phase 2: Governance First, Products Second
Before shipping a single AI tool to users, I established the AI Governance Council and built out the compliance policy framework. This felt counterintuitive — the pressure to ship was real, and governance felt like overhead. But it was the right call, and here's why.
The practical outputs of the governance work were a tool vetting SOP, a vendor risk assessment process, and safe-adoption guidelines for the organization. Together, these answered the question that would otherwise block every future deployment: is this tool approved?
Governance isn't a blocker — it's a permission structure. When employees want to experiment with an AI tool, the answer is no longer "we're figuring that out" or "talk to IT." It's a defined process with a defined answer. When a business unit wants to pilot a new vendor, there's a checklist. When legal asks about data handling, there's documentation.
The governance framework reduced the friction on every subsequent deployment. Products that would have faced months of ad hoc security review got through in weeks because the framework established the questions in advance and gave teams a path to answers.
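The "defined process with a defined answer" idea can be made concrete as a simple vetting record. This is a sketch only — the criteria, risk tiers, and field names below are hypothetical illustrations, not Right at Home's actual SOP:

```python
from dataclasses import dataclass, field

# Hypothetical tool-vetting record. The decision rules here illustrate the
# pattern (a checklist that yields a definite answer), not the real policy.
@dataclass
class ToolAssessment:
    tool_name: str
    vendor_risk_tier: str            # "low" | "medium" | "high" (assumed tiers)
    data_handling_reviewed: bool     # e.g., legal has signed off on data flows
    approved_use_cases: list = field(default_factory=list)

    def decision(self) -> str:
        """Every tool gets one of three definite answers — never 'we're figuring that out'."""
        if not self.data_handling_reviewed or self.vendor_risk_tier == "high":
            return "rejected"
        if self.vendor_risk_tier == "medium":
            return "conditional"     # approved only for the listed use cases
        return "approved"
```

The point of encoding it this way is that the answer is reproducible: two different reviewers running the same checklist reach the same decision, which is what gives the process legitimacy.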
One tactical detail: the Governance Council included stakeholders from legal, IT, operations, and the franchise side of the business — not just technology leadership. That cross-functional composition meant the policy framework had legitimacy across the organization, not just inside the tech team.
Phase 3: Two Products, Two Tracks
With governance in place, we moved to execution. The strategy called for two distinct products targeting two distinct user populations with two distinct problems.
The corporate track: self-service data analytics. The internal analytics platform translated natural language questions into SQL queries against our data warehouse. The target user was a corporate employee who knew the business deeply but wasn't a data analyst. They had questions — about franchise performance, operational metrics, care outcomes — and getting answers meant filing a data request and waiting for it to be fulfilled. The NL-to-SQL approach cut that wait to seconds.
The technical architecture centered on controlled schema exposure. You can't hand an LLM your entire database and expect coherent SQL — the model needs context about what tables exist, what the columns mean, and what queries are actually valid. We built a semantic layer that exposed relevant schema context to the model based on query intent, kept the surface area of the database the model could access tightly scoped, and validated generated SQL before execution. Piloting inside the corporate team before broader rollout gave us a feedback loop to tune query quality before the stakes were higher.
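The two guardrails described above — scoped schema exposure and pre-execution validation — can be sketched in a few dozen lines. Everything here is illustrative: the table names, schema format, and validation rules are hypothetical stand-ins, not the production system:

```python
import re

# Hypothetical scoped schema: only the tables and columns the model is ever shown.
ALLOWED_SCHEMA = {
    "franchise_metrics": ["location_id", "month", "revenue", "visit_count"],
    "care_outcomes": ["location_id", "month", "satisfaction_score"],
}

def schema_context(intent_tables: list[str]) -> str:
    """Render only the intent-relevant tables into the model's prompt context."""
    lines = []
    for table in intent_tables:
        cols = ALLOWED_SCHEMA[table]
        lines.append(f"TABLE {table} ({', '.join(cols)})")
    return "\n".join(lines)

# Keywords that should never appear in a read-only analytics query.
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|create|grant)\b", re.I)

def validate_sql(sql: str) -> tuple[bool, str]:
    """Reject anything that isn't a single read-only SELECT over in-scope tables."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:
        return False, "multiple statements"
    if not stmt.lower().startswith("select"):
        return False, "not a SELECT"
    if FORBIDDEN.search(stmt):
        return False, "write/DDL keyword present"
    referenced = re.findall(r"\b(?:from|join)\s+([A-Za-z_][A-Za-z0-9_]*)", stmt, re.I)
    for table in referenced:
        if table not in ALLOWED_SCHEMA:
            return False, f"table not in scope: {table}"
    return True, "ok"
```

In a real deployment the validation layer would parse the SQL properly rather than pattern-match, but the architectural point is the same: the model proposes, a deterministic gate disposes, and nothing reaches the warehouse unchecked.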
The franchise track: knowledge retrieval via M365 Copilot. The second product targeted franchise operators who needed reliable answers to operational questions — policies, procedures, compliance requirements — that were distributed across dozens of documents, wikis, and internal resources. The problem wasn't that the information didn't exist; it was that finding it reliably took too long and too often returned the wrong version.
The solution was a corporate knowledge retrieval agent deployed through Microsoft 365 Copilot, using RAG patterns on Azure AI with controlled indexing. "Controlled indexing" is the key architectural decision: not every document belongs in the retrieval corpus. We curated the index to include only authoritative, current sources — eliminating the retrieval noise that comes from indexing everything indiscriminately. A franchise operator asking about compliance requirements gets the current policy document, not a two-year-old version that happens to rank highly in an unmanaged corpus.
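The curation step is simple enough to sketch. This is a minimal illustration of the gate in front of the index — the metadata fields are assumptions for the example, not the actual document schema:

```python
from dataclasses import dataclass

# Hypothetical document metadata; in practice this would come from the
# content management system, not be hand-declared like this.
@dataclass
class Doc:
    doc_id: str          # stable identity across versions of the same policy
    title: str
    version: int
    authoritative: bool  # is this a source of truth, or a draft/copy?
    superseded: bool     # has a newer policy replaced it?

def curate_index(docs: list[Doc]) -> list[Doc]:
    """Admit only authoritative, current documents; keep one latest version per doc_id."""
    latest: dict[str, Doc] = {}
    for d in docs:
        if not d.authoritative or d.superseded:
            continue  # the retrieval noise: drafts, copies, retired policies
        prev = latest.get(d.doc_id)
        if prev is None or d.version > prev.version:
            latest[d.doc_id] = d
    return list(latest.values())
```

The design choice worth noting: dedup happens at indexing time, not retrieval time. If the two-year-old policy never enters the corpus, no amount of unlucky ranking can surface it.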
The M365 Copilot deployment channel mattered strategically. It met users where they already were instead of asking them to adopt a new tool. Adoption friction dropped significantly as a result.
Phase 4: Enablement at Scale
Shipping two products that work is not the same as shifting how an organization operates. For AI to have lasting organizational impact, the capability has to live in people, not just in systems.
I ran three parallel enablement programs: short-form training modules for broad exposure, deep-dive workshops for teams with immediate use cases, and an AI Champions cohort for cross-functional advocates.
The Champions cohort was the highest-leverage investment. The classic "train the trainers" model fails for AI because the technology changes faster than the training materials. What actually works is identifying people across the organization who are curious, credible with their peers, and motivated to experiment — then giving them structured support, early access, and a community with each other.
Champions don't just spread knowledge; they surface real use cases. The most valuable AI applications at Right at Home didn't come from the technology team — they came from franchise operators and corporate staff who understood the specific friction points in their work. The Champions program created a feedback channel from those people directly into the product roadmap.
The three-track structure also addressed a real organizational reality: different people need different depth. A brief explainer on what AI can and can't do is the right entry point for someone who uses email and spreadsheets. A hands-on workshop building prompt templates is the right entry point for someone who wants to experiment with tools. An ongoing cohort with shared challenges is the right structure for someone who wants to become a practitioner. Treating everyone as the same learner means reaching none of them well.
The Meta-Lesson: AI Strategy Is Product Management
Looking back at the full arc — alignment, governance, two products, enablement at scale — the frame that best describes what I was actually doing is product management.
An AI strategy is a product. The customer is the organization.
That means doing user research before building (what are the actual friction points?). It means shipping iteratively (the strategy document evolved through multiple versions as we learned). It means defining success metrics before you launch (what does good look like for this product?). It means treating governance not as a compliance checkbox but as a feature — the feature that gives users confidence to actually rely on the tools you build.
It also means accepting that the roadmap will change. The two products we shipped reflected what was possible and valuable given where the organization was when we started. A year from now, the roadmap looks different because the organization's AI literacy is different, the technology is different, and the problems we've solved have revealed adjacent problems worth solving.
The organizations that get the most value from AI aren't the ones that pick the best model. They're the ones that build the organizational infrastructure — governance, enablement, feedback channels, product discipline — to deploy AI reliably, improve it continuously, and scale it across a distributed network of people who actually use it.
That infrastructure doesn't build itself. Someone has to treat it like a product.