Balancing semantic model governance with self-service BI: centralized vs federated approaches

Our organization is struggling with the tension between semantic model governance and self-service BI enablement. We’ve seen massive model duplication - currently tracking 180+ semantic models in Power BI Service, many of which are slight variations of the same core business logic (Sales, Finance, HR). This creates inconsistent metrics across departments (three different definitions of “Active Customer”, for example) and makes change management nearly impossible. We’re debating between a fully centralized approach (single certified semantic model per domain with strict change control) and a federated model (departmental ownership with loose governance standards). Has anyone successfully implemented a hybrid approach that maintains data quality while still empowering business users? What governance frameworks and technical patterns have worked in enterprise environments?

Having three different definitions of Active Customer is a classic problem. We solved it by creating a metrics repository - essentially a Power BI semantic model that contains nothing but standardized measure definitions. Business users connect to this repository and reuse the measures they need, which keeps results consistent. It’s implemented as a calculation group in a thin semantic model. Users can still create their own measures, but the certified ones are readily available and clearly marked. We also publish a data dictionary in SharePoint that documents every certified metric with business definitions, calculation logic, and ownership.
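For illustration, a single certified definition might look like the following DAX. The table names and the trailing-12-month rule are assumptions here - substitute whatever definition your business signs off on:

```dax
-- Certified "Active Customers": distinct customers with at least one
-- purchase in the trailing 12 months (hypothetical Sales / Date tables)
Active Customers :=
CALCULATE (
    DISTINCTCOUNT ( Sales[CustomerKey] ),
    DATESINPERIOD ( 'Date'[Date], MAX ( 'Date'[Date] ), -12, MONTH )
)
```

Publishing one blessed expression like this is what lets three departmental variants collapse into a single definition.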

We implemented a “hub and spoke” model that works well. Central IT maintains certified core semantic models with standardized metrics and dimensions. These are marked as “Certified” and promoted in the Power BI Service. Business units can then create thin report layers or extend the core models with department-specific calculations, but they can’t modify the underlying certified metrics. This gives you consistency on core KPIs while allowing flexibility for specialized analysis. The key is clear ownership boundaries and a robust endorsement workflow.

Model duplication is often a symptom of inadequate semantic layer design. Instead of 180 models, you probably need 10-15 well-designed domain models (Sales, Finance, HR, Operations, etc.) with proper row-level security and perspective-based views. The challenge is getting stakeholders to agree on canonical definitions. We use a data governance council with representatives from each business unit to define and approve metric definitions. Once approved, these go into the certified models and become the single source of truth. Any deviation requires formal justification and council approval.
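For the row-level security piece of a consolidated domain model, one common pattern is a mapping-table filter defined on the role. A sketch, assuming a hypothetical SecurityMapping table of user principal names to regions:

```dax
-- RLS filter expression defined on the Sales table within a role;
-- SecurityMapping (UPN to RegionKey) is an assumed mapping table
Sales[RegionKey]
    IN CALCULATETABLE (
        VALUES ( SecurityMapping[RegionKey] ),
        SecurityMapping[UserPrincipalName] = USERPRINCIPALNAME ()
    )
```

Driving security from a mapping table means one shared model can serve every department without cloning it per audience.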

Based on your situation with 180+ models and inconsistent metrics, here’s a comprehensive governance framework that balances control with enablement:

Semantic Layer Architecture with Shared Models: Implement a three-tier architecture. Tier 1: Core certified semantic models (10-15 domain models) maintained by central IT with standardized dimensions and measures, connecting directly to your data warehouse. Tier 2: Departmental composite models that live-connect to Tier 1 and add department-specific calculations without duplicating base data. Tier 3: Personal models for ad-hoc analysis, clearly marked as non-certified. This prevents metric inconsistency while enabling self-service: your three Active Customer definitions get consolidated into one certified measure in the core Sales model.
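A Tier 2 model can stay measure-only over the live connection. For example (names are hypothetical, and [Total Sales] is assumed to be a certified measure in the Tier 1 Sales model):

```dax
-- Departmental (Tier 2) measure layered on the certified model;
-- it reuses the certified [Total Sales] rather than redefining it
Marketing Attributed Sales :=
CALCULATE ( [Total Sales], Sales[Channel] = "Campaign" )
```

Because the base measure is referenced, not copied, a change to the certified definition flows through to every departmental extension automatically.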

Model Certification and Endorsement Workflows: Establish a formal certification process with three endorsement levels: Certified (enterprise-wide use, IT maintained), Promoted (department-approved, domain steward maintained), and None (personal use only). Certification requires a semantic layer review, performance testing, security validation, and data governance council approval. Use Power BI’s built-in endorsement features to visually distinguish models in the Service. Create a certification checklist covering naming conventions, documentation standards, RLS implementation, and measure definitions. Certified models are then prioritized in organization-wide search, so users find the trusted version first.

TMDL Git Integration for Lifecycle Management: Implement TMDL (Tabular Model Definition Language) with Git-based version control for all certified and promoted models. Store model definitions in Azure DevOps or GitHub with branch protection on main. Changes flow through development → test → production branches with automated deployment pipelines. This provides: complete change history, ability to review and approve changes via pull requests, automated testing of model changes, and rollback capabilities. Business users can propose changes by submitting PRs, but merges require data steward approval. This technical rigor prevents unauthorized changes to certified models while maintaining transparency.
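What makes pull-request review practical is that TMDL is plain text. A hypothetical certified measure inside a table’s .tmdl file might look like this (the expression and metadata are illustrative):

```tmdl
/// Distinct customers with at least one purchase in the trailing
/// 12 months. Certified by the Data Governance Council.
measure 'Active Customers' =
		CALCULATE (
			DISTINCTCOUNT ( Sales[CustomerKey] ),
			DATESINPERIOD ( 'Date'[Date], MAX ( 'Date'[Date] ), -12, MONTH )
		)
	formatString: #,0
	displayFolder: Certified Measures
```

A reviewer can diff the expression and its metadata line by line, which is exactly the level of scrutiny a change to a certified model needs.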

Center of Excellence Governance Structure: Establish a federated CoE with clear roles: Central IT provides platform governance, standards, and infrastructure. Domain Stewards (one per business domain like Sales, Finance, HR) manage domain-specific models and act as liaisons. Power Users in each department can create departmental models but must follow standards. Data Governance Council (cross-functional) approves metric definitions and resolves conflicts. This structure scales by distributing decision-making while maintaining standards. Conduct quarterly governance reviews to identify model duplication, consolidate redundant models, and update standards based on lessons learned.

Naming Conventions and AI-Readiness Standards: Implement comprehensive naming standards that encode governance information: [SCOPE]_[DOMAIN]_[PURPOSE]_[STATUS], where SCOPE is CORP/DEPT/USER, DOMAIN is the business area, PURPOSE describes the use case, and STATUS is Certified/Promoted/Draft. Example: CORP_Sales_Analysis_Certified versus USER_Sales_Experiment_Draft. This makes the governance level immediately visible. For AI-readiness, require descriptive measure names (“Total Sales Amount”, not “Measure1”), complete descriptions for all measures and columns, proper data categories and formats, synonyms for natural language queries, and star schema design patterns. These standards enable Copilot and Q&A while maintaining quality.
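As a sketch of what the AI-readiness metadata looks like in TMDL (object names are hypothetical): descriptions become /// doc comments, and data categories and formats are explicit properties that tooling and Copilot can read:

```tmdl
/// Total invoiced sales amount in USD, net of returns
measure 'Total Sales Amount' = SUM ( Sales[NetAmount] )
	formatString: #,0.00

column 'Customer City'
	dataType: string
	dataCategory: City
	summarizeBy: none
```

Because these properties live in source control alongside the expression, the certification checklist can be enforced in PR review rather than audited after the fact.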

Implementation roadmap: Month 1-2: Establish CoE structure and define standards. Month 3-4: Identify and certify 10-15 core domain models. Month 5-6: Implement TMDL/Git workflows and train domain stewards. Month 7-8: Migrate departmental models to Tier 2 architecture. Month 9-12: Deprecate duplicate models and enforce governance policies. This hybrid approach has reduced our model count from 200+ to 45 while actually increasing user satisfaction because they can find and trust the data they need.

Naming conventions are critical and often overlooked. We implemented strict naming standards that signal governance level: CORP_Sales_Certified for enterprise models, DEPT_Sales_Marketing for departmental models, and USER_Sales_Analysis for personal models. This makes it immediately clear what level of trust and governance applies. We also enforce semantic layer standards for AI-readiness - all measures have descriptions, all columns have proper data categories, and relationships follow star schema patterns. This prepares models for natural language queries and Copilot integration while maintaining quality.

TMDL and Git integration have been game-changers for our governance. We store semantic model definitions in Git repositories with branch protection and pull request workflows. Changes to certified models require code review by the data governance team before merging to main. This gives us version control, change tracking, and rollback capabilities. Business users can still create their own models, but if they want certification and broad distribution, they go through the PR process. It’s technical but it works - we’ve reduced model sprawl by 60% in six months.

Your Center of Excellence structure matters as much as the technical implementation. We established a federated CoE with domain stewards in each business unit who act as liaisons between IT and business users. These stewards are trained on semantic modeling best practices and have authority to certify models within their domain. Central IT provides the platform, standards, and oversight, but domain stewards handle day-to-day governance decisions. This scales much better than a purely centralized approach where IT becomes a bottleneck. We also run monthly governance reviews where we identify duplicate models and consolidate them.