Digital government teaching case study
Iterating towards Insight: Developing the Enterprise Health Approach to Improve Service Oversight for the Government of Canada
Christine Lau, Natasha D’Souza, Miranda Dyck
Treasury Board Secretariat of Canada, 2025
Introduction
In 2022, the Government of Canada (GC) faced major disruptions across its most visible public-facing services, including:
● Long lines and delays in passport processing [1]
● Large backlogs in immigration services (e.g. permanent residence, work permits, family reunification, and citizenship) [2]
● Extended wait times at Canadian airports [3]
● Overwhelmed call centres [4]
These challenges eroded public trust and created mounting pressures for the government to act swiftly and transparently. Although oversight mechanisms existed, it quickly became clear that they were not designed to respond to the scale and urgency of the situation. This highlighted the need for a more responsive approach.
The Service and Digital Performance Team within the Office of the Chief Information Officer of Canada (OCIO) saw an opportunity to do things differently.
Historically, federal service performance was tracked through reporting frameworks focused on compliance and process-centred metrics, such as application volumes and whether service standards had been set (let alone whether they were met). These reports offered limited insight into the quality of the client experience, service outcomes, or the root causes of service challenges.
These metrics were also designed to assess long-term plans and did not provide the agility needed to respond to sudden or shifting needs. Existing approaches aligned with a waterfall model, one that prioritized upfront planning and adherence to established processes. In this model, course correction was often viewed as a sign of failure, not an opportunity for learning.
The 2022 crisis exposed the need for a more responsive, iterative, and data-driven model of service oversight: one that could adapt to changing realities and would require the public service to work differently.
Context: Roles of the Central Agencies and Departments in Service Delivery
In Canada, delivering government services is a shared responsibility between departments and central agencies. Departments serve Canadians and businesses directly, while central agencies provide guidance and oversight.
The Treasury Board of Canada Secretariat (TBS) plays a central role in shaping expectations for service performance improvement and digital transformation. From 2003 to 2023, TBS used the Management Accountability Framework (MAF) as its primary oversight tool to assess how well federal organizations delivered on their mandates. While MAF was effective at enforcing accountability and compliance, it did not encourage Deputy Heads (senior officials responsible for leading departments) to think holistically about service or enterprise learning.
Within TBS, the OCIO provides direction on service delivery, digital policy, and performance measurement. It collects data related to IT management, cybersecurity, service performance, and digital talent. The Service and Digital Performance Team, within OCIO, plays a central role in this oversight function. The team monitors departmental service performance data, surfaces cross-cutting risks, and works with departments to resolve issues.
Departments and agencies are, by contrast, on the front lines of service delivery. For example, Immigration, Refugees and Citizenship Canada (IRCC), Employment and Social Development Canada (ESDC), and the Canada Revenue Agency (CRA) deliver services ranging from immigration processing and income support to tax administration. While they have different mandates, they share similar challenges, such as aging IT infrastructure and fluctuating service volumes. Internal services such as human resources, procurement, and IT are equally important in enabling departments to function and deliver effectively.
It is often when something goes wrong with a service that the distinction between service delivery and oversight becomes especially important. During a service disruption, departments focus on resolving the issue and communicating with affected clients. Central agencies like TBS and the Service and Digital Performance Team take a broader view, focusing on identifying root causes, coordinating interdepartmental efforts, and briefing senior leadership. These clearly defined roles for delivery and oversight help to ensure that urgent service issues are addressed, while broader system-level issues are analyzed and resolved.
Recognizing the Need for Iteration
The service disruptions of 2022 made one thing clear: the government needed faster ways to understand where problems were emerging and how to respond. It wasn’t just about having more data; it was about having the right information at the right time. The government needed the ability to learn and adjust, including across departmental lines.
In June 2022, the Prime Minister announced the creation of a new Ministerial Task Force to address the urgent service issues, with an initial focus on reducing passport and immigration delays and addressing congestion at Canadian airports. While this led to immediate interventions, it also highlighted a broader need for a more system-wide and proactive approach to support service oversight.
The service issues made it clear that designing perfect plans or formal reports wasn’t enough. The situation demanded faster feedback loops, learning through action, and space to test new approaches. This laid the foundation for what would become the Enterprise Health Approach, led by the Service and Digital Performance Team, to monitor sudden and high-risk changes and identify emerging trends in the service landscape across government, enabling earlier interventions.
The Public Servant’s Dilemma
At the height of the 2022 service disruptions, decision-makers lacked timely, enterprise-wide insight into where pressures were mounting and how to respond. During this time, the Service and Digital Performance Team recognized the need to take a different approach.
Given their oversight role for the Policy on Service and Digital, the team had a cross-government view of programs and services, positioning them to spot trends or risks. The task at hand was to take in various fragmented pieces of intelligence, such as annual reports, siloed datasets, and departmental updates, and create a product that would tell a story for decision-makers.
In doing this, they also had to navigate long-standing perceptions of TBS as a compliance enforcer and build trust with departments and agencies as a partner in problem-solving.
As a starting point, the team had access to the GC Service Inventory, a dataset covering over 1,600 services across 79 departments and agencies. While it included key service descriptors such as volume, service type, and online usage, as well as some performance data like service standards, it only contained annual snapshots. There was little real-time information about surges in demand or departments’ capacity to respond. Other relevant data on human resources, financial management, IT investments, and cybersecurity existed in silos and were difficult to connect to each other.
The Service and Digital Performance Team faced a balancing act. On one side, decision-makers needed better insights into the causes of service issues and how to respond to them. On the other side, departments could not afford to refocus staff on reporting exercises. All of this needed to be done while building a trusting partnership with departments and not rushing into any solutions. The final product also needed to be lightweight, scalable, and open to iteration to allow for ongoing feedback from departments.
Given this, the team explored three possible options, each with its own trade-offs in terms of reporting burden, quality of insights, and potential for wider use.
Option 1: Maintain the Status Quo
The simplest option was to stay the course and continue using existing oversight mechanisms such as the MAF and the annual GC Service Inventory, without requiring additional data from departments. These exercises were familiar to departments and would continue to add to the information that TBS was already collecting.
On the surface, this approach presented the fewest disruptions. It would avoid placing more burden on departments already focused on solving operational issues. It would maintain existing, familiar reporting cycles and processes. It would also mean a lower risk of resistance from departments tired of oversight activities from the center.
However, as the team considered this option, the limitations quickly became obvious. The data would always be retrospective, compliance-oriented and siloed by policy area. This type of information could describe what happened in the past, but would not indicate emerging service issues. Given the crisis environment at the time, this solution was not good enough.
While this option offered predictability, it risked reinforcing a reactive oversight model that identified problems after the fact.
Option 2: Create Ad Hoc Reporting Requests
Next, the Service and Digital Performance Team considered the possibility of creating ad hoc reporting requirements. Once again, this option seemed reasonable on the surface. It would involve asking specific departments for targeted, time-sensitive data to address particular questions as they arose, rather than relying solely on historical data or building an entirely new framework. It would allow the team to zoom in and generate insights on an as-needed basis instead of building a one-size-fits-all model.
There were some clear advantages. The model offered flexibility and enabled targeted intervention efforts. By focusing on specific issues, ad hoc requests could provide a timely and focused understanding of service issues, allowing central agencies to respond to disruptions with a higher degree of specificity. This is what happened with the passport crisis, when different teams across government and central agencies were mobilized to help find solutions.
The team was also aware that this approach came with substantial risks. Ad hoc requests would add sudden reporting pressure on departments already operating under considerable strain. The approach would also remain reactive, focusing on disruptions only as they emerged rather than supporting early detection of issues. In the end, it would not enable a broader understanding of systemic risks or service capacity across the federal government.
Option 3: Develop a Flexible Framework focused on Highest Impact and Highest Risk
The third option was to build a new oversight approach that would start small, focus on the government’s highest impact and highest risk (HIHR) services, and could be expanded over time. Rather than trying to build a comprehensive or prescriptive reporting framework overnight, the idea was to start with something contained and functional and evolve it over time.
To do this, the Service and Digital Performance Team would start with a small set of indicators — application volume, performance against service standards, backlog levels, and client satisfaction — to be collected quarterly from a few key departments. These were chosen because they could provide a basic picture of service performance and aligned with international best practices. This framework would not solve everything, but it offered a pathway to build insight over time, through collaboration and learning with departments.
Nonetheless, it was possible that some departments would still view this as another layer of oversight and reporting, especially given their existing experience with central agencies and the prevailing opinion that data collected was not used meaningfully. There was also the possibility that the effort could be seen as duplicative of other initiatives, such as the Ministerial Task Force.
Despite these very real risks, this option offered something the others didn’t: the ability to act promptly without pretending to have all the answers. It reflected a strategic choice to model the very principles (iteration, co-design and responsiveness) that departments were expected to adopt in their service delivery.
This choice also reflected a bigger change. It would be an opportunity to not only build new tools, but to rethink how oversight could be done more generally. TBS would be shifting away from static, compliance-based evaluation frameworks like MAF towards a more responsive model, based on real-time learning and user insights.
While the ultimate goal was always to improve service delivery, this new approach would also improve how TBS monitors and supports departmental performance. For instance, the quarterly data requests would create a regular feedback loop and encourage departments to review their service performance and take steps to make improvements more frequently. Further, it would prove that oversight itself could be reimagined as an iterative process, i.e., starting small, building trust, and adjusting based on feedback from departments.
This user-informed, iterative design would become the foundation for the Enterprise Health Approach, creating a collaborative, evolving oversight model designed to identify systemic service risks and enable a better government-wide response.
Actions and Reactions
After considering the three options, the Service and Digital Performance Team decided to move forward with Option 3 and began co-developing what would become the Enterprise Health Approach with departments. The goal was to develop a relatively lightweight tool that could support more proactive, data-informed conversations across government about service delivery health, risks, and opportunities for improvement. The team recognized that a new oversight model would need to balance providing timely insights with the realities of departmental capacity. Building trust was also a key priority to avoid being seen as just another burden without adding value.
First Version
The Service and Digital Performance Team developed a prototype dashboard to visualize key indicators for 25 high-impact, high-risk services, delivered by 10 federal organizations. The initial set of services was selected based on volume, impact on vulnerable populations, life events, political priorities, significant planned investments, and prior service disruptions.
This prototype drew primarily on data already available in the annual GC Service Inventory and other internal sources and was informed by international best practices for service monitoring. It was a working draft, designed to test assumptions, gather feedback, and evolve over time.
Initial reactions were mixed. Some organizations raised concerns about the intent behind the dashboard and argued that the datasets the team had linked were inaccurate or had been mapped without consultation. These concerns reflected ongoing friction between TBS as a central agency and departments, including a perception of TBS overreach.
At the same time, several departments saw value in the early prototype and provided constructive feedback about the methodology, indicators, and potential uses. These conversations laid the groundwork for future collaboration.
Second Version
In response to this feedback, the team developed a second version of the dashboard, reflecting the ongoing discussions with departments and colleagues across TBS. This version incorporated quarterly data provided directly by the participating departments, which allowed the indicators to show trends over time and demonstrate more clearly what insights could be obtained.
During this phase, the list of HIHR services was also refined through dialogue with departments. Work also continued on refining the indicators in consultation with other TBS policy centres, to better understand the quality of the data and how it could be displayed. Despite these improvements, some executives and program managers continued to express concern about the purpose of the tool and whether it imposed an additional administrative burden. There were still perceptions that TBS was monitoring for compliance rather than enabling improvement.
Nonetheless, many departments kept providing feedback about the indicators, even asking for new features such as the ability to annotate the dashboard with qualitative context or explanations of changes.
Third Version
By the third iteration, the Enterprise Health Approach had evolved beyond a dashboard. It included quarterly discussions with each participating department to understand operational risks, capacity gaps, and broader management challenges. These conversations allowed departments to add context to the data, surface emerging issues, and explore opportunities for intervention.
More departments started to recognize the benefits of more proactive, data-informed oversight. The Service and Digital Performance Team built momentum by:
● Demonstrating the ability to provide tailored insights
● Highlighting patterns across departments
● Framing the dashboard as a tool to support shared decision-making
In the meantime, the Service and Digital Performance Team also started to consult other TBS policy centres to understand their datasets and identify potential indicators to add to the dashboard. IT application health and cybersecurity maturity emerged as possible additions.
Further Reflections
The service disruptions of 2022 exposed gaps in traditional oversight models and highlighted limitations in how the public service learns and adapts. They underscored the need for a more agile, responsive, and enterprise-wide approach to service performance oversight that viewed iteration and detecting emerging risks as strengths.
In this context, the idea of iteration wasn’t just a design choice; it was a necessity. The Service and Digital Performance Team did not try to launch a perfect framework. Instead, they worked in the open, tested ideas, and evolved their approach based on input from departments and internal colleagues.
Early reactions to the Enterprise Health Approach were mixed. Some departments questioned whether it would add value or simply introduce additional oversight. It took time for the team to show that oversight, and working with TBS, could be enabling rather than punitive. The team needed to demonstrate its willingness to engage differently: they listened to feedback and adjusted the model based on departmental needs and realities.
This case highlights two distinct but connected dimensions of iteration. First, it reflects a shift from static, compliance-based evaluation frameworks toward more responsive, user-informed service improvements. Second, it reframes oversight as an iterative process of starting small, building trust, and adapting over time. In both respects, moving away from rigid, prescriptive models and toward learning-based approaches became central to enabling real, sustained improvement.
Conclusion
The Enterprise Health Approach illustrates what is possible when central agencies lean into collaboration, stay accountable, and treat iteration as a strength. It shows that it is possible for service performance oversight to evolve from a compliance-driven exercise into a useful tool for insight and improvement. However, achieving and sustaining this shift took nearly two years of relationship building, co-design, and prototyping.
A lesson is that lasting change is enabled, not enforced. The Enterprise Health Approach embodies this principle by deliberately building in flexibility and responsiveness. It wasn’t launched as a finished product, but as a starting point for continuous improvement, shaped by feedback and lived realities.
The approach also helped departments connect the dots between service outcomes and enabling functions such as staffing, IT systems, and other internal policies or tools. This helped build a better understanding of the root causes of service delivery challenges.
While this way of working may feel unfamiliar, it represents a critical step toward building a more resilient, adaptive, and client-focused public service. The government is beginning to show up differently by demonstrating it can learn, adapt, and respond to the complex challenges it faces as it delivers services to Canadians.
References
1. “Lineups, wait times for passport renewal soar as pandemic restrictions end,” Global News.
2. “New IRCC Update: Backlog Persists With 2.2 Million Files In Processing.”
3. “Federal government moves to fix airport delays that are cramping tourism’s comeback,” CBC News.
4. “Pandemic blamed as EI recipients wait months for payments,” Toronto Sun.