Imagine a world in which citizens seeking affordable housing could expedite the long waiting list process and improve their user experience by using Redfin or Zillow to search for approved, affordable housing. Imagine a world in which the impact of educational policy and grants could be traced from the federal level down to an individual county or school.
These are examples of how enhanced data sharing and open APIs could improve the citizen experience.
The traditional, siloed approaches to data have long hindered citizen services. Without improved intra- and interagency data sharing, assembling a complete view of the obstacles facing citizens and improving their experience become Herculean tasks.
Thanks to advances in technology over recent years, data sharing that improves the citizen experience is now achievable, but policy and agency culture are inhibiting progress. Rejecting the traditional approach to data sharing and aligning on a share-first, productized approach would allow agencies to collaborate more effectively to meet citizen needs.
Data silos hinder citizen services
It’s an unfortunate reality that the nation’s greatest challenge in putting data to work to enable mission outcomes may have been avoidable. In most cases, we have allowed data to stay cloistered within mission silos for too long, which hampers informed decision-making.
For example, federal agencies responding to disasters may lack critical data points when making decisions on the ground. High-quality, shared data is required for the best AI-enabled insights, which would allow for predictive rather than reactive deployment of resources. Emergency efforts would benefit substantially from shared state or local data – poverty levels in the area, food deserts, accurate weather forecasts and more. Yet the lack of a whole-of-government approach to data sharing stifles this.
Legacy systems were designed without considering the use of data outside the transactional systems that produce it, long delaying the ability to refine data from a decontextualized, raw state into fuel for analytics and AI. These traditional approaches also drove duplication and data dilution: it is not uncommon for multiple versions of the same data point to exist within one agency, each uniquely enriched. This makes it challenging to integrate that data securely and apply AI to it.
The solution relies on a whole-of-government approach to data stewardship. We must shift the question from "Should I share this data?" to "How can this data be made observable, accessible and explainable, in a secure manner, for the whole of government to use for broad societal benefit?"
Treating data like a product
One promising solution along the path to fully data-enabled missions is a coordinated mindset shift toward treating data like a product. This framework requires data stewards to cater to the needs of all the user audiences that data may eventually attract.
This requires not only optimizing the data for the current program, but also thinking ahead to make it understandable, convenient and secure for other approved entities to access – such as other federal agencies, research institutions and state programs.
A product mindset also requires that the data operate broadly without causing problems. Even when a data point is shared across agencies, it is governed from the start so that it cannot expose sensitive information to the wrong users. A product framework would incentivize agencies to clean up their data for broader use, driving new efficiencies within complex, overlapping federal missions.
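To make this concrete, a data product can ship with a machine-readable contract describing its owner, meaning, schema and approved audiences. The sketch below is a hypothetical illustration of that idea – the field names, dataset and registry conventions are assumptions for this example, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class DataProductContract:
    """Hypothetical metadata a steward publishes alongside a dataset."""
    name: str
    owner: str                        # accountable data product owner
    description: str                  # plain-language meaning of the data
    schema: dict[str, str]            # column -> type, so consumers can plan
    sensitivity: str                  # e.g. "public", "cui", "pii"
    approved_audiences: list[str] = field(default_factory=list)

# A program publishes its data as a product, ready for approved reuse
# by other agencies, research institutions and state programs.
shelters = DataProductContract(
    name="disaster_response.shelters",
    owner="agency-a/response-office",
    description="Active shelter locations and capacity, updated hourly.",
    schema={"county_fips": "string", "shelter_capacity": "int"},
    sensitivity="public",
    approved_audiences=["federal", "state", "research"],
)
```

Publishing this kind of contract up front is what distinguishes a data product from a raw extract: consumers can discover, understand and trust the data without contacting the producing program.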
While inspiring a cultural shift like this is challenging, technology can help.
Embracing technology to drive cultural change
The rise of data mesh patterns – which allow data to be shared without the expensive, laborious process of moving it into a specially constructed central repository – is a massive leap forward. A mesh makes it possible to securely access data from another agency without moving it around, inspiring more confidence when pursuing data sharing.
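One common way to realize this pattern is a federated query engine that reads each agency's data where it lives. The sketch below assumes a Trino deployment as the federation layer; the gateway host, user, catalog and table names are all hypothetical:

```python
# Minimal sketch: query data in place across two agency catalogs through
# a federated SQL engine (Trino). All names below are hypothetical.
from trino.dbapi import connect

conn = connect(
    host="mesh-gateway.example.gov",   # hypothetical federation endpoint
    port=443,
    user="analyst@agency-a.gov",
    http_scheme="https",
)
cur = conn.cursor()

# Join disaster-response data owned by Agency A with poverty data owned
# by Agency B; neither dataset is copied into a central repository.
cur.execute("""
    SELECT r.county_fips, r.shelter_capacity, p.poverty_rate
    FROM agency_a.response.shelters AS r
    JOIN agency_b.census.poverty AS p
      ON r.county_fips = p.county_fips
""")
for row in cur.fetchall():
    print(row)
```

The design choice that matters here is that each agency retains physical custody of its data; only query results cross the boundary.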
Agencies can also start leveraging technology to assuage anxieties around losing control of program data. Historically, this has been a barrier to a share-first mindset. The data curator needs clarity around how the data is going to be used to feel comfortable sharing it.
Advancements in data usage tracking have made alleviating these concerns possible. New technology can track who is accessing the data, its lineage, how it is curated and cleaned, the business and policy rules around it and more. Agencies can create governance structures and business rules around their data, gaining control over how it is used based on the parameters they set. This reduces the friction around sharing and increases the free flow of information. As part of governance, agencies also need to establish data product owners and stewards, and design data architectures with a product mindset.
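At its simplest, this kind of control means evaluating each request against steward-defined rules and recording who accessed what, and why. The sketch below illustrates that pattern with hypothetical rules and products; it is not any specific product's API:

```python
import json
from datetime import datetime, timezone

# Steward-defined rules: which audiences may use a data product, and for
# what purposes. These rules are hypothetical examples.
POLICY = {
    "disaster_response.shelters": {
        "audiences": {"federal", "state"},
        "purposes": {"emergency-response", "research"},
    }
}

AUDIT_LOG: list[dict] = []  # in practice, an append-only audit store

def request_access(product: str, audience: str, purpose: str) -> bool:
    """Check a request against the steward's rules and record the decision."""
    rules = POLICY.get(product)
    allowed = (
        rules is not None
        and audience in rules["audiences"]
        and purpose in rules["purposes"]
    )
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "product": product,
        "audience": audience,
        "purpose": purpose,
        "allowed": allowed,
    })
    return allowed

# A state emergency office is allowed; an unapproved purpose is denied,
# and both decisions are visible to the data's steward.
assert request_access("disaster_response.shelters", "state", "emergency-response")
assert not request_access("disaster_response.shelters", "state", "marketing")
print(json.dumps(AUDIT_LOG, indent=2))
```

Because every decision is logged alongside its purpose, the data curator gets exactly the visibility described above, which is what makes a share-first posture tolerable.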
Meanwhile, AI algorithms can now detect patterns humans cannot perceive – another win for data sharing. Introducing data from diverse sources into an ecosystem provides a massive lift to AI accuracy and impact, allowing for expanded modeling techniques. The power of these algorithms makes data sharing more valuable than ever, helping to increase demand and adoption.
Agencies cannot leverage new AI advancements to hone citizen programs and decision-making without fundamentally changing the way they think about their data. Mature technology is only the first step. Next, we must inspire a shift toward treating data like a product. This may require policy changes to drive adoption, as well as naming a champion for this effort across civilian agencies to act as a helpful forcing function.
The federal government is well-positioned to harness the value of intra- and interagency data sharing. It’s time to fully take advantage of it to benefit citizens.
Dan Tucker is a Senior Vice President at Booz Allen Hamilton focused on cloud and data engineering.