As we move from traditional machine learning to deep learning models, artificial intelligence is becoming more of a black box. Gone are the days when we could think of AI models as simple decision trees, where the outcome is explained succinctly by the branches taken at each fork in the tree. Today’s AI algorithms often involve thousands or even millions of connections, making decision traceability nearly impossible.

For federal agencies, these decisions have the potential to directly impact people’s lives, which is why the mystery of how an AI algorithm reaches its conclusions makes many government analysts uncomfortable. They need assurance that the insights their AI models provide are not only accurate but also explainable, so they can make informed decisions.

Let’s take a look at three strategies you can use to increase your comfort level in working with AI, so that you are better positioned to take advantage of its capabilities.

Have humans participate in AI model training

Just because you’re deploying AI doesn’t mean you will — or should — go the full automation route. In fact, for AI to be effective, it needs human input to learn and understand whether or not the conclusions being drawn are correct.

Periodically, you should provide feedback that details what worked and, more importantly, what did not. Incorrect evaluations, or “negative examples,” should be fed back into the AI training process. This additional training data allows the model to refine and improve the way it evaluates future data.
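To make that feedback loop concrete, here is a minimal sketch in Python of how analyst corrections might be folded back into a model’s training data. The scikit-learn model, the random data, and the helper function are illustrative placeholders, not a prescription for any particular agency workflow.

```python
# A minimal human-in-the-loop sketch: analyst corrections ("negative examples")
# are appended to the training set and the model is refit.
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_with_feedback(model, X_train, y_train, corrections):
    """Fold analyst corrections back into the training set and refit the model.

    corrections: list of (features, corrected_label) pairs supplied by analysts
    after reviewing the model's mistakes.
    """
    if corrections:
        X_fb = np.array([features for features, _ in corrections])
        y_fb = np.array([label for _, label in corrections])
        X_train = np.vstack([X_train, X_fb])
        y_train = np.concatenate([y_train, y_fb])
    model.fit(X_train, y_train)  # refit on the augmented data
    return model, X_train, y_train

# Example cycle: train, let analysts review predictions, then retrain.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)
model = LogisticRegression().fit(X, y)

# Suppose analysts flag two predictions as wrong and supply the correct labels.
feedback = [(X[3], 1 - model.predict(X[3:4])[0]),
            (X[7], 1 - model.predict(X[7:8])[0])]
model, X, y = retrain_with_feedback(model, X, y, feedback)
```

Run on a schedule, a loop like this is what “inserting yourself into the continuous feedback process” looks like in practice.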

By inserting yourself into the continuous feedback process, you’ll have the opportunity to refine your trained models, evaluate results and data, and watch the model learn. You’ll get more comfortable with the AI process, and more confident in the technology as you watch it become more accurate and dependable.

Look for opportunities where AI can augment your team

There’s little doubt that the U.S. federal government is one of the largest producers and consumers of data in the world, but in many cases your team of analysts and data scientists may be reviewing only a small fraction of the information at their disposal. If that’s the case, they’re missing a lot of untapped data that could prove valuable in helping their agencies reach their mission objectives.

Consider the example of image or video analysis. While an analyst can only review a limited amount of this data, an AI system could review all of the images and flag a subset for further human analysis. The human-machine collaboration works at the scale and speed of government data and helps uncover insights that would have otherwise gone completely undiscovered.
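As a rough illustration of that triage pattern, the sketch below scores every item with a model and routes only a small, high-confidence subset to analysts. The scores, the threshold, and the cap on flagged items are hypothetical values chosen for the example.

```python
# AI-assisted triage sketch: the model scores every item, and only a small,
# high-priority subset is flagged for human review.
import numpy as np

def triage(scores, threshold=0.8, max_flags=100):
    """Return indices of items whose model score exceeds `threshold`,
    capped at `max_flags`, ordered from most to least confident."""
    flagged = np.argsort(scores)[::-1]               # highest scores first
    flagged = flagged[scores[flagged] >= threshold]  # keep only confident hits
    return flagged[:max_flags]

# Example: one million images scored by a detector; analysts see only the flagged few.
scores = np.random.default_rng(1).random(1_000_000)
queue = triage(scores)
print(f"{len(queue)} of {len(scores)} images flagged for human review")
```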

And while the results may not be 100 percent accurate, using AI systems to review previously untapped data will likely surface more actionable information for analysts and lead to better mission outcomes.

Let the models show you what they’re thinking

While tracing decisions through deep learning models remains nearly impossible, new techniques are being developed that can give you greater insight into these processes as they occur. Indeed, the field of “explainable AI” is rife with research attempting to shine a light on the inner workings of deep learning and neural networks.

Efforts like the Defense Advanced Research Projects Agency’s Explainable Artificial Intelligence (XAI) Project are designed to help war fighters “understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.” This type of research is leading to breakthroughs that will ultimately provide you with clearer insights into how AI comes to certain conclusions.

For example, you’ll be able to more clearly discern how certain a model is about its outcomes, and what factors led to those outcomes. With this information in hand, you’ll be able to decipher the decisions made by the AI algorithm, trace mistakes or inaccurate recommendations back to their root causes, and develop remediations through model adjustments or improved training data.
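One simple way to approximate “what factors led to an outcome,” assuming a tabular model purely for illustration, is to perturb each input feature and watch how much the model’s score moves. Everything in this sketch, including the model, the data, and the neutral baseline value, is a hypothetical stand-in.

```python
# Perturbation-based attribution sketch: replace each feature with a neutral
# baseline and record how far the model's score moves.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X, y = rng.normal(size=(500, 4)), rng.integers(0, 2, size=500)
model = LogisticRegression().fit(X, y)

def feature_attribution(model, x, baseline=0.0):
    """Score change when each feature is replaced with a neutral baseline value."""
    base_score = model.predict_proba(x.reshape(1, -1))[0, 1]
    deltas = []
    for i in range(x.size):
        x_perturbed = x.copy()
        x_perturbed[i] = baseline                                   # "remove" feature i
        new_score = model.predict_proba(x_perturbed.reshape(1, -1))[0, 1]
        deltas.append(base_score - new_score)                       # contribution of feature i
    return np.array(deltas)

print(feature_attribution(model, X[0]))  # larger magnitude = more influential feature
```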

An exciting example in the field of explainable AI is Bayesian deep learning. Just as Bayesian statistics lets us quantify our belief in a prediction, that is, the likelihood the prediction is accurate, and update that belief as evidence accumulates, Bayesian deep learning provides a statistical framework in which a model can indicate its uncertainty about the predictions it makes, and even indicate when that uncertainty is too large to make a prediction at all.
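A common, lightweight approximation of this idea is Monte Carlo dropout, sketched below: keeping dropout active at inference time and sampling repeated forward passes yields a spread of predictions that serves as an uncertainty estimate. The small network, the number of samples, and the uncertainty threshold are illustrative assumptions, not the specific method referenced above.

```python
# Monte Carlo dropout sketch: sample repeated stochastic forward passes and
# treat the spread of the predictions as an uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)

def predict_with_uncertainty(model, x, n_samples=50):
    """Keep dropout active at inference time and average repeated forward passes."""
    model.train()  # leaves dropout layers active
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)  # averaged prediction
    std = probs.std(dim=0)    # per-class uncertainty
    return mean, std

x = torch.randn(1, 10)
mean, std = predict_with_uncertainty(model, x)
if std.max() > 0.2:  # hypothetical threshold
    print("uncertainty too large -- defer to a human analyst")
else:
    print("prediction:", mean.argmax(dim=-1).item())
```

The final check is the practical payoff: when the model’s own uncertainty is too high, the decision is routed back to a person rather than acted on automatically.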

While there is a desire for AI to be completely explainable, there is also a growing understanding that trust in AI models is built over time. As analysts work more closely with AI systems, both understanding and trust will grow, even without complete visibility into the AI black box.

Sean McPherson is a deep learning scientist at Intel.
