Explainable artificial intelligence: 4 key industries

Written By: yotoon Created Date: 2019-12-13

Explainable artificial intelligence allows people to understand how AI systems make decisions, and it will be key in the medical, manufacturing, insurance, and automotive fields. So what does it mean for organizations?

Suppose, for example, that Spotify, the streaming music service, recommends songs by Justin Bieber to a listener who is no Belieber. The recommendation is a bit off target and mildly annoying, but the consequences are trivial, and it hardly follows that Spotify's programmers must make their algorithms transparent and easy to understand.

This is the touchstone of explainable artificial intelligence: machine learning algorithms and other AI systems whose results humans can readily understand and trace back to their origins. The more consequential an AI-driven result, the greater the need for explainability. Conversely, a relatively low-stakes AI system may be fine as a black-box model, even though its results are hard to interpret.
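To make the distinction concrete, here is a minimal sketch (not from any vendor mentioned here) of what "traceable back to its origins" can mean in practice: a linear scoring model whose prediction decomposes into per-feature contributions. The feature names and weights are invented for illustration; a black-box model would return only the final score.

```python
# A minimal sketch contrasting a black-box score with an explainable one:
# a linear model's prediction decomposes into per-feature contributions
# that can be traced back to their origins. Features and weights are
# hypothetical, loosely themed on a song-recommendation score.
weights = {"listened_to_artist": 2.0, "genre_match": 1.2, "friend_liked": 0.5}
bias = -1.0

def explain_score(features: dict) -> tuple[float, dict]:
    """Return the score and each feature's contribution to it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return bias + sum(contributions.values()), contributions

score, why = explain_score({"listened_to_artist": 0.0, "genre_match": 0.9, "friend_liked": 1.0})
print(f"score={score:.2f}")
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")  # the 'why' behind the recommendation
```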

 "If artificial intelligence algorithms are not effective enough, such as songs recommended by music service companies, then society may not need to monitor these recommendations," said Dave Costenaro, director of artificial intelligence research and development at Jane.ai. 

One can live with an app misjudging one's taste in music. But one may be far less tolerant when an AI system makes a weightier decision, such as proposing a medical treatment or denying a mortgage application.

These are high-stakes situations, and especially when the outcome is negative, a person may need a clear explanation of how the result was reached. In many cases, auditors, lawyers, government agencies, and other interested parties will demand one as well.

Costenaro said that as responsibility for specific decisions or outcomes shifts from humans to machines, the need for explainability rises with it.

Costenaro said, "If algorithms have put humans in this loop, human decision makers can continue to take responsibility for interpreting the results." 

He gave the example of a computer vision system that pre-labels X-ray images for radiologists. "This can help radiologists work more accurately and efficiently, but the radiologist will ultimately provide the diagnosis and the explanation," he said.

IT's AI Responsibility: Explaining the Reason

As artificial intelligence matures, however, we will see more new applications take over decisions and responsibilities that used to belong to humans. A music recommendation engine may not carry particularly heavy liability, but many other real and potential use cases do.

Costenaro said, "For a new class of artificial intelligence decisions, these decisions have a high impact, and because of the speed or amount of processing required, humans can no longer participate effectively, and practitioners are working hard to find explanations. Algorithmic methods. " 

IT leaders need to take steps to ensure that their organizations' AI use cases include explainability where necessary. Gaurav Deshpande, vice president of marketing at TigerGraph, said many corporate CIOs are already concerned about this: even when they understand the value of a particular AI technology or use case, they often hesitate.

Deshpande said, "But if you can't explain how the answer was reached, you can't use it. This is because of the risk of bias in 'black box' artificial intelligence systems, which can lead to litigation, significant liability to corporate brands and balance sheets And risk. "

This is another way to think about why companies adopt explainable AI systems rather than operate black-box models: their business may depend on it. An allegation that an AI system is biased is damaging even in low-stakes settings; in higher-risk situations, the consequences can be severe. That is why explainable AI is likely to become the focus of business applications of machine learning, deep learning, and related disciplines.

The role of explainable artificial intelligence in four industries 

Asked which use cases hold potential for explainable AI, Moshe Kranc, chief technology officer of Ness Digital Engineering, gave an answer that is simple and far-reaching: "any use case whose decisions affect people's lives can be tainted by bias."

He shared several examples of decisions increasingly made by artificial intelligence that fundamentally require trust, auditability, and the other qualities of explainable AI:

• Accepting someone into a training program

• Deciding whether to insure someone, and for how much

• Decide whether to issue a credit card or loan to someone based on demographic data 

With that in mind, AI experts and IT leaders have identified industries and use cases where explainable AI is essential. Banking is a good example: explainable AI is well suited wherever machines play a key role in loan decisions and other financial services. Many of these uses extend to other industries; the details vary, but the principles remain the same, so these examples may help you think about explainable AI use cases in your own organization.
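To illustrate the lending case, here is a minimal, hypothetical sketch of "reason codes," a common pattern in credit decisioning: with a logistic-regression-style model, each feature's contribution to the log-odds can be reported alongside the approve-or-deny result. The coefficients and feature names are invented, not drawn from any real lender.

```python
import math

# Hypothetical coefficients for a logistic-regression-style loan model.
coefs = {"income_to_debt_ratio": 1.8, "years_of_credit": 0.4, "recent_defaults": -2.5}
intercept = -0.5

def decide(applicant: dict, threshold: float = 0.5):
    # Each feature's contribution to the log-odds is individually reportable.
    contributions = {k: coefs[k] * applicant[k] for k in coefs}
    log_odds = intercept + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-log_odds))
    decision = "approve" if prob >= threshold else "deny"
    # Reason codes: the factors that pushed hardest toward the outcome.
    reasons = sorted(contributions.items(), key=lambda kv: kv[1],
                     reverse=(decision == "approve"))
    return decision, prob, reasons[:2]

decision, prob, reasons = decide(
    {"income_to_debt_ratio": 0.3, "years_of_credit": 2.0, "recent_defaults": 1.0})
print(decision, f"p={prob:.2f}", "top reasons:", reasons)
```

The point of the pattern is that the same arithmetic that produces the decision also produces the explanation, so an auditor or regulator can trace a denial to specific factors.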

1. Health care industry

The need for explainable AI rises in step with AI's impact on people, which makes health care a good starting point; it is also a field where AI can do a great deal of good.

Kinetica CEO Paul Appleby said, "With explainable AI, machines can save medical staff a great deal of time, letting them focus on the interpretive work of medicine rather than on repetitive tasks, and give each patient more attention. The potential value is enormous, but it requires the traceable explanations that explainable AI provides. Explainable AI lets a machine evaluate data and reach a conclusion while also giving doctors and nurses the decision data they need to understand how that conclusion was reached, and, in cases where the nuances call for human interpretation, to reach a different one."

Keith Collins, executive vice president and chief information officer of SAS, shared a specific practical application. "We are currently studying cases where doctors use AI analytics to detect cancerous lesions more accurately. The technology can act as a doctor's virtual assistant and explain how each variable in a magnetic resonance imaging (MRI) scan contributes to identifying which suspicious areas are likely to be cancerous and which are not."
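The article does not say which explanation method SAS uses, but occlusion sensitivity is one common way to show how regions of an image contribute to a classifier's score: mask one region at a time and measure how much the score drops. Here is a minimal sketch, with a stand-in model_score function in place of a trained classifier.

```python
import numpy as np

def model_score(image: np.ndarray) -> float:
    # Stand-in for a trained classifier's "suspicious lesion" score.
    return float(image[8:16, 8:16].mean())

def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    """Slide a mask over the image and record the score drop per region."""
    base = model_score(image)
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y+patch, x:x+patch] = 0.0  # mask one region
            heat[y:y+patch, x:x+patch] = base - model_score(occluded)
    return heat  # high values = regions the score depends on

image = np.random.rand(24, 24)
heat = occlusion_map(image)
print("most influential region (row, col):",
      np.unravel_index(np.argmax(heat), heat.shape))
```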

2. Manufacturing industry 

When diagnosing and repairing equipment failures, field technicians often rely on "tribal knowledge." 

Heena Purohit, senior product manager for IBM Watson IoT, points out that in manufacturing, as in some other industries, this reliance on tribal knowledge is the norm. The problem is that teams change, sometimes dramatically: as people move on, their expertise goes with them, and it is not always recorded or transferred.

Purohit said, "Artificial intelligence-driven natural language processing can help analyze unstructured data such as equipment manuals, maintenance standards, and structured data such as historical work orders, IoT sensor readings, and business process data to suggest that technicians should Best recommendations for following prescriptive guidance. " 

This does not eliminate the value of tribal knowledge, nor does it take decisions away from people. Instead, it is an iterative, interactive process that helps ensure knowledge is captured and shared in an actionable way.

Purohit explains, "In this case, we show the user several possible repair-guidance suggestions generated by the AI, with a confidence interval for each candidate answer. The user sees every option, which helps continue the learning process and improves future suggestions; rather than handing users a single choice, we let them make an informed decision among the options. For each suggestion, we also show the knowledge graph output, an advanced feature, along with the inputs used during the AI training phase, so users can understand why the results were prioritized and scored the way they were."
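As a rough illustration of the pattern Purohit describes, returning several ranked suggestions with scores rather than a single answer, here is a minimal retrieval sketch over a few invented manual snippets. IBM's actual Watson IoT pipeline is far richer; the TF-IDF ranking and similarity scores here are stand-ins for its confidence estimates.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented manual snippets standing in for equipment manuals and work orders.
manual_snippets = [
    "If the pump vibrates, check the coupling alignment and tighten mounts.",
    "Low pressure is usually caused by a clogged intake filter; clean or replace.",
    "Overheating motors often indicate worn bearings; lubricate or replace them.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(manual_snippets)

def suggest(problem: str, top_k: int = 3):
    """Return several scored suggestions so the technician decides, not the AI."""
    scores = cosine_similarity(vectorizer.transform([problem]), doc_vectors)[0]
    ranked = sorted(zip(scores, manual_snippets), reverse=True)[:top_k]
    return [(round(float(s), 2), text) for s, text in ranked]

for score, text in suggest("pump is vibrating and running hot"):
    print(score, "-", text)
```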

3. Insurance industry 

As in health care, artificial intelligence can have a profound impact on the insurance industry, but trust, transparency, and auditability are absolutely necessary.

Cognitivescale founder and CTO Matt Sanchez said, "AI has many potential use cases in insurance, such as customer acquisition, agent productivity, claims prevention, underwriting, customer service, cross-selling, policy adjustment, and risk and compliance." He noted that a recent Accenture survey found that most insurance executives expect AI to revolutionize their industry within the next three years.

But this is an area of real consequence, as becomes clear once you consider key insurance categories such as life, homeowners, health, and workers' compensation. Sanchez said explainable AI will be very important here, and suggested asking the following questions, each of which also applies to other fields (a minimal audit-trail sketch follows the list):

• Can the artificial intelligence explain how it arrived at this insight or result?

• What data, models and processes were applied to obtain its results? 

• Can regulators access and understand how this artificial intelligence works? 

• Who is accessing what, and when?
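Here is a minimal sketch of the audit trail implied by the last two questions: record which model and data produced each result, and who accessed what, when. The field names and the append-only JSON-lines log are illustrative choices, not any product's API.

```python
import json
import time

AUDIT_LOG = "audit.jsonl"  # hypothetical append-only audit file

def log_event(user, action, resource, detail=None):
    """Append one auditable event: who did what, to which resource, when."""
    event = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "action": action,        # e.g. "score", "view_model", "export_data"
        "resource": resource,    # model or dataset identifier
        "detail": detail or {},
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

# An underwriting score is produced: record the model version and inputs used.
log_event("underwriter_42", "score", "risk_model:v3.1",
          {"policy_type": "homeowner", "input_dataset": "claims_2019_q3"})
# A regulator reviews how the model works: that access is recorded too.
log_event("regulator_7", "view_model", "risk_model:v3.1")
```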

4. Self-driving cars 

In self-driving cars, explainability may ultimately determine whether artificial intelligence can deliver its greatest value.

Stephen Blum, chief technology officer and co-founder of PubNub, said, "For AI practitioners, understanding why an AI service makes a particular decision, or how it arrives at an insight, is critical to integrating that service well. And for the AI system of a self-driving car, how it structures its interactions with the vehicle carries enormous risk for the occupants, because it is literally a matter of life and death."

Indeed, autonomous vehicles are an emerging field where artificial intelligence plays a central role, and it is where explainable AI will matter most.

Kinetica CEO Appleby explained what is at stake: "If an autonomous vehicle finds itself in unavoidable danger, what should it do? Prioritize protecting its passengers while putting pedestrians at risk, or endanger its passengers to avoid hitting pedestrians?"

Answering these questions is not easy, but the conclusion is straightforward: a black-box AI model does not work here. The vehicle's decisions must be explainable to passengers and pedestrians, to say nothing of automakers, public-safety officials, and other stakeholders.

Appleby said, "We may not agree with the response of autonomous vehicles, but we should understand the ethical priorities it follows in advance. Through data governance established within the enterprise, automakers can track and explain models from decision point A. Tracking the dataset to the Z-point makes it easier for them to assess whether these results are in line with their ethical stance. Similarly, passengers can also decide whether they are willing to ride in a vehicle designed to make certain decisions. " 

That may be a harsh reality, but the underlying principle extends to scenarios that are not matters of life and death: explainable AI is AI that can be improved and optimized, which is another way for IT leaders to think about it.

"If the AI ​​system goes wrong, its builders need to understand why it does it so it can be improved and fixed. If their AI service exists and runs in a black box, they can't understand how to debug and improve it. "
