When using decision-support systems based on artificial intelligence (AI), humans often make poor choices. The failure of these systems to meet expectations about their utility has led to several of them being abandoned. Preliminary research indicates that the inability to communicate model output understandably to the humans who use these systems may contribute to this problem, but it is currently unknown which specific changes in the way AI systems communicate with users would be most likely to increase their success. In this blog post, I describe SEI research that is collecting data on actual human decision making to determine the most effective designs for AI-system interfaces within a chosen domain.
Widespread adoption and application of AI are still in their infancy. Consequently, the ways in which a system's operation can be degraded or destroyed by human-AI interactions are not well understood. Recent failures of systems using AI illustrate this point. Here are two examples.
Identifying Lead-Based Hazards in Flint, Michigan
AI was used in the response to the water-supply crisis in Flint, Michigan, which began in 2014. An AI system used public-records data, such as the year in which houses were constructed, to predict the locations of hazardous lead-based service lines and flag them for replacement. The system achieved better than 80 percent accuracy: lead was found in more than 80 percent of the potentially problematic service lines that the system identified, and the lines were replaced.
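The actual Flint model is not public, but the general approach described above can be sketched with a toy example. Everything here is an assumption for illustration: the synthetic parcel data, the 1950 cutoff year, and the single year-built feature stand in for the richer public-records features a real system would use.

```python
import random

random.seed(0)

def make_parcel():
    """Generate a synthetic parcel record (illustrative only)."""
    year_built = random.randint(1900, 2000)
    # Illustrative assumption: older homes are likelier to have lead lines.
    has_lead = year_built < 1950 and random.random() < 0.8
    return year_built, has_lead

parcels = [make_parcel() for _ in range(1000)]

# A deliberately trivial "model": flag parcels built before a cutoff year.
# A real system would train a classifier on many public-records features.
CUTOFF = 1950
flagged = [(year, lead) for year, lead in parcels if year < CUTOFF]

# Precision: of the parcels flagged for replacement, how many truly had lead?
precision = sum(lead for _, lead in flagged) / len(flagged)
print(f"flagged {len(flagged)} parcels, precision {precision:.2f}")
```

The "better than 80 percent accuracy" figure in the text corresponds to this precision metric: among lines the system flagged, the fraction that actually contained lead.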
However, the system did not provide the capability for a homeowner or home dweller to look up a specific location and discover the status of the service lines at that location. So, for example, a family might see the house next door being repaired and wonder why repairs had not been scheduled for their own house. In this case, the system may have known that the service lines in their house had been replaced recently and therefore did not contain lead. But the confusion and public outcry from poor communication of the system's decisions eroded trust in it.