Fujitsu and Hokkaido University Develop Explainable AI Technology
Fujitsu and Hokkaido University today announced the development of a new explainable-AI technology that automatically presents users with the steps needed to achieve a desired outcome based on AI judgments about data, for example from medical checkups. Explainable AI is an area of increasing interest in the field of artificial intelligence and machine learning. While AI technologies can automatically make decisions from data, explainable AI also provides individual reasons for those decisions. This helps avoid the so-called black box phenomenon, in which AI reaches conclusions through unclear and potentially problematic means.
In medical checkups, AI can determine the level of illness risk based on data such as weight and muscle mass. Beyond the risk judgment itself, attention has increasingly focused on explainable AI that also presents the attributes that served as the basis for the judgment. Because the AI judges health risk to be high based on the attributes of the input data, changing the values of those attributes could, in principle, produce the desired result of low health risk.
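The idea of changing attribute values to reach a low-risk judgment can be sketched as a simple counterfactual search. The following is a minimal illustration only, built around a made-up toy risk model; it is not Fujitsu's actual method, and the attribute names, step sizes, and threshold are all assumptions.

```python
def risk_score(weight_kg, muscle_kg):
    # Hypothetical toy risk model: higher weight raises the score,
    # higher muscle mass lowers it; >= 0.5 counts as "high risk".
    return 0.01 * weight_kg - 0.02 * muscle_kg


def counterfactual(weight_kg, muscle_kg, step=1.0, max_iters=100):
    """Greedily adjust one attribute at a time until risk drops below 0.5.

    Returns the adjusted attribute values, or None if no low-risk
    combination is found within max_iters steps.
    """
    w, m = weight_kg, muscle_kg
    for _ in range(max_iters):
        if risk_score(w, m) < 0.5:
            return {"weight_kg": w, "muscle_kg": m}
        # Try each single-attribute change and keep the one that
        # lowers the risk score the most.
        candidates = [(w - step, m), (w, m + step)]
        w, m = min(candidates, key=lambda c: risk_score(*c))
    return None


# A high-risk starting point: the search suggests attribute changes
# that would bring the judged risk below the threshold.
plan = counterfactual(weight_kg=110, muscle_kg=20)
```

A real system would additionally need to respect which attributes are actually changeable, how costly each change is, and interactions between attributes, which is where providing genuinely actionable steps becomes difficult.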
While some existing techniques can suggest hypothetical attribute changes for individual cases when an undesirable outcome occurs, they do not offer concrete steps for improvement. Ultimately, this new technology offers the potential to improve the transparency and reliability of decisions made by AI, allowing more people in the future to interact with technologies that utilize AI with a sense of trust and peace of mind. Further details will be presented at AAAI-21, the Thirty-Fifth AAAI Conference on Artificial Intelligence.