PLUS: The latest AI and tech news.
By Jennifer Conrad | 06.28.21

A complication of infection known as sepsis is the number one killer in US hospitals. So it's not surprising that more than 100 health systems use an early warning system offered by Epic Systems. But, as Tom Simonite reports, a new study using data from nearly 30,000 patients in University of Michigan hospitals suggests Epic's system performs poorly. The authors say it missed two-thirds of sepsis cases, rarely found cases medical staff did not notice, and frequently issued false alarms. (The company disputes the findings, which were published last week in JAMA Internal Medicine.)

Automated sepsis warnings could be very valuable to hospitals because key symptoms of the condition can have other causes, making it difficult for staff to spot early. Starting sepsis treatment such as antibiotics just an hour sooner can make a big difference to patient survival.

The Bigger Issue

The findings illustrate a broader problem with the proprietary algorithms increasingly used in health care. "They're very widely used, and yet there's very little published on these models," says Karandeep Singh, an assistant professor at the University of Michigan who led the study. "To me that's shocking."

Read more about the study's findings here.

PLUS: Listen to Tom Simonite discuss the alarming blind spots in health care AI on the WIRED Gadget Lab podcast.

The National Institute of Standards and Technology (NIST) is a federal agency best known for measuring things like time or the number of photons that pass through a chicken. Now NIST wants to put a number on a person's trust in artificial intelligence, Khari Johnson writes. The researchers say they want to help businesses and developers who deploy AI systems make informed decisions and identify areas where people don't trust AI. NIST views the AI initiative as an extension of its more traditional work establishing reliable measurement systems.
Trust will be measured in two ways. A user trust potential score may consider things about a person using an AI system, such as their age, gender, cultural beliefs, and experience with other AI systems. The second score, the perceived system trustworthiness score, will cover more technical factors, such as whether an outdated user interface makes people call AI into doubt.

Why Put a Number on It?

The adoption of AI will slow or halt if users don't trust it, says NIST cognitive psychologist Brian Stanton. The agency's effort is a start, but researchers say future studies should look at factors like the impact of human emotion or whether information about an AI model can lead people to trust AI too much.

Find out more about efforts to quantify trust in AI here.
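NIST has not published a formula for these scores, so purely as an illustration, here is a minimal Python sketch of how a two-part trust measure could be combined in principle. Every name, field, and weight below is hypothetical, not NIST's actual methodology.

```python
# Hypothetical illustration only: NIST has not published a scoring formula.
# The fields and the simple averaging scheme are invented for this sketch.
from dataclasses import dataclass

@dataclass
class UserTrustPotential:
    # Factors about the person using the AI system (each 0.0-1.0).
    experience_with_ai: float
    comfort_with_automation: float

@dataclass
class PerceivedTrustworthiness:
    # More technical factors about the system itself (each 0.0-1.0).
    interface_quality: float       # e.g. an outdated UI can call the AI into doubt
    accuracy_track_record: float

def combined_trust(user: UserTrustPotential,
                   system: PerceivedTrustworthiness) -> float:
    """Average the two hypothetical component scores into one 0.0-1.0 value."""
    user_score = (user.experience_with_ai + user.comfort_with_automation) / 2
    system_score = (system.interface_quality + system.accuracy_track_record) / 2
    return (user_score + system_score) / 2

score = combined_trust(
    UserTrustPotential(experience_with_ai=0.8, comfort_with_automation=0.6),
    PerceivedTrustworthiness(interface_quality=0.4, accuracy_track_record=0.9),
)
print(round(score, 3))  # 0.675
```

The point of the sketch is only the structure the article describes: one score about the user, one about the system, reduced to a single number a business could track.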