By Cam Sivesind
Wed | Feb 21, 2024 | 4:33 AM PST

Senator Ron Wyden, D-Ore., recently proposed the Algorithmic Accountability Act, legislation that would require companies to assess their automated systems for accuracy, bias, and privacy risks. This includes artificial intelligence (AI) and machine learning (ML) systems that are increasingly used in healthcare.

For example, AI can analyze medical images to detect cancer or other diseases. It can also process insurance claims, schedule appointments, and even recommend treatment plans. The benefits are enormous. AI can free doctors to focus on patients rather than paperwork, lower costs, and even save lives by detecting diseases early.

However, there are also risks. AI systems can reflect and amplify real-world biases. Algorithms trained on limited or skewed data may discriminate against minorities and other groups. And if the software makes an incorrect diagnosis or recommendation, it could endanger patients. Even if AI is 99% accurate, that 1% could be life-threatening.

"From my viewpoint in AI governance, the Algorithmic Accountability Act represents a crucial stride toward aligning AI innovation with ethical responsibility. It serves as a thoughtfully designed framework for fostering a healthier AI and machine learning ecosystem," said KJ Haywood, Principal Chief Executive Officer. "However, while the efficiency gains for Medicare and Medicaid are undeniable, we must acknowledge the looming shadows of bias and oversight deficiencies. The Act serves as a hopeful compass, guiding us toward responsible AI adoption that champions fairness and privacy. Our pivotal task lies in harmonizing innovation with robust security safeguards."

Sen. Wyden delivered his full statement before the U.S. Senate Committee on Finance on February 8; here is a snippet from it:

"There's no doubt that some of this technology is already making our health care system more efficient. But some of these big data systems are riddled with bias that discriminate against patients based on race, gender, sexual orientation, and disability. It's painfully clear not enough is being done to protect patients from bias in AI."

Wyden's legislation aims to balance innovation with accountability. It does not ban or impose major restrictions on AI; rather, it requires regular impact assessments so that problems can be addressed quickly. Healthcare organizations would need to evaluate their AI for accuracy, fairness, and security—similar to testing new drugs for safety and efficacy before they reach patients.

This transparency could build public trust in healthcare AI. With rigorous testing and monitoring, these systems would probably improve outcomes and access for all groups. Healthcare innovators would also know their work is ethically and socially responsible.

"By embracing transparency and relentlessly pursuing equitable care, we can ensure that AI becomes a potent ally in healthcare—one that benefits all patients equally," Haywood said. "While I encourage the enthusiasm for equitable AI governance, it's also vital to emphasize that security must remain front and center as AI and ML continue to be integrated into our healthcare systems and devices."

Haywood recently wrote an article on "Watch Out! Governance Is in AI Waters – Do You Have Your Life Jacket?" for SecureWorld News.

In a field as critical as healthcare, oversight helps ensure technology heals more than it harms. The Algorithmic Accountability Act would allow for AI's benefits while empowering experts and officials to address risks proactively. With thoughtful safeguards in place, healthcare organizations can deploy AI confidently and conscientiously to enhance patient care.

"The real risk of AI that we acknowledge is the potential for business decisions to be made from data coming out of it that could have significant bias or inaccuracies," said Rick Doten, VP of Information Security at Centene Corporation. "What I like is (this subject matter) focuses on challenges with AI from a data quality standpoint (incoming), as opposed to the FUD of risks about people posting sensitive data (outgoing) to AI."

Doten will be speaking on "Understanding How Principles of Threat Intelligence Can Improve Use of AI" as the closing keynote at SecureWorld Charlotte on April 10.

Here are some additional public comments and perspectives on the cybersecurity risks of using AI in healthcare:

  • The American Medical Association (AMA) has raised concerns about the privacy and security of patient data used to develop healthcare AI models: "The [AMA] encourages stewardship and responsible use of health data to train algorithms that support clinical decision-making while ensuring privacy and security."
  • A viewpoint piece in JAMA Network argues that AI systems are vulnerable to data poisoning attacks and adversarial examples. The authors recommend security measures such as encryption, auditing, and red team testing before AI deployment in clinics.
  • Stanford computer scientists have published research showing how medical imaging AIs can be fooled by small image perturbations. They warn healthcare organizations to rigorously vet these systems since any hacking or manipulation could endanger patients.
  • Cybersecurity firm Darktrace reports that hackers are increasingly targeting healthcare AI with data poisoning, model theft, and adversarial attacks. They recommend best practices such as monitoring for data anomalies, keeping models updated, and conducting penetration testing.
  • HealthITSecurity writes that while AI holds promise for improving care, it also expands the attack surface for cybercriminals. They suggest oversight frameworks are needed so patient well-being doesn't suffer due to AI vulnerabilities.
  • An NHS digital leader commented that "AI is only as secure as the data used to train it," and stressed the need for governance and testing before healthcare algorithm deployment.

The key theme across these perspectives is balancing innovation with responsibility—leveraging AI to improve care while proactively addressing risks through security controls, testing, and governance. Oversight frameworks like Sen. Wyden's proposal could help achieve this balance.

RELATED: This U.S. NIST post from Feb. 15, "NIST Researchers Suggest Historical Precedent for Ethical AI Research," talks about how training AI systems on biased data is problematic.

To learn more and connect with cybersecurity leaders across the healthcare and medical sector, attend the SecureWorld Healthcare virtual conference on May 1, 2024. See the agenda and register for free here.
