Promoting responsible AI in health care

The path to responsible AI

At Kaiser Permanente, AI tools must support our core mission of providing affordable, high-quality care to our members. This means AI technologies must demonstrate a “health return,” such as improved patient outcomes and experiences.

We evaluate AI tools for safety, effectiveness, accuracy, and fairness. Kaiser Permanente is fortunate to have one of the most comprehensive data sets in the country, thanks to our diverse member base and powerful electronic health record system. We use this anonymized data to develop and test AI tools before deploying them for our patients, care providers, and communities.

We take care to ensure that the AI tools we use support the delivery of evidence-based, equitable care for our members and communities. We do this by testing and validating the accuracy of AI tools across our diverse patient populations. We are also working to develop and deploy AI tools that can help us proactively identify and address the health and social needs of our members, which can lead to more equitable health outcomes.

Finally, once a new AI tool has been implemented, we continuously monitor its performance to make sure it is working as intended. We remain vigilant, because AI technology is advancing rapidly and its applications are constantly changing.

Policymakers can help set up guardrails

As Kaiser Permanente and other leading healthcare organizations work to advance responsible AI, policymakers also have a role to play. We encourage action in the following areas:

  • A national AI oversight framework: An oversight framework should provide an overall structure for guidelines, standards, and tools. It must be flexible and adaptable to keep pace with rapidly evolving technology, as new advances in AI occur every month.
  • Rules governing AI in healthcare: Policymakers should work with healthcare leaders to develop industry-specific national standards governing the use, development, and ethics of AI in healthcare. By working closely with healthcare leaders, policymakers can set standards that are effective, useful, timely, and not overly prescriptive. This is important because overly rigid standards can stifle innovation, which would limit the ability of patients and providers to experience the many benefits that AI tools can help deliver.

Guardrails: Progress so far

The National Academy of Medicine convened a steering committee to establish an AI Code of Conduct for healthcare, drawing on technology and healthcare experts, including Kaiser Permanente. This is a promising start toward developing an oversight framework.

In addition, Kaiser Permanente appreciates the opportunity to be an inaugural member of the U.S. AI Safety Institute Consortium. The consortium is a multi-sector working group that develops safety standards for the development and use of AI, with a commitment to protecting innovation.

Considerations for policymakers

As policymakers develop AI standards, we urge them to consider a few important points.

  • Lack of coordination creates confusion. Government agencies should coordinate at the federal and state levels to ensure AI standards are consistent and not duplicative or conflicting.
  • Standards must be adaptable. As healthcare organizations continue to explore new ways to improve patient care, it is important that they work with regulators and policymakers to ensure that organizations of all sizes and levels of sophistication and infrastructure can adopt the standards. This will allow all patients to benefit from AI technologies while being protected from potential harm.

AI has enormous potential to help make our nation’s healthcare system more robust, accessible, efficient, and equitable. At Kaiser Permanente, we are excited about the future of AI and eager to work with policymakers and other healthcare leaders to ensure that all patients can benefit.
