## Project Support: Banner Health Medical Informatics, UACOM, and UA Data Science Institute
## Email: ajay.perumbeti@bannerhealth.com; aperumbe@arizona.edu
## Program: UACOM-P/Banner Health Systems Clinical Informatics; UA Data Science Institute
## Project Title: Preparing Physicians for Artificial Intelligence Tools in Medical Practice
## Preparing Banner Physicians for Artificial Intelligence Product Deployment in Medical Practice
## Background
Artificial intelligence (AI) products in the healthcare market are forecast to reduce US healthcare spending by 5-10% ($200-360 billion), driving accelerated interest and adoption (1). This interest has been turbocharged by generative AI applications such as ChatGPT, with their astonishing ability to summarize information and generate appropriate, human-like responses when interacting with people. The accelerating development of healthcare AI has forced the U.S. government and regulatory agencies worldwide to respond with guidance to ensure medical safety and ethical use (2,3).
Although automating healthcare processes and decision-making with AI is anticipated to both increase efficiency and improve patient outcomes, AI can also be error-prone, susceptible to sudden failure, and biased (2-5). These issues make it critical to arm healthcare providers with knowledge of how to best use AI products and to ensure transparency, realistic expectations, and patient safety.
The challenge of AI education for healthcare providers, particularly physicians, is that they are already overburdened with clinical and administrative duties. Introducing additional general AI education, such as e-learning on AI concepts, may not be digestible or relevant, and risks worsening physician burnout and contributing to AI adoption failures.
More engaging methods for beginning physician AI education include in-context approaches such as just-in-time learning, case-based instruction embedded in physician practice, and peer-led learning that focuses on physicians who exhibit an affinity for technology. These physicians can then go on to become AI champions and super-users for a healthcare system.
One tool for implementing in-context AI education is the Model Card. A Model Card is a one-page instructional sheet that does not aim to be an exhaustive manual; instead, it offers a concise compilation of vital information, guiding clinicians on when and how to use an AI model and, more importantly, when not to (4). A physician familiar with Model Cards can pick up an AI tool and immediately glean a basic understanding of how best to use it.
An example is the sepsis prediction Model Card in Figure 1, which warns clinicians not to venture beyond the scope of the model's validation and anticipates inappropriate use cases (5).
## Figure 1: Example of a Model Card, taken from Sendak et al. (5)
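For illustration, the kinds of fields such a card captures can be expressed as a simple data structure. The Python sketch below is a minimal, hypothetical rendering: the section names are paraphrased from the "Model Facts" label in Sendak et al. (5), and the exact fields used in this pilot will be decided with the service lines.

```python
# Minimal sketch of the fields a clinical AI Model Card might capture.
# Section names paraphrased from the Model Facts label (Sendak et al., 5);
# the pilot's final field set is still to be decided.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    summary: str                # one-sentence description of what the model does
    mechanism: str              # model type, inputs, and output definition
    validation: str             # validation population, site, and performance
    uses_and_directions: str    # when and how clinicians should act on the output
    warnings: list[str] = field(default_factory=list)  # when NOT to use the model
```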
## Proposed Objectives
To develop a physician workforce comfortable with using AI products, we propose a pilot educational program for training residents and/or Banner physicians who anticipate AI deployment in their service line on: 1. how to evaluate a new AI product being implemented in practice; 2. what an AI Model Card looks like; 3. how to interpret an AI Model Card; and 4. how to create a Model Card if a product does not have one.
## Description
The educational intervention will be a collaborative effort between Banner Medical Informatics, the University of Arizona College of Medicine-Phoenix, the University of Arizona Data Science Institute, and Banner service lines. Early-adopter providers in the Banner Internal Medicine Residency and Radiology Program have agreed to be part of the development team. An unstructured interview will be held with each service line representative to explore high-priority goals for AI education in the context of Model Cards and appropriate time allocation (range: 1-4 one-hour sessions). Medical Informatics and the UA Data Science Lab will then construct and deliver a Model Card curriculum based on service line priorities and time constraints. The Model Cards used will simulate real-life AI models in use, covering service-relevant topics such as length of stay, sepsis, and image analysis. In addition, if Banner Health requests support for an anticipated clinical AI deployment during the pilot period, we will assist in developing a Model Card and training for that deployment.
## Metrics/Objectives
The first deliverable will be the summarized service-line-specific goals and time constraints for AI education. The second deliverable will be the fold change in end-user comfort with utilizing AI and Model Cards following the educational intervention, measured by 5-minute pre- and post-session surveys administered concurrently with the education session, with questions based on prior surveys (6) and working-group consensus. The third deliverable will be recordings of sessions for rebroadcast. Participants may be asked to volunteer to assist with future training of Banner providers.
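As a concrete illustration of the second deliverable, the fold-change metric could be computed as below. This is a minimal sketch under assumed conditions: the 1-5 Likert scale, the paired pre/post responses, and aggregation by cohort mean are placeholders pending working-group consensus on the survey instrument.

```python
# Minimal sketch of the comfort fold-change metric (assumed analysis;
# scale and aggregation are placeholders pending working-group consensus).
import statistics

def comfort_fold_change(pre_scores: list[int], post_scores: list[int]) -> float:
    """Fold change in mean self-reported comfort (assumed 1-5 Likert scale)
    from the pre-session survey to the post-session survey."""
    return statistics.mean(post_scores) / statistics.mean(pre_scores)

# Example: mean comfort rising from 2.0 to 3.0 is a 1.5x fold change.
pre = [2, 1, 3, 2, 2]   # pre-session responses
post = [3, 2, 4, 3, 3]  # post-session responses from the same participants
print(f"Comfort fold change: {comfort_fold_change(pre, post):.2f}")
```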
This project does not request specific Banner resources other than participant time. It leverages its status as a designated UA Data Science Institute Data Science Lab project for dedicated expertise. It will take advantage of open-source or free-license services for AI Model Card development from organizations such as Wikimedia, Alphabet, and Meta, as well as the software development capabilities of the project lead and the Data Science Lab.
The value of this pilot project is AI education for a cohort of physicians, iteration of future AI education based on end-user and instructor feedback, and movement toward scalable AI education for physicians across Banner. The long-term goal of this work is to develop process automation strategies to build a sustainable Healthcare AI Model Card Education Toolkit that is scalable and modular enough to customize to end-user needs. The pilot will provide data for grant and institutional funding requests to iterate and optimize AI education for physicians at Banner, with the aim of improving AI deployment outcomes and reducing the risk of high-cost AI failures.
## References
1. Sahni, N. R., Stein, G., Zemmel, R., & Cutler, D. (2023, January). The potential impact of artificial intelligence on healthcare spending. Paper prepared for the NBER Economics of Artificial Intelligence Conference, September 2022.
2. The White House. (2023, October 30). FACT SHEET: President Biden issues executive order on safe, secure, and trustworthy artificial intelligence. Retrieved from https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
3. World Health Organization. (2023). Regulatory considerations on artificial intelligence for health. World Health Organization. https://iris.who.int/handle/10665/373421. License: CC BY-NC-SA 3.0 IGO
4. de Biase, A., Sourlos, N., & van Ooijen, P. M. A. (2022). Standardization of Artificial Intelligence Development in Radiotherapy. Seminars in radiation oncology, 32(4), 415–420. https://doi.org/10.1016/j.semradonc.2022.06.010
5. Sendak, M. P., Gao, M., Brajer, N., et al. (2020). Presenting machine learning model information to clinical end users with model facts labels. NPJ Digital Medicine, 3, 41. https://doi.org/10.1038/s41746-020-0253-3
6. Chen, M., Zhang, B., Cai, Z., Seery, S., Gonzalez, M. J., Ali, N. M., Ren, R., Qiao, Y., Xue, P., & Jiang, Y. (2022). Acceptance of clinical artificial intelligence among physicians and medical students: A systematic review with cross-sectional survey. Frontiers in medicine, 9, 990604. https://doi.org/10.3389/fmed.2022.990604
## Service-Specific Contextualization for Model Card Education
### Survey-Based Approach for Physician Leaders, Residency Program Leaders, and Medical Student Leaders
## Sequenced Steps for Pilot
1. Socialize AI education using Model Cards and generate a list of physician leaders who would participate; identify the best way to engage these leaders (not all of them need to participate).
2. Survey clinical use cases for AI in medicine to inform mock Model Cards.
3. Decide which Model Cards are appropriate for specialty-specific training.
4. Match use cases to Model Cards.
5. Create a demo/mock Model Card (see the sketch following this list).
6. Decide which tools to use.
7. Decide which content to include.
8. Generate the Physician Leader Structured Interview and analyze results (Deliverable 1).
9. Create educational content for Model Card education.
10. Schedule and deploy Model Card education and analyze results (Deliverable 2).
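The sketch below illustrates step 5: rendering one hypothetical mock Model Card (a sepsis prediction example mirroring the fields outlined in the Background section) as a one-page markdown handout. All field values are illustrative placeholders, not a description of any real Banner deployment.

```python
# Minimal sketch of step 5: rendering a mock Model Card as a markdown handout.
# All values are illustrative placeholders, not a real Banner deployment.
sepsis_card = {
    "model_name": "Mock Sepsis Early Warning Model",
    "summary": "Flags adult inpatients at elevated risk of sepsis within 4 hours.",
    "mechanism": "Classifier over vitals and labs; outputs a 0-100 risk score.",
    "validation": "Validated on adult inpatient encounters at a single center.",
    "uses_and_directions": "Use as a prompt to reassess the patient, not as a diagnosis.",
    "warnings": [
        "Not validated for pediatric or emergency department patients.",
        "Performance may degrade if vitals documentation is delayed.",
    ],
}

def render_markdown(card: dict) -> str:
    """Render the card as a one-page markdown handout for training sessions."""
    lines = [
        f"# Model Card: {card['model_name']}",
        f"**Summary:** {card['summary']}",
        f"**Mechanism:** {card['mechanism']}",
        f"**Validation:** {card['validation']}",
        f"**Uses and directions:** {card['uses_and_directions']}",
        "**Warnings (when NOT to use):**",
    ] + [f"- {w}" for w in card["warnings"]]
    return "\n".join(lines)

print(render_markdown(sepsis_card))
```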
## Physician Leader Structured Interview (30 minutes) (Deliverable 1)
- Describe the AI physician education problem and Model Card training as a potential solution.
- Do physician leaders think Model Cards contain everything a physician needs to deploy AI?
- Do physician leaders think Model Card training will increase physician trust in AI deployment?
- Understand which components of Model Cards physician leaders think are most important and where physicians need the most education.
- How much time should be dedicated to Model Card training (1-4 hours)?
- Would they support their physicians receiving Model Card training in their training program or service line?
## Model Card Training Objectives (1-4 hours) (Deliverable 2)
- Do Model Cards contain everything a physician needs to deploy AI?
- Does Model Card training increase physician trust in AI deployment?
- Would participants request a Model Card in the future for an AI deployment?
- Was the amount of time dedicated to Model Card training appropriate (too short, too long, just right), and which parts of the training should be shortened or lengthened?
- Would Model Card training work if offered online or as recorded e-learning? Why or why not?
- Would physicians feel comfortable clarifying Model Card content with tools such as LLM prompts to interrogate Model Cards (including recognizing LLM errors) or calling an informatics consult, and which of these options would be most helpful? (A hypothetical prompt sketch follows below.)
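As a hypothetical illustration of the LLM-assisted option above, a training session could demonstrate a grounded prompt like the sketch below. The prompt wording, the instruction to refuse unanswerable questions, and the example question are all assumptions for illustration, not finalized curriculum content.

```python
# Hypothetical prompt template for interrogating a Model Card with an LLM.
# The grounding rules and example question are illustrative assumptions.
MODEL_CARD_QA_PROMPT = """\
You are assisting a physician who is reading the AI Model Card below.
Answer ONLY from the Model Card text. If the card does not contain the
answer, reply "Not stated in the Model Card" and recommend an
informatics consult rather than guessing.

Model Card:
{model_card_text}

Physician question:
{question}
"""

def build_prompt(model_card_text: str, question: str) -> str:
    """Fill the template; the result would be sent to whichever LLM the session uses."""
    return MODEL_CARD_QA_PROMPT.format(
        model_card_text=model_card_text, question=question
    )

# Example usage with a fragment of a mock card.
print(build_prompt(
    "Validation: adult inpatient encounters at a single center.",
    "Can I rely on this model for pediatric ICU patients?",
))
```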