AI in Veterinary Medicine
===

[toc]

**Table**
---

- **Choose & Define a Problem:** Papers that concentrate on defining a problem for the application of AI.
- **Identify & Collect Data:** Papers on data collection, processing methods, and validation procedures.
- **Ethical Implications:** Papers addressing the ethical dimensions, privacy concerns, and fairness aspects associated with AI applications.
- **Model Development:** Papers on AI model development and categories of AI.
- **Model Evaluation:** Papers discussing methodologies for the thorough evaluation of AI systems.
- **Implementation:** Papers outlining strategies for the successful implementation of AI.
- **AI Human Behaviour:** Papers focusing on AI–human interactions and behaviour.
- **General:** General review papers.

| Title | Author | Year | Topic | Phase | Publication | Link |
|---|---|---|---|---|---|---|
| The potential for artificial intelligence<br>in healthcare | Davenport et al. | 2019 | AI in Healthcare | General | Future Healthcare Journal | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616181/ |
| Artificial intelligence in healthcare | Yu et al. | 2018 | AI in Healthcare | General | Nature Biomedical Engineering | https://www.nature.com/articles/s41551-018-0305-z |
| AI in health and medicine | Rajpurkar et al. | 2022 | AI in Healthcare | General | Nature Medicine | https://www.nature.com/articles/s41591-021-01614-0 |
| Developing, implementing and governing <br>artificial intelligence in medicine: <br>a step-by-step approach to prevent <br>an artificial intelligence winter | Van de Sande et al. | 2022 | AI in Healthcare | General / Implementation | BMJ Health & Care Informatics | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8860016/ |
| Artificial intelligence in veterinary medicine | Basran et al. | 2022 | AI in Vetmed | General | AVMA | https://avmajournals.avma.org/view/journals/javma/260/8/javma.22.03.0093.xml |
| The unmet potential of artificial<br> intelligence in veterinary medicine | Basran et al. | 2022 | AI in Vetmed | General | AVMA | https://avmajournals.avma.org/view/journals/ajvr/83/5/ajvr.22.03.0038.xml |
| Who Goes First? Influences of Human-AI<br>Workflow on Decision Making in Clinical Imaging | Fogliato et al. | 2022 | AI-human interaction | AI human behaviour (Vets) | Association for Computing Machinery | https://dl.acm.org/doi/10.1145/3531146.3533193 |
| Do no harm: a roadmap for <br>responsible machine learning<br>for health care | Wiens et al. | 2019 | ML for Healthcare | Define a problem / General | Nature Medicine | https://www.nature.com/articles/s41591-019-0548-6 |
| SMART on FHIR: a standards-based, <br>interoperable apps platform for <br>electronic health records | Mandel et al. | 2016 | EHR | Identify & Collect Data | American Medical Informatics <br>Association | https://doi.org/10.1093/jamia/ocv189 |
| Data Quality Considerations for <br>Big Data and Machine Learning: <br>Going Beyond Data Cleaning and <br>Transformations | Gudivada et al. | 2017 | Big data / ML | Identify & Collect Data | International Journal on Advances <br>in Software | http://personales.upv.es/thinkmind/dl/journals/soft/soft_v10_n12_2017/soft_v10_n12_2017_1.pdf |
| A Survey of Data Quality Requirements<br>That Matter in ML Development Pipelines | Priestley et al. | 2023 | Data / ML | Identify & Collect Data | Journal of Data and Information Quality | https://dl.acm.org/doi/10.1145/3592616 |
| The FAIR Guiding Principles for scientific<br>data management and stewardship | Wilkinson et al. | 2016 | Data management | Identify & Collect Data | Scientific Data | https://www.nature.com/articles/sdata201618 |
| Explainability for artificial intelligence<br>in healthcare: a multidisciplinary perspective | Amann et al. | 2020 | XAI in Healthcare | Model Development | BMC Medical Informatics and <br>Decision Making | https://doi.org/10.1186/s12911-020-01332-6 |
| Automated machine learning: Review of the <br>state-of-the-art and opportunities for healthcare | Waring et al. | 2020 | AutoML in Healthcare | Model Development | Artificial Intelligence in Medicine | https://www.sciencedirect.com/science/article/pii/S0933365719310437 |
| Taking Human out of Learning Applications: <br>A Survey on Automated Machine Learning | Yao et al. | 2019 | AutoML | Model Development | arXiv | http://arxiv.org/abs/1810.13306 |
| Applications of continual learning machine<br>learning in clinical practice | Lee et al. | 2020 | Continual ML in Healthcare | Model Development | The Lancet Digital Health | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8259323/ |
| Trust in Automation: Designing <br>for Appropriate Reliance | Lee et al. | 2004 | Trustworthy AI | Ethical Implications | Human Factors | https://journals.sagepub.com/doi/abs/10.1518/hfes.46.1.50_30392 |
| Ethics of using artificial intelligence<br>(AI) in veterinary medicine | Coghlan et al. | 2023 | Ethics of AI in Vetmed | Ethical Implications | AI & SOCIETY | https://doi.org/10.1007/s00146-023-01686-1 |
| Principles of veterinary medical ethics of <br>the AVMA \| American Veterinary Medical Association | AVMA | 2016 | Ethics of AI in Vetmed | Ethical Implications | AVMA | https://www.avma.org/resources-tools/avma-policies/principles-veterinary-medical-ethics-avma |
| Methodologic Guide for Evaluating Clinical <br>Performance and Effect of Artificial Intelligence<br>Technology for Medical Diagnosis and Prediction | Park et al. | 2018 | AI eval in Healthcare | Model Evaluation | Radiology | https://pubs.rsna.org/doi/full/10.1148/radiol.2017171920 |

**Notes**
---

- ## Papers to tackle difficult questions [Google Doc](https://docs.google.com/document/d/1FFDEHwRaC_YpPzMQ6-BD3NwdCLX4PPUPIYVfEIxm1E4/)
- ### Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction [parkMethodologicGuideEvaluating2018](/Pi5nzouFRt6reIJRyUnRVQ)
- ### Do no harm: a roadmap for responsible machine learning for health care [wiensNoHarmRoadmap2019](/q0K1XCGkTBequnBVL4x9oA)
- ### Artificial intelligence in veterinary diagnostic imaging: A literature review [link](https://doi.org/10.1111/vru.13163)
- ### Mars papers: [PapersFromMars](/TjMKHqLYQ0m9XmTYHpvGzg)

<br />
<br />

## Explainability for artificial intelligence in healthcare: A multidisciplinary perspective

<mark style="background-color: #ffd400">Quote</mark>
> explainability may be a key driver for the uptake of AI-driven CDSS in clinical practice, as trust in these systems is not yet established [22, 23]. Here, it is important to note that any use of AI-based CDSS may influence a physician in reaching a decision. It will, therefore, be of critical importance to establish transparent documentation on how recommendations were derived.

<mark style="background-color: #ffd400">Quote</mark>
> explainable AI decision support systems may not only contribute to patients feeling more knowledgeable and better informed but could also promote more accurate risk perceptions [34, 35]. This may, in turn, boost patients’ motivation to engage in shared decision-making and to act upon risk-relevant information

<mark style="background-color: #ffd400">Quote</mark>
> At times, it might be tempting to prioritize accuracy and simply refrain from investing resources into developing explainable AI.
> Yet to ensure that AI-powered decision support systems realize their potential, developers, and clinicians need to be attentive to the potential flaws and limitations of these new tools. Thus, also from the justice perspective, explainability becomes an ethical prerequisite for the development and application of AI-based clinical decision support.

## Artificial intelligence in veterinary medicine

<mark style="background-color: #ffd400">Quote</mark>
> it is not necessary for the average veterinary practitioner to have a working knowledge of computer programming to effectively use and implement AI

<mark style="background-color: #ffd400">Quote</mark>
> a baseline level of knowledge is needed to understand the power and pitfalls of AI.

<mark style="background-color: #ffd400">Quote</mark>
> artificial narrow intelligence.

<mark style="background-color: #ffd400">Quote</mark>
> Without veterinarians taking an active role in asking these questions when they are offered an AI-based solution, there is a risk of prioritizing profits over clinical outcomes and the well-being of both veterinary professionals and their patients. The goal of AI should be to improve veterinary practice, animal health outcomes, patient quality of life, and the lives of veterinarians. This requires well-thought-out use cases and active veterinary stakeholders.

<mark style="background-color: #ffd400">Quote</mark>
> Data must be labeled in most cases, representing a role for veterinarians that should not be underplayed

<mark style="background-color: #ffd400">Quote</mark>
> Bias is an additional consideration of concern in AI. Bias may arise from training sets that have skewed breed or geographic distributions or imposed by the method of curating the ground truth.
<mark style="background-color: #ffd400">Quote</mark>
> Big data approaches in health care can be data rich but information poor

<mark style="background-color: #ffd400">Quote</mark>
> It is our opinion that open data is good for veterinary medicine. This promotes the ideal goals of AI rather than monetization. However, it must be acknowledged that there is a cost associated with data storage and sharing. This cost is likely to fall on corporations and organizations that will likely retain at least some of the rights to the data.

<mark style="background-color: #ffd400">Quote</mark>
> Partnerships between academic institutions, granting agencies, and professional organizations may be necessary to facilitate open, curated, and good veterinary data for the community.

<mark style="background-color: #ffd400">Quote</mark>
> who owns the data

<mark style="background-color: #ffd400">Quote</mark>
> confidentiality and security as it relates to patient information.

<mark style="background-color: #ffd400">Quote</mark>
> no regulatory framework for AI in veterinary medicine

<mark style="background-color: #ffd400">Quote</mark>
> Some critical responsibilities include establishing use cases, fiscal responsibility when AI technologies are purchased, sufficient infrastructure (software or hardware) and resource support, establishing good data management principles, thorough acceptance testing and clinical deployment including training and education, and comprehensive quality assurance

<mark style="background-color: #ffd400">Quote</mark>
> Veterinary professionals can promote acceptance with transparency and client education

## The unmet potential of artificial intelligence in veterinary medicine

<mark style="background-color: #ffd400">Quote</mark>
> Research on the use of NLP and AI with medical images to detect, predict, and classify disease will continue to grow alongside improvements in ML methods.
> Various -omics profiling, such as proteomics, metabolomics, genomics, transcriptomics, and dosiomics, are now feasible and attainable in veterinary medicine. When various -omics data are combined, data sets become larger and more challenging to process.53 Analysis of multiomics data will require sophisticated data reduction and feature selection techniques, but it has the potential to offer improved diagnostics and more effective patient-specific treatment strategies

<mark style="background-color: #ffd400">Quote</mark>
> Translational research opportunities also exist in veterinary medicine, particularly within the One Health paradigm of human and animal care.56 Translational research is based on multidisciplinary collaborations among laboratory and clinical researchers and different communities in pursuit of more effective treatments and best practices.

<mark style="background-color: #ffd400">Quote</mark>
> Addressing these and other data challenges in veterinary medicine requires a good working knowledge of database management and techniques to manage disparate data types.

<mark style="background-color: #ffd400">Quote</mark>
> the environment in which the model is developed is measurably different than the environment in which the model is deployed

<mark style="background-color: #ffd400">Quote</mark>
> A module on the fundamentals of AI in veterinary medicine should become the norm in veterinary curricula, not only to empower future practitioners on the utility of AI in their everyday practice but also to provide them with the expertise to understand the limitations of AI and adopt best practices when using it.
<mark style="background-color: #ffd400">Quote</mark>
> AI-based decision support tools permit an opportunity to work smarter

<mark style="background-color: #ffd400">Quote</mark>
> Artificial intelligence projects in healthcare

## Ethics of using artificial intelligence (AI) in veterinary medicine

<mark style="background-color: #ffd400">Quote</mark>
> AI ethics also borrows from medical ethics (Mittelstadt 2019) and its four widely accepted bioethical principles: nonmaleficence (do no harm), beneficence (do good), respect for autonomy (respect a person’s ability to act on their own values and preferences), and justice (e.g. ensure fair distribution of medical resources) (Beauchamp and Childress 2001).

<mark style="background-color: #ffd400">Quote</mark>
> We strongly recommend that the veterinary profession not allow AI developers, AI companies and insurance providers to dictate the design and uses of AI without proper consideration of relevant concerns, risks and ethical values. Awareness of commercial overhyping of AI and potential exploitation of animals and clients would be wise. Ongoing conversations may need to occur between practitioners, veterinary organisations, insurance companies, AI vendors and AI experts that address the ethical issues we identified (Table 1).

## The potential for artificial intelligence in healthcare

<mark style="background-color: #ffd400">Quote</mark>
> The greatest challenge to AI in these healthcare domains is not whether the technologies will be capable enough to be useful, but rather ensuring their adoption in daily clinical practice. For widespread adoption to take place, AI systems must be approved by regulators, integrated with EHR systems, standardised to a sufficient degree that similar products work in a similar fashion, taught to clinicians, paid for by public or private payer organisations and updated over time in the field.
<mark style="background-color: #ffd400">Quote</mark>
> we expect to see limited use of AI in clinical practice within 5 years and more extensive use within 10.

## Who Goes First? Influences of Human-AI Workflow on Decision Making in Clinical Imaging

<mark style="background-color: #ffd400">Quote</mark>
> In the one-step workflow, participants were asked to identify radiographic findings given AI inferences and X-ray images at the same time.

<mark style="background-color: #ffd400">Quote</mark>
> Our findings revealed that radiologists’ diagnoses were more aligned with AI advice when it was shown immediately than in workflows where AI inferences were displayed after the radiologist had rendered a provisional assessment. The alignment, however, was similar across workflows for findings that were considered to be critical for the animal. Diagnoses made in the one-step workflow were characterized by marginal gains in diagnostic performance and higher levels of inter-rater reliability compared to those in the two-step workflow.

## AI in health and medicine

<mark style="background-color: #ffd400">Quote</mark>
> Decentralizing data storage is one way to reduce the potential damage of any individual hack or data leak. The process of federated learning facilitates such decentralization while also making it easier to collaborate across institutions without complicated data-sharing agreements

<mark style="background-color: #ffd400">Quote</mark>
> Bias can creep in due to other design choices, such as the choice of target label. For example, a risk-assessment algorithm used to guide clinical decision-making for 200 million patients was found to give racially biased predictions, such that white patients assigned a certain predicted risk score tended to be healthier than Black patients with the same score. This bias was due in large part to the original labels used in training.
> The system was trained to predict future healthcare costs, but because Black patients had historically received less expensive care than white patients due to existing systematic biases, the system reproduced those racial biases in its predictions114.
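The federated learning quote above describes a process rather than a specific implementation. As a minimal sketch of the idea (a toy federated-averaging loop with invented data, not code from the paper): each clinic fits a model update on its own private records, and only the model parameters, never the raw data, are sent to a central server for averaging.

```python
# Toy sketch of federated averaging (FedAvg-style). All names and numbers
# are hypothetical; each "clinic" keeps its records local and shares only
# its updated model weight with the server.
import random

random.seed(1)

def make_data(n=200):
    """One clinic's private records, drawn from the same rule y = 2x + noise."""
    return [(x, 2.0 * x + random.gauss(0, 0.1))
            for x in (random.uniform(0, 1) for _ in range(n))]

def local_update(w, data, lr=0.05, epochs=5):
    """Fit y ≈ w*x by SGD on one clinic's data; raw records never leave the site."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x   # gradient step on the local loss
    return w

clinics = [make_data() for _ in range(3)]

w = 0.0                                   # global model parameter
for _ in range(10):                       # communication rounds
    local_weights = [local_update(w, d) for d in clinics]
    w = sum(local_weights) / len(local_weights)   # server averages weights only

print(f"learned slope: {w:.2f}")          # should land near the true slope 2.0
```

The design point is what crosses the network: a single float per clinic per round here, instead of thousands of patient records, which is why federated setups can sidestep complicated data-sharing agreements.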
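The label-choice bias described in the last quote can be reproduced in a few lines. The simulation below uses entirely made-up numbers (it is not the dataset from the cited study): both groups have the same distribution of true health need, but group B historically incurs lower costs for the same need, so a model scored on cost assigns the same risk score to sicker group-B patients.

```python
# Toy simulation of proxy-label bias: training on "cost" instead of "need".
# Groups, effect sizes, and distributions are hypothetical illustrations.
import random

random.seed(0)

def simulate(group, n=20_000):
    """Generate (true_need, observed_cost) pairs for one group."""
    patients = []
    for _ in range(n):
        need = random.uniform(0, 10)                 # true health need
        factor = 1.0 if group == "A" else 0.6        # group B gets cheaper care
        cost = need * factor + random.gauss(0, 0.3)  # observed spending
        patients.append((need, cost))
    return patients

a, b = simulate("A"), simulate("B")

def band(patients, lo=3.0, hi=4.0):
    """True need of patients whose cost-based risk score falls in [lo, hi]."""
    return [need for need, cost in patients if lo <= cost <= hi]

def mean(xs):
    return sum(xs) / len(xs)

# Even a *perfect* cost predictor inherits the bias: at the same score,
# group B patients are substantially sicker than group A patients.
print(f"mean true need at the same score: "
      f"group A={mean(band(a)):.1f}, group B={mean(band(b)):.1f}")
```

The model here is not "wrong" about cost; the bias enters entirely through choosing cost as the training label, which is the point the quote makes.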