Artificial intelligence (AI) is transforming the research landscape, creating opportunities for breakthroughs that can greatly enhance human health and behavior. AI encompasses chatbots, algorithms, deep learning, machine learning, predictive analytics, and more.

Acknowledging both the opportunities and challenges presented by AI, UW-Madison has created guidance to assist the research community in navigating this evolving field. This guidance shares best practices for conducting human subjects research involving AI, including definitions of artificial intelligence and machine learning, requirements for IRB review, and additional resources. Please consider the following before submitting a human subjects research application.


AI Definitions

Artificial Intelligence: refers both to machine behavior that mimics human intelligence and to the field of study focused on this type of intelligence. AI consists of computer programs that are typically built to adaptively update and enhance their own performance over time. These programs are used to process, analyze, and recognize patterns in large datasets, and they use those patterns to improve at completing tasks or solving problems.

Generative AI: a system of algorithms or computer processes that can create novel output in text, images, or other media based on user prompts. These systems are created by programmers who train them on large sets of data. The AI learns by finding patterns in the data and can then provide novel outputs to users’ queries based on its findings. Examples include ChatGPT, Bing, and Microsoft Copilot.

Machine Learning: a type of artificial intelligence that involves sophisticated algorithms that can be trained to sort information, identify patterns, and make predictions within large datasets without being explicitly programmed to do so.

The above definitions were taken from the National Library of Medicine.

Traditional AI: relies on pre-programmed rules and algorithms to perform specific tasks. It’s good at solving well-defined problems and repetitive tasks, but unlike Generative AI, it can’t adapt to new situations or create new ideas. Examples include voice assistants such as Siri and Alexa or Google’s search engine.

AI & IRB Review

Please see HRP-310 – Human Subjects Research Determination and consult with the IRB office to determine whether IRB review is needed. If the study requires IRB review, the IRB will need the information described below to conduct its review.

The following information should be included in the IRB application or protocol. This is not an exhaustive list. Refer to HRP-337 – Artificial Intelligence (AI) / Machine Learning (ML) Technologies for additional information to include in the IRB application or protocol. For healthcare-related AI/ML software functions, see also HRP-307 – WORKSHEET – Devices.

  • The purpose of the technology. This should be explained in an understandable and transparent manner.
  • The current stage of the technology as used in the study under review. Future use of the AI technology, if relevant, should also be described; however, it should be clear that research on future uses will be conducted only after a change of protocol or a new application is submitted. This is especially important in studies of health-related AI, as the stage affects regulatory determinations made by the IRB. For non-health-related AI/ML software functions, a non-protocol-based application can be used regardless of the technology's stage. For healthcare-related AI/ML software functions, see HRP-307 – WORKSHEET – Devices to determine which application type is most appropriate at each stage of the technology.
    • Stage 1: Training / proof of concept. Training of the model. May include secondary data analysis.
      • Verifies that the algorithm correctly processes input data, that output predictions align with expectations, and/or that there is a valid association between the output and the target of the output.
    • Stage 2: Pilot testing / validation. Testing of the model. May include secondary data analysis.
      • Early feasibility, preliminary safety and performance.
      • Determines whether the software function meets technical requirements and generates evidence that the output is technically what was expected for the intended use and/or will likely achieve meaningful outcomes through predictable and reliable use.
    • Stage 3: Real-world testing and deployment. Establishes/verifies safety, device performance, benefits, effectiveness. May be deployed in a live environment.
      • May impact research participants, involve interaction or intervention with subjects, or affect patient care by exposing healthcare providers to outputs (e.g., deployment in the Electronic Health Record).
  • If the study involves interaction or intervention with AI, describe AI’s role in the interaction or intervention along with the following:
    • A description of the data that the AI technology will be designed to collect;
    • Documentation of the parameters or limits placed on the AI tool for the interaction or intervention, data collection, and (if applicable) data analysis;
    • Scripts or texts of instructions that will be read or provided to participants as part of the interaction or intervention with the AI technology; and
    • A plan to monitor the safety of participants and their data during and after the interaction or intervention.
  • How the confidentiality of the data to which the AI technology has access will be protected. Communicate the AI technology's terms of use related to confidentiality to the IRB and, via the consent form, to participants. If there is no guarantee that the information provided will remain confidential, the IRB and participants must be told.
  • The AI technology’s ability to create indirect identifiers within a dataset. Describe how the study team may limit the AI technology’s access to demographic information and other data points that could potentially reidentify an individual (e.g., combinations of data points), and be prepared for the possibility that the IRB will require investigators to place limitations and parameters on the AI technology’s data use.
  • How privacy will be maintained.
  • Risks and benefits of the AI technology, if applicable, and how those risks will be mitigated, including approaches that will be used to assess and mitigate bias.

Some of the information in this section has been adapted, with permission, from the University of Tennessee-Knoxville and the University of Minnesota-Twin Cities guidance on AI.

Additional Resources