If the study requires IRB review, the IRB needs certain essential information to conduct its review. Please see HRP-310 – Human Subjects Research Determination and consult with the IRB office to determine whether IRB review is needed.

The following information should be included in the IRB application or protocol. This is not an exhaustive list. Refer to HRP-337 – Artificial Intelligence (AI) / Machine Learning (ML) Technologies for additional information to include in the IRB application or protocol. For health care-related AI/ML software functions, see also HRP-307 – WORKSHEET – Devices.

  • The purpose of the technology. This should be explained in an understandable and transparent manner.
  • The current stage of the technology as used in the study under review. Future use of the AI technology, if relevant, should also be described. However, it should be clear that research on future uses will only be conducted after a change of protocol or a new application is submitted. This is especially important in studies of health-related AI, as it affects the regulatory determinations made by the IRB. For non-health-related AI/ML software functions, a non-protocol-based application can be used regardless of which stage the technology is in. For health care-related AI/ML software functions, please see HRP-307 – WORKSHEET – Devices to determine which application type is most appropriate at each stage of the technology.
    • Stage 1: Training / proof of concept. Training of the model. May include secondary data analysis.
      • Verifies that the algorithm correctly processes input data, that output predictions align with expectations, and/or that there is a valid association between the output and the target of the output.
    • Stage 2: Pilot testing / validation. Testing of the model. May include secondary data analysis.
      • Early feasibility, preliminary safety and performance.
      • Determines whether the software function meets technical requirements; generates evidence that the output is technically what was expected for the intended use and/or will likely achieve meaningful outcomes through predictable and reliable use.
    • Stage 3: Real-world testing and deployment. Establishes/verifies safety, device performance, benefits, effectiveness. May be deployed in a live environment.
      • May impact research participants, involve interaction or intervention with subjects, or affect patient care by exposing healthcare providers to outputs (e.g., deployment in the Electronic Health Record).
  • If the study involves interaction or intervention with AI, describe the AI’s role in the interaction or intervention, along with the following:
    • A description of the data that the AI technology will be designed to collect;
    • Documentation of the parameters or limits placed on the AI tool for the interaction or intervention, data collection, and (if applicable) data analysis;
    • Scripts or texts of instructions that will be read or provided to participants as part of the interaction or intervention with the AI technology; and
    • A plan to monitor the safety of participants and their data during and after the interaction or intervention.
  • How the confidentiality of the data to which the AI technology has access will be protected. Communicate the AI technology’s terms of use related to confidentiality to the IRB and to participants via the consent form. If there is no guarantee that the information provided will remain confidential, the IRB and participants must be told.
  • The AI technology’s ability to create indirect identifiers within a dataset. Describe how the study team may limit the AI technology’s access to demographic information and other data points that could reidentify an individual (e.g., combinations of data points that, taken together, identify a person), and be prepared for the possibility that the IRB will require investigators to place limitations and parameters on the AI technology’s data use. An illustrative sketch of how such combinations can be flagged appears after this list.
  • How privacy will be maintained.
  • Risks and benefits of the AI technology, if applicable, and how those risks will be mitigated, including the approaches that will be used to assess and mitigate bias (see the bias-assessment sketch following this list).
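
To illustrate the kind of reidentification risk described above (combinations of data points that, taken together, identify a person), the following is a minimal sketch of one way a study team might flag small-group combinations of quasi-identifiers before a dataset is shared with an AI tool. It is not a required or endorsed method; the column names, threshold, and toy data are hypothetical.

```python
# Illustrative sketch only: flags rows whose combination of quasi-identifiers
# (e.g., ZIP code, birth year, sex) is shared by fewer than k individuals.
# Such small groups are one way combinations of data points can reidentify a person.
# The column names and the threshold k are hypothetical examples, not a standard.
import pandas as pd

QUASI_IDENTIFIERS = ["zip_code", "birth_year", "sex"]  # hypothetical column names
K = 5  # minimum group size treated as acceptably anonymous in this sketch

def flag_reidentifiable_rows(df: pd.DataFrame, quasi_ids=QUASI_IDENTIFIERS, k=K) -> pd.DataFrame:
    """Return rows whose quasi-identifier combination occurs fewer than k times."""
    group_sizes = df.groupby(quasi_ids)[quasi_ids[0]].transform("size")
    return df[group_sizes < k]

if __name__ == "__main__":
    # Toy data purely for demonstration.
    data = pd.DataFrame({
        "zip_code": ["55455", "55455", "55455", "37996"],
        "birth_year": [1980, 1980, 1980, 1992],
        "sex": ["F", "F", "F", "M"],
        "lab_value": [4.2, 3.9, 4.1, 5.0],
    })
    risky = flag_reidentifiable_rows(data, k=2)
    print(f"{len(risky)} row(s) fall below the group-size threshold and may need "
          "suppression or generalization before the AI tool sees them.")
```

Rows flagged this way could be suppressed, generalized (e.g., birth year to decade), or excluded from the data made available to the AI technology, consistent with any limitations the IRB places on data use.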

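As one example of an approach to assessing bias mentioned in the risk/benefit item above, a study team might compare the AI tool's performance across demographic subgroups before deployment. The sketch below is hypothetical (the column names, metric, and tolerance are illustrative assumptions), and it shows only an assessment step, not mitigation.

```python
# Illustrative sketch only: compares prediction accuracy across demographic
# subgroups, one common first step in assessing bias. The column names and the
# disparity tolerance are hypothetical examples, not a required method.
import pandas as pd

DISPARITY_TOLERANCE = 0.05  # flag subgroups trailing the best-performing group by > 5 points

def subgroup_accuracy(df: pd.DataFrame, group_col: str = "race_ethnicity") -> pd.Series:
    """Accuracy of the AI tool's predictions within each subgroup."""
    correct = (df["prediction"] == df["true_label"]).astype(float)
    return correct.groupby(df[group_col]).mean()

if __name__ == "__main__":
    # Toy data purely for demonstration.
    results = pd.DataFrame({
        "race_ethnicity": ["A", "A", "B", "B", "B", "C"],
        "true_label":     [1, 0, 1, 1, 0, 1],
        "prediction":     [1, 0, 0, 1, 1, 1],
    })
    acc = subgroup_accuracy(results)
    gaps = acc.max() - acc
    print(acc)
    print("Subgroups exceeding the tolerance:", list(gaps[gaps > DISPARITY_TOLERANCE].index))
```

Large subgroup disparities would then feed into the mitigation plan described in the protocol (e.g., retraining, recalibration, or restricting the tool's use).
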
Some of the information in this section has been adapted, with permission, from the University of Tennessee-Knoxville and the University of Minnesota-Twin Cities guidance on AI.

Additional Resources