Guidance on Use of AI for the IRB Submission Process
Purpose: This guidance outlines the important considerations for the use of Generative AI, such as ChatGPT and Google Bard, in completing the Cayuse Human Ethics submission forms. The primary objective of this guidance is to ensure the ethical and transparent integration of Generative AI, safeguarding the integrity and accuracy of the protocol submission and increasing efficiency of the review process. This guidance aims to strike a balance between fostering innovation and ensuring the ethical and responsible use of Generative AI in the pursuit of research and creative excellence at Ohio University. Researchers are encouraged to exercise transparency and due diligence when employing Generative AI in submitting human subjects research for ORC/IRB review.
Scope: All submissions to the Office of Research Compliance (ORC)/IRB via Cayuse.
Concerns with Generative AI in Preparing Submissions
Generative AI presents certain challenges and concerns when used to describe aspects of a planned research protocol submitted for ORC/IRB review, including but not limited to:
Risk of plagiarism: Generative AI may inadvertently produce text containing plagiarized content due to its extensive training corpus and obscured mappings within the language model, making plagiarism difficult to detect.
Factual accuracy: Generative AI can generate text containing inaccurate or fictitious information. Submissions may unknowingly incorporate false information when using responses generated by AI. Review the Terms of Use for your AI tool for cautionary statements about the potential for factually inaccurate outputs.
Incorrect interpretations of relevant regulations: Asking Generative AI a general question about human subjects research requirements may return an inaccurate response.
Citation deficiency: Generative AI often fails to provide proper citations for referenced sources, as the model's learned content mappings are not explicitly revealed.
Data incorporation: Generative AI may retain novel ideas from submitted research protocols in its training data, potentially using such information to generate content for other users.
Conflicting human subjects research policies and procedures at different institutions: Generative AI may return a response derived from the websites of one or several other institutions; because institutional policies and procedures vary, the response provided by Generative AI may well differ from OHIO policies and procedures.
Variations in different research methods, sites, and populations: A common phrase in the ORC is, "it depends." Requirements for recruitment, consent, and confidentiality will depend on the target population, the study design, and U.S. or other regulations that may apply to the research. Generative AI is unlikely to return an adequate or accurate response to questions about specific research.
Lack of specificity: Related to the previous point, Generative AI tends to provide very general responses to certain questions rather than responses specific to your project. For example, one Cayuse question asks how and where informed consent will be obtained, by whom, and how the potential for coercion will be prevented. We have seen Generative AI responses to this question that explain why these things are important but do not describe the planned consent process for the study itself.
Inconsistency in form responses: Use of Generative AI to answer individual questions in the Cayuse submission form can lead to inconsistencies in the submission as a whole. For example, using Generative AI to write a response to the question in the Cayuse form asking for a description of study procedures and then to write responses to subsequent questions in the form often leads to listing different study procedures in different places in the form.
Consent forms: Use of Generative AI to write consent forms often produces documents that 1) do not include all the required elements of informed consent; 2) do not include all the required information found in OHIO consent templates; and/or 3) include incorrect information for OHIO IRB protocols. For example, using Generative AI to write consent forms often prompts users to add contact information for "the Ohio University IRB" if people have questions or concerns about their rights as research participants, but at OHIO, participants contact the Director of Research Compliance about these matters and not the IRB directly.
Impacts on time to approval: For the reasons outlined above, submissions relying on the use of Generative AI may contain incorrect or inconsistent information. This will result in more revision comments and possibly additional rounds of revisions. ORC recommends that researchers review and verify the information included in Generative AI outputs before incorporating that information into their submissions.
Allowable Uses for Generative AI in Protocol Preparation
Ohio University acknowledges the potential benefits and challenges of employing Generative AI in providing answers to the questions in the Cayuse Human Ethics forms.
No limitation: There are currently no limitations on the use of Generative AI in the preparation of submissions to the ORC/IRB. Researchers are free to utilize Generative AI tools as they deem appropriate, in accordance with established OHIO Office of Information Security standards.
Equal evaluation: All submissions, whether or not they incorporate Generative AI, will be reviewed via the same criteria outlined in federal regulations and OHIO policies and procedures.