The ad hoc panel’s primary goal is to offer high-level advice that helps the Canadian federal research funding agencies understand the context and implications of using generative artificial intelligence (AI) in funding research that has value to society.
The panel recognizes the existence of broad societal issues around the creation and use of generative AI as it currently exists in 2023, ranging from contributions to climate change to the perpetuation of bias.
We support a commitment from the funding agencies to fairness, transparency and equity in how generative AI tools are employed.
Definition of generative AI
A practical working definition of what qualifies as “generative AI” is foundational to this advice.
The panel acknowledges that funders residing within the federal civil service may be bound by intramural definitions of generative AI [Footnote 1] and, further, that impending legislation (e.g., Bill C-27) and the associated regulatory frameworks may create definitions binding all Canadian funders.
For the purposes of this document, the working group itself defines generative AI as systems that have the following four properties:
- They present a straightforward, often conversational, interface that makes deploying the power of the system accessible to a broad range of non-expert users.
- They intrinsically enable iterative design and improvement processes.
- They make available information extracted from enormous [Footnote 2] amounts of data, by systems using enormous amounts of compute.
- The output of the system approaches a level of sophistication that may cause non-experts to erroneously identify the output as having been human-created.
Suggestions for the role of generative AI in the preparation of grants
The panel accepts that a blanket ban on the use of generative AI in grant writing is impractical and ultimately unenforceable. Our recommendations here are framed with the highest respect for the applicant and for the applicant’s judgement and honesty; with this autonomy comes accountability.
We suggest that the funding agencies articulate clear expectations for applicants, such as:
- The named applicant is ultimately personally accountable for the complete contents of the application.
- The applicant should “think through the core research problem themselves.” This includes the development of the primary questions and methodologies.
- The applicant is responsible for framing their work in the broader context of their field, and the research endeavour at large.
- The applicant should be required to disclose the use of generative AI in the preparation of a proposal. The panel advises that the exact scope of what is to be disclosed requires further investigation, but suggests a simple binary declaration as a starting point: applicants should be asked to declare (yes/no) whether generative AI was used, for any purpose, in the preparation of the grant [Footnote 3]. The panel recommends that this approach be reviewed and revised at least annually.
- There will also be a need to inform grant applicants why this question is being asked, to assure them that a disclosure carries no negative implication for application review, and to state specifically whether this information will be accessible to reviewers and the committee.
- The panel further suggests the funders consider the value of collecting additional data on the use of AI in grant application writing (e.g., Was AI used solely for copyediting? Was AI used for ideation/brainstorming?). This might be collected on the application form, or as part of a separate follow-up survey; the panel acknowledges there are many complex tradeoffs in these decisions, which are best left to the funding agencies.
Suggestions for the role of generative AI in the evaluation of grant applications
The panel notes that all other funding agencies that have issued policy on this matter have chosen an outright ban (though these remain few in number at this point). We feel that a more nuanced approach merits consideration given both the challenge of enforcing a ban and the opportunities for positive system-wide improvements.
The primary concern in the review stage is that of privacy, confidentiality and data security. Cloud-based generative AI tools necessitate the transmission of the content of the application to the tool, and often, therefore, a loss of custody of the intellectual property contained therein. The panel advises that, in order to respect the intellectual property of applicants in a manner consistent with established practice, the funders prohibit reviewers from any unsanctioned use of generative AI in grant application review. This recommendation is made in light of current technological limitations but could be revisited in a scenario where a generative AI product resides on a system under the control of, and subject to the same policies and agreements as, the funder’s other IT systems. The panel acknowledges that the development and deployment of in-house AI hosting infrastructure [Footnote 4] is a highly nontrivial undertaking, but nonetheless sees merit in considering this possibility in future planning.
Even in a context with secure, private, funder-hosted infrastructure, more data is needed to make informed decisions. We encourage the funding agencies to allocate resources to experimentation, including randomized controlled trials and careful policy development. We also note that the use of generative AI at the pre-review stage (e.g., using GPT-4 to help find qualified reviewers without resorting to the traditional, and highly biased, mechanism of “asking the usual suspects”) could be investigated as a lower-stakes, but potentially still high-impact, opportunity.
The panel feels strongly that it will be important for the funding agencies to acknowledge the differences in culture, expectations and content of grant applications across disciplines. We suggest that the best mechanism for dealing with this heterogeneity is to, as much as possible, delegate authority and respect the expert advice of panels and reviewers.
Education on bias and responsible use of AI
The panel identified a potential role for funders in the development of a training tool for applicants (and, potentially, for reviewers) focusing on AI bias and responsible use. Focusing on responsible and effective applications of AI in grant writing would also serve to create a more level playing field across applicants with different exposures to generative AI technology.
Responsible conduct of research
The panel noted the large overlap between many of our conversations and the Canadian mandates for the responsible conduct of research. The panel would be grateful for the opportunity to further engage with the Secretariat on Responsible Conduct of Research (SRCR) to support alignment across the agencies and SRCR on this matter.
Closing thoughts
The panel advises that the federal research funding agencies establish a high cadence of review for policies and practices related to generative AI. The speed of development in this field remains rapid, and the capabilities of next month’s systems may challenge assumptions intrinsic to today’s best advice.
The panel suggests that funders additionally continue to take advantage of ad hoc advisory panels, with representation from a diverse set of experts, to provide timely, but informal, input and guidance.
More information on the composition of the ad hoc panel can be found here.