Queen’s artificial intelligence (AI) governance model provides a coordinated approach to guiding the responsible use of AI across the university.
The structure connects academic and administrative decision makers, supports informed planning, and ensures that AI activity aligns with institutional values, strategy, and academic policy.
The AI Nexus provides advice and recommendations to the Queen's Digital Planning Committee and, where academic matters are implicated, to Senate subcommittees.
Role of the Queen’s Digital Planning Committee
The Queen’s Digital Planning Committee (QDPC) oversees how generative AI (GenAI) will be integrated and used at the university and how its risks will be mitigated. QDPC ensures that AI deployment aligns with organizational values and goals. Within its mandate, QDPC will:
- guide the development of policies, standards, guidelines, and controls that support responsible AI innovation.
- prioritize and monitor AI initiatives and projects, assessing them against other strategic needs.
- direct activities such as risk management, technical standards, and implementation planning.
QDPC serves as the central body for guiding digital strategy, digital integration, and university-wide AI priorities.
Queen's AI Nexus
Purpose and Goals
The Queen’s Artificial Intelligence (AI) Nexus will act as a strategic advisory body, reporting to its three sponsors: the Queen’s Digital Planning Committee, the Senior Leadership Team, and the Board of Trustees. It operates under the supervision of the Provost in service of its sponsors. Its primary focus is on AI, ensuring alignment with institutional objectives through collaboration with the Nexus panels on teaching and learning, research and research administration, and operations.
Membership will include the Special Advisor on Generative AI and the chairs of the three Nexus Working Groups, plus ex officio members chosen by the Provost.
With a defined scope of activities inclusive of teaching and learning, scholarly activity, and operations related to generative AI, the AI Nexus shall, through its actions and subcommittees:
- Lead AI-related projects with institutional relevance for staff, faculty, researchers, and students.
- Uphold transparent governance and ethical integration.
- Align AI practices with university values, policies, and legal standards.
- Foster engagement with the Queen’s community on AI issues.
- Assess and respond to AI’s broad institutional impact, including ethical risks, financial considerations, and operational sustainability.
- Support informed decision-making across faculties and administrative units by reviewing decisions, advising on approaches, and sharing ideas across campus and beyond.
- Promote external and internal collaborations that enhance AI’s role in academia and the public sphere.
Meeting Frequency, Duration, and Modes of Communication
- The subcommittees will meet monthly for their first year, re-evaluating the schedule thereafter.
- Ad hoc working groups may convene between meetings to maintain progress.
- Communication will be facilitated via Microsoft Teams and direct correspondence with the Special Advisor on Generative AI.
- The committee has an initial two-year term, with the potential for renewal based on evolving university needs.
AI Nexus Subcommittees
The Nexus Subcommittees operate under the AI Nexus’s direction, offering expertise in teaching and learning, research, and operations. Their primary function is to guide AI-related initiatives through evidence-based recommendations, constructive feedback, and strategic identification of emerging AI applications.
Each subcommittee consists of a multidisciplinary group. Participation is open to all; an initial membership call concluded on September 1, 2025.
Operations Subcommittee
Mandate: To guide AI’s integration into administrative and enterprise operations, ensuring ethical usage, security, privacy protection, and efficiency in university workflows in service of the university’s mission and values.
Membership: Chaired by delegate(s) of the Vice-Principal Finance and Administration (for 2025–2026, Jeff Glassford and Leah Wales), drawing on a diverse pool of members.
Key Initiatives:
- Establish, review, and disseminate AI usage guidelines, as appropriate, in operations, procurement, and development of content, tools, apps, and resources.
- While this subcommittee is focused on operational and institutional initiatives, specific consideration of their impact on the people of Queen’s remains a priority.
- Develop protocols for privacy and security in AI technologies.
- Support new and existing spaces at the university to guide and refine AI solutions proposed by university personnel within their various roles.
- Develop procedures for piloting, assessing, and appraising the utility, sustainability, and ethical implications of new AI-powered technologies.
- Consult on AI’s ethical implications in university functions.
- Recommend professional development opportunities for AI literacy, responsible AI use, articulation of novel use cases, development of tools, and other opportunities for integration of AI into workflows and procedures.
- Participate in cross-institutional discussions to refine best practices.
| Name | Representation |
|---|---|
| Jeff Glassford (Co-Chair) | IT Services |
| Leah Wales (Co-Chair) | Student Affairs |
| Sarah Williams | Human Resources |
| Stuart McPherson | Registrar |
| Michael Polpette | University Affairs |
| Troy St John | Business |
| Catherine Stinson | Computing/Philosophy |
| Nadia Jagar | Athletics/Recreation |
| Stephen Hunt | IT/Facilities/Smith Engineering |
| Jill McCreary | Emergency Medicine/Clinical Operations |
| Sandra Morden | Research Librarian |
| Jess Boland | Corporate Relations/Smith Engineering |
| Peter Viveiros | Financial Services |
| Nicole Hunniford | Budget & Resource Planning |
| Mike Ferguson | Communications/Web and Digital Strategy/QHS |
| Eleftherios Soleas | Office of the Provost |
| Diana Gilchrist | Bader College |
Meeting Minutes
Operations AI Subcommittee Meeting One Minutes
September 30th, 2025
Present: Leah Wales, Jeff Glassford, Eleftherios Soleas, Diana Gilchrist, Catherine Stinson, Peter Viveiros, Stephen Hunt, Sandra Morden, Sarah Williams, Troy St. John, Jess Boland, Nicole Hunniford, Nadia Jagar
Regrets: Mike Fitzpatrick, Stuart MacPherson
1. CALL TO ORDER AND LAND ACKNOWLEDGMENT
The meeting was called to order promptly at 2:00 p.m. The virtual session began with a formal land acknowledgment recognizing Queen’s location on the traditional territories of the Anishinaabe and the Haudenosaunee, and noting that the day was the National Day for Truth and Reconciliation, with ceremonies and events occurring across campus and beyond.
2. ADOPTION OF THE AGENDA AND PROCESS FOR MINUTES
Following the opening acknowledgments, the distributed agenda for the session was reviewed. Given that this was the inaugural meeting of the Operations AI Subcommittee, there were no previous minutes to reference. The agreement was reached to use the transcript of the meeting as the principal record for the minutes. The agenda was discussed in detail, and consensus was reached to adopt it as presented. Members underscored the importance of establishing a robust set of guiding priorities that would ensure coherent institutional decision making as AI tools become increasingly integrated into the operational landscape of Queen’s.
3. ROUNDTABLE INTRODUCTIONS AND EXPECTATIONS
The meeting proceeded with a roundtable session, during which each member offered introductory remarks and articulated their expectations for the subcommittee. The discussion yielded several themes across the group:
A. Unified, Safe Enablement of AI at Queen’s
Members stressed the need for a unified institutional approach to AI, balancing the drive for innovation with a commitment to prudent risk management. There was a shared vision to deliver high-quality, safe AI experiences that mitigate risks while capitalizing on opportunities.
B. Consistency with Flexibility
Participants emphasized that while a consistent set of guidelines was essential, operational realities across different units required a degree of flexibility. The approach should support coherent processes while accommodating the unique needs of diverse units. Members also expressed support for innovative, agile endorsement of AI applications through support of pilot projects for particular use cases.
C. Governance, Ethics, and Values Alignment
Robust ethical frameworks were a central concern. Members discussed the importance of maintaining academic integrity, ensuring privacy and security, and aligning AI deployments with Queen’s mission and core values.
D. Education and Capacity Building
A unanimous call was made for comprehensive AI literacy initiatives for all as well as specialized professional development for those who wished to develop AI solutions to their issues. The aim is to equip the university community with the necessary capabilities to both understand and harness AI effectively within their roles.
E. Human-Centred Approach
The subcommittee agreed that AI should augment human work rather than replace it. A human-centred focus was encouraged, advocating for automating tedious tasks so that staff can concentrate on high-value work requiring human insight and engagement.
F. Coordination and Transparency
Finally, the need for a “one-stop” reference point and improved communication channels was highlighted. Enhanced coordination and guidance are expected to reduce fragmented experimentation and ensure that the broader community can monitor emerging AI initiatives.
4. OVERVIEW OF GOVERNANCE STRUCTURE AND PURPOSE
A detailed briefing was provided on the emerging governance model – informally known as the “AI Nexus.” This model formalizes the division of responsibilities across three subcommittees:
• Operations (the current body)
• Teaching & Learning
• Research & Research Administration
Key discussion points included:
– The role of subcommittees in an advisory capacity accountable to the Queen’s Digital Planning Committee and the Senior Leadership Team, offering input on issues spanning safety, policy, procurement, and academic alignment.
– The process for transmitting advice through the AI Nexus to senior governance bodies, ensuring that critical recommendations shape resourcing, policy directives, and novel pilot initiatives.
– The prescribed meeting cadence, planned as monthly sessions during the first year with subsequent reassessment of frequency.
– Membership term expectations, with a one-year commitment encouraged and the option for renewal, and clear communication channels primarily via Microsoft Teams and direct correspondence.
The governance overview cemented a mutual understanding that effective management of AI initiatives requires both a rapid decision-making process and careful alignment with Queen’s institutional priorities.
5. REVIEW OF DRAFT TERMS OF REFERENCE (ToR)
A substantial portion of the meeting was dedicated to a structured review of the draft Terms of Reference for the Operations AI Subcommittee. In this review, members debated and refined several aspects focused on “Key Initiatives.” The following categories and associated points were discussed and refined by consensus:
5.1 Person-Centred Scope and Mission Alignment
– It was agreed that language referencing impacts solely “on students” should be expanded to encompass “people/community.” This reframing ensures that the operational benefits of AI are recognized across all constituencies, including staff.
– Moreover, the mandate was revised to clearly emphasize the connection between operational efficiency and Queen’s academic mission, thereby avoiding an exclusively efficiency-driven approach.
5.2 Education, Professional Development, and Use Cases
– The discussion extended beyond the notion of “responsible use” to incorporate initiatives for practical capability building.
– Professional development was proposed to include concrete steps for solution development and process improvement, supported by an extensive AI literacy framework.
– Members called for curating relatable, real-world use cases that illustrate where AI can add operational value.
5.3 Evaluation, Procurement, and Agile Experimentation
– An agile assessment pathway was called for to enable rapid yet safe evaluation of emerging tools, with particular attention to privacy, security, and accessibility considerations.
– The committee agreed that any evaluation models should ideally mirror established risk and governance patterns while addressing unique AI-specific concerns such as data flows and model behavior.
5.4 Equity, Bias, and Ethical Assurance
– The need for systematic measures to check and mitigate bias in training data and outputs was earmarked as a high priority.
– The ToR must ensure that any implementations are reflective of Queen’s institutional values, equity commitments, and inclusion objectives.
5.5 Spaces for Innovation and Existing Practice
– There was recognition that various units within Queen’s are already experimenting with AI. As such, the subcommittee should both support these efforts and serve as a conduit for connecting different innovation spaces, rather than “creating” entirely new ones.
– It was further proposed that the subcommittee act as an internal focus group, providing early feedback on member-driven pilots to supplement formal security, privacy, and procurement reviews.
5.6 Licensing, Copyright, and Access to Scholarly Content
– The ToR revision will include guidance on responsible content use, particularly with respect to licensed materials and copyright.
– AI literacy initiatives were to incorporate educational material outlining how to avoid unauthorized ingestion of licensed content and ways to discover institutionally licensed scholarly material through appropriate channels.
5.7 Cross-Committee Coordination
– Members concurred that topics relevant to more than one subcommittee should undergo parallel review processes in collaboration with other subcommittees (i.e., Teaching & Learning and Research & Research Administration).
– A mechanism for consolidating and streamlining feedback was deemed essential, ensuring that cross-domain tools benefit from coordinated input.
The facilitator was tasked with incorporating the aforementioned refinements into a revised draft of the ToR. This updated document will be circulated to the co-chairs for review, and subsequently to the full committee for comment during the next session.
6. EXAMPLES AND EMERGING PILOTS
During this segment, members briefly discussed several early pilots that are currently in various stages of development. Noteworthy examples included departmental chatbots and other AI-driven tools. Key points discussed in this session were:
– The identification of valued features for institutional deployment
– The potential for knowledge reuse across units if clear guidelines and review processes are established. These emergent pilots are expected to serve as valuable reference points for the development of guidelines and criteria for readiness and release protocols, thereby informing the committee’s broader strategic framework.
7. ESTABLISHING NEAR-TERM PRIORITIES AND INITIATIVES
Time constraints necessitated a pragmatic approach in defining near-term initiatives. As a result, members agreed to compile and submit candidate priorities and initiatives asynchronously to the co-chairs and the SAGAI. These proposals will be consolidated into an initial list for detailed discussion and prioritization at the next meeting. Proposed initiatives include, but are not limited to:
• An AI Literacy Curriculum (“AI Essentials”) designed to generate baseline understanding and capability among staff and faculty.
• Establishing a “Safe to Try” pilot framework aimed at offering lightweight yet secure pilot assessment pathways.
• Developing an AI Use-Case Library (“One-Stop” Catalogue) containing succinct, relatable examples that outline the problem, approach, and associated risks and benefits.
• Drafting Evaluation & Procurement Guidance, with input from procurement, legal, privacy, and security teams, to ensure that criteria around data residency and model auditability are met.
• Issuing specific guidance on the responsible use of licensed and scholarly content within AI workflows.
• Coordinating effective cross-committee reviews, with an emphasis on establishing clear mechanisms for consolidating feedback across all relevant subcommittees.
• Developing readiness criteria for public-facing AI services, including guidelines addressing content quality, tone, safety filters, and continuous monitoring protocols.
These initiatives will form the basis of the committee’s workplan and be integrated into a structured workflow moving forward.
8. NEXT STEPS AND APPROVAL WORKFLOW
As the meeting drew to a close, the following next steps were agreed upon:
A. The secretariat is responsible for preparing a draft of these minutes based on the meeting transcript. This draft will be sent first to the co-chairs for review and approval before distribution to the entire committee.
– Target timeline for co-chair review: end of the current week.
– Full committee distribution: the following week, subject to sign-off.
B. Members are invited to submit additional candidate initiatives asynchronously prior to the next scheduled session.
– A shared document will be circulated for this purpose, which will aid in the compilation and prioritization of proposals.
C. The facilitator is to incorporate the agreed-upon revisions into the draft Terms of Reference. A revised draft will be circulated for further review and comment during the subsequent meeting.
D. Assigned action items – including drafting the AI Literacy Pathway (“AI Essentials”), establishing the “Safe to Try” Framework, curating the AI Use-Case Library, and preparing Evaluation & Procurement Guidance – will move forward concurrently, with ownership assigned to the secretariat and key volunteer members. Coordination across IT, security, procurement, legal, and library units is to be initiated immediately.
These clear and actionable next steps underline the committee’s commitment to maintaining momentum and ensuring that AI initiatives are deployed responsibly and in alignment with Queen’s operational and academic missions.
9. ADJOURNMENT
After a comprehensive review of the agenda items and animated discussions on key strategic and operational points, the meeting was adjourned at approximately 2:56 p.m.
Operations AI Subcommittee Meeting Two Minutes
October 23, 2025
Present: Leah Wales, Jeff Glassford, Eleftherios Soleas, Diana Gilchrist, Catherine Stinson, Peter Viveiros, Stephen Hunt, Sandra Morden, Sarah Williams, Troy St. John, Jess Boland, Nicole Hunniford, Nadia Jagar, Brian Chan, Michael Ferguson, Stuart MacPherson
Regrets: Jill McCreary
1. CALL TO ORDER AND AGENDA REVIEW
The meeting was called to order at 1:00 p.m. The agenda was circulated in advance, and members were invited to suggest additions or changes. No revisions were proposed and the agenda was approved as distributed.
2. APPROVAL AND DISCUSSION OF PREVIOUS MINUTES
Members confirmed having reviewed the minutes from the first meeting. Comments included praise for their thoroughness and clarity. Questions were raised about the incorporation of previous discussions into the Terms of Reference (ToR), confirming that a tracked changes version reflecting prior feedback was available for review. The committee agreed to approve the minutes and supported the intent to post approved minutes on the AI governance website to promote transparency and invite ongoing community feedback.
3. COMMUNICATIONS STRATEGY
Discussion ensued on the approaches to internal and external communications related to committee work. Members emphasized the importance of sharing information with their respective groups to maintain transparency and facilitate informed dialogue. It was noted that all AI-related subcommittees would adopt a similar transparency approach by posting their minutes.
The need for a coordinated communication plan at the governance level was acknowledged, with early work underway involving central university communications. The committee observed that specific outreach related to products or initiatives emerging from subcommittees might require additional tailored strategies. Members agreed that this remains an evolving consideration aligned with the progress of committee work.
4. TERMS OF REFERENCE REVIEW
An updated draft of the Terms of Reference was presented, incorporating revisions based on the prior meeting’s discussions. Key points debated included clarifications of the committee’s remit around AI usage guidelines, especially with respect to marketing and communications: rather than drafting unit-specific guidelines, the committee would aim to establish overarching guiding principles applicable university-wide and serve as a resource and review body for guidelines developed by specific units.
The committee suggested including explicit language around the review and dissemination of such guidelines to promote awareness and accessibility across the university community.
Ethical considerations related to data privacy and chatbot monitoring were flagged as important for developing AI tool guidelines. Members underscored the need for anonymization of user interactions and clear communication of data usage to end users.
Several members highlighted the significance of maintaining a consistent “Queen’s tone” for chatbots and the need to manage legitimate concerns about conflicting or incorrect information generated by AI systems. The committee recognized that although chatbots and other AI applications currently exist within the university, many operate in siloed fashion, creating potential risks around consistency and governance. There was wide agreement on the need for centralized guidance and coordination to support successful institutional deployment of chatbots.
5. PRIORITIZATION OF AI INITIATIVES
Initial prioritization conversations identified AI chatbots as a prevalent and high-impact area warranting focused attention. Discussion covered the complexity of current chatbot deployments, their varied purposes, and the risks posed by inconsistent content or degraded user trust. Members emphasized developing guidelines encompassing governance, content ownership, and evaluation criteria.
Other emerging priorities included resume screening automation, scheduling tools, and foundational AI literacy initiatives. The committee discussed plans to consolidate and condense the extensive list of submitted use cases and proposals to facilitate focused deliberation at upcoming meetings.
The secretariat committed to circulating a curated and deduplicated list of initiatives as a precursor to the next session. Additionally, a review of existing AI literacy efforts led by ITS, the library, and the Centre for Teaching and Learning would be compiled for committee consideration by the special advisor.
6. ETHICAL AND PRIVACY CONSIDERATIONS FOR CHATBOT USAGE
Participants reflected on ethical dimensions surrounding chatbot user data, including privacy, access controls, and transparency. It was suggested that any logging or transcription of chatbot interactions should be strictly anonymized and regulated to prevent misuse.
Input from legal and privacy specialists was recommended to guide these considerations in alignment with institutional obligations.
7. DEMONSTRATION OFFER
An offer was made by a co-chair to provide live demonstrations of existing AI chatbot technologies deployed on campus to aid committee understanding and support informed decision-making.
8. NEXT STEPS
- Finalize the updated Terms of Reference incorporating discussed amendments.
- Circulate a consolidated list of AI initiative proposals for prioritization.
- Gather and present an inventory of AI literacy programs active across relevant units.
- Continue deliberations on chatbot governance, privacy, and ethical guidelines.
9. ADJOURNMENT
The meeting was adjourned at approximately 2:00 p.m. The committee thanked all members for their valuable contributions and engagement.
Operations AI Subcommittee Meeting 3 Minutes
Date: November 21, 2025
Time: 11:00 AM – 12:00 PM
Location: Virtual Meeting
Present: Jeff Glassford (Co-Chair; chairing this meeting), Leah Wales (Co-Chair), Eleftherios Soleas (Special Advisor, Generative AI; SAGAI), Stephen Hunt, Sarah Williams, Stuart McPherson, Peter Viveiros, Troy St John, Sandra Morden, Brian Chan, Nicole Hunniford, Jess Boland, Michael Ferguson, and Ahmed Samy (Guest)
Regrets: Diana Gilchrist
Agenda and Purpose
- Review ongoing AI literacy initiatives and proposals
- Discuss proposed AI Operations Education course
- Demonstrate and discuss the Quan chatbot platform
- Review consolidated AI project priorities
- Plan next steps including security assessment presentation
1. Welcome and Introductions
The meeting was opened by the Chair with a land acknowledgment. Approval of previous minutes was confirmed without amendments. Introductions were made for new and returning participants.
2. AI Literacy Initiatives Update
SAGAI presented an overview of AI literacy efforts occurring across campus, highlighting resources from the Centre for Teaching and Learning, Student Academic Success Services, and the Library. Key points included:
- Development of foundational AI literacy modules focusing on vocabulary and concepts
- Campus-wide community of practice and sandbox events for faculty and staff
- Student-facing modules integrated into first-year academic programs and offerings
- Coordination with Communications to improve awareness and accessibility of AI resources
Members endorsed centralizing information for easier access and visibility.
3. Proposed AI Operations Education Course
Eleftherios shared a draft proposal for a four-session, cohort-based AI operations course designed to equip staff with skills to identify operational challenges and apply AI solutions. The outline includes:
- Session 1: Understanding AI fundamentals and capabilities
- Session 2: Identifying AI-appropriate problems and prototyping solutions
- Session 3: Logistics of implementation, including finance, approvals, and building coalitions
- Session 4: Presentation of participant proposals to university leaders
The course requires an application process where participants articulate a problem and secure supervisor support. The format includes in-person sessions complemented by asynchronous learning.
Feedback from members stressed:
- The importance of making the course accessible to a broad range of staff, not only managers
- The need for structured work sessions between meetings to foster progress
- Clarification that course participants will primarily integrate existing AI tools rather than build new AI systems
- Recommended prerequisites include completion of foundational AI literacy training before participation
4. Quan Chatbot Platform Demonstration
The university’s AI architect demonstrated the Quan chatbot platform, featuring:
- Rapid creation of chatbots based on intake of documents or SharePoint sites
- Capabilities for prompt engineering, setting accuracy and creativity parameters
- Features like embedding into external sites, session preservation, and access controls allowing collaboration on chatbots
- Logging and analytics tools to monitor questions asked and bot responses
- Management of overlapping or contradictory content through content-owner governance
- Upcoming features including scheduled retraining and stricter security guardrails to prevent bias and harmful content
Members discussed usage scenarios, content governance, and emphasized the need for clear guidelines, dedicated resources, and ongoing maintenance to ensure chatbot quality.
Questions addressed included handling conflicting data, collaborative access, retraining frequency, and building trust in AI outputs.
5. Consolidated AI Project Priorities
Eleftherios introduced a compiled list of 28 AI initiatives gathered from subcommittee members and campus stakeholders. Due to volume, members were invited to review and provide feedback and prioritization offline before the next meeting.
6. Security Assessment and Next Steps
Chair Jeff Glassford announced that Matt Simpson will join the next meeting to present on integrating AI tool security considerations into existing security assessment processes (SAP). The goal is to develop clear security guidelines aligned with subcommittee principles.
Motions and Decisions
- Previous meeting minutes approved as circulated
- Agreement to circulate draft Operations AI course materials and consolidated priorities for member feedback
- Next meeting to include security presentation with a focus on AI tool assessments
| Action Item | Owner | Deadline | Notes |
|---|---|---|---|
| Circulate draft Operations AI course materials and seek feedback | Eleftherios Soleas | Before next meeting | Feedback to guide course refinement |
| Distribute consolidated AI initiatives list for review | Eleftherios Soleas | Before next meeting | Members to prioritize and comment |
| Prepare security assessment briefing for AI tools | Matt Simpson | Next meeting | Outline integration with SAP process |
| Increase communications on centralized AI literacy resources | Communications Team / Eleftherios Soleas | Ongoing | Improve accessibility and awareness |
Next Meeting
Date: December 16, 2025
Focus: Review of Operations AI course feedback, prioritization of AI initiatives, security assessment presentation by Matt Simpson
Adjournment
The meeting was adjourned with thanks from the Chair to all participants.
Operations AI Subcommittee Meeting 4 Minutes
Date: December 16th, 2025
Time: 2:00 PM – 3:00 PM
Location: Virtual Meeting
Present: Jeff Glassford (Co-Chair), Leah Wales (Co-Chair; chairing this meeting), Eleftherios Soleas (Special Advisor, Generative AI; SAGAI), Stephen Hunt, Sarah Williams, Nadia Jagar, Stuart McPherson, Peter Viveiros, Troy St John, Sandra Morden, Nicole Hunniford, Jess Boland, Michael Ferguson, Diana Gilchrist, and Matthew Simpson (Guest)
Regrets: Catherine Stinson, Brian Chan, Jill McCreary
Agenda and Purpose
- Security Assessment Process and Discussion of Authentication for Applications that include AI
- Review Consolidated Priorities & Initiatives
1. Welcome and Introductions
Approval of previous minutes was confirmed without amendments. Introductions were made for the guest, Matthew Simpson of ITS.
- Presentation and discussion: Security assessment process and authentication
- Matt Simpson (Information Security Office) described work to develop acceptance/decision criteria for Microsoft Authentication consent grants and how those approvals intersect with the security assessment process.
- Background: When users click “Sign in with Microsoft,” a consent grant may be created granting an external application access to user data (varying from basic profile information to broad rights such as mailbox/OneDrive/SharePoint access).
- Objective: Automate decision-making where possible and provide Identity and Access Operations (IAO) with clear, repeatable criteria so high-risk or AI-tagged applications are escalated for additional review rather than being self-approved by end users.
How the proposed process will work (high level)
- IAO will evaluate new application access requests (e.g., via ServiceNow) and check whether the application exists and how it is categorized in Cloud Apps / the Cloud App Catalog.
- If an application is categorized as AI or has higher-risk permissions, the proposal is to surface it to this committee (Operations AI) for input/approval or to apply a pre-agreed automated decision rule.
- Aim: reduce ad hoc approvals while enabling timely access for legitimate low-risk apps.
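The routing logic described above could be sketched in code. This is an illustrative sketch only, not Queen's actual implementation: the `AppRequest` class, the `triage` function, the category string `"AI"`, and the permission names are all hypothetical placeholders for whatever IAO's real criteria turn out to be.

```python
# Hypothetical sketch of the proposed IAO triage rule: AI-tagged or
# high-permission applications are escalated to the Operations AI
# subcommittee; other requests follow an automated low-risk path.
from dataclasses import dataclass, field

# Assumed set of higher-risk Microsoft Graph permission names (illustrative).
HIGH_RISK_PERMISSIONS = {"Mail.Read", "Files.ReadWrite.All", "Sites.Read.All"}

@dataclass
class AppRequest:
    name: str
    catalog_category: str        # e.g., how the Cloud App Catalog tags the app
    permissions: set = field(default_factory=set)

def triage(req: AppRequest) -> str:
    """Route a request: escalate AI-tagged or high-permission apps,
    otherwise allow the automated low-risk approval path."""
    if req.catalog_category == "AI" or req.permissions & HIGH_RISK_PERMISSIONS:
        return "escalate-to-operations-ai"
    return "auto-approve"

print(triage(AppRequest("NoteTaker", "AI", {"User.Read"})))
print(triage(AppRequest("Calendar", "Productivity", {"User.Read"})))
```

The key design point from the discussion is that the rule must be repeatable: the same categorization and permission set always produces the same routing, so end users cannot self-approve a high-risk grant.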
Key discussion points and concerns raised by committee
- Data access, retention, and location:
  - What specific data will the app access and retain?
  - Where is data stored and processed (e.g., if stored in Canada but processed offshore)?
  - How long is data retained?
  - GDPR / privacy / regulatory implications for storage and processing locations.
- Third-party sharing and model training:
  - Will prompts or other data be used to train vendor models or otherwise be shared with third parties?
  - If a vendor uses customer prompts to train models, that should be a disqualifying factor (suggested).
- AI-specific vs. general SAP questions:
  - Several members observed that many AI questions overlap with existing security / SAP assessments; the committee discussed defining the narrow set of AI-specific questions that should be automated.
- Volume and operational capacity:
  - Concern about review volume; the need to automate simple yes/no questions where feasible and escalate only the higher-risk cases to the committee.
- Scoring / decision model:
  - Members suggested using a scoring approach (multiple answers, weighted) rather than a single yes/no question; extreme negative answers could veto approval outright.
- Mitigations and escalation:
  - Where low scores arise, consider whether mitigations are possible (some risk items may have no acceptable mitigation).
  - Escalation pathways and logging/visibility for confidential prompts and chatbot logs were raised (who can access logs, auditability).
- Scholarly/licensed content:
  - The Library raised concerns about AI interacting with licensed/scholarly content (copyright/licensing obligations) and the need to avoid model training on restricted content.
- Use cases and ubiquity of AI:
  - Many common tools (browsers, productivity apps) increasingly include AI features; the AI tag in catalogs will broaden and potentially become ubiquitous.
  - The committee needs to define boundaries and be pragmatic.
- Automation and inputs:
  - Committee members asked for the initial criteria and suggested that IAO provide lists of applications in AI categories to prioritize review and automation.
- Alignment and prioritization:
  - Members suggested the committee should align prioritization with university strategy/mission and the foundational principles already published on the university AI website.
Decisions / agreements from discussion
The committee agreed that:
- The next practical step is for Matt to draft a set of AI-specific assessment questions (based on today's discussion) that could be automated into the IAO workflow.
- The committee will review those draft questions and the university's AI principles to determine which questions are already covered by existing processes and which are new/AI-specific.
- A multi-factor scoring approach is preferred; individual catastrophic answers should be able to block approval.
- The committee should prioritize building/confirming foundational principles and then use those principles to evaluate and prioritize specific projects.
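The endorsed multi-factor scoring model could be sketched as follows. The questions, weights, veto flags, and threshold below are invented for illustration; the committee would set the real criteria:

```python
# Hypothetical sketch of the endorsed scoring model: weighted answers are
# summed, but any single "catastrophic" answer vetoes approval outright.
# Question ids, weights, the veto flags, and the threshold are all
# illustrative assumptions, not agreed committee criteria.

QUESTIONS = {
    # question id: (risk weight, vetoes approval outright if answered "yes")
    "trains_on_customer_prompts": (5, True),   # suggested disqualifying factor
    "data_stored_outside_canada": (2, False),
    "indefinite_retention": (3, False),
    "shares_data_with_third_parties": (4, False),
}

APPROVAL_THRESHOLD = 6  # assumed cut-off: totals at or above this are escalated

def assess(answers: dict) -> str:
    """Score a vendor questionnaire; any veto answer blocks approval."""
    score = 0
    for qid, (weight, veto) in QUESTIONS.items():
        if answers.get(qid, False):
            if veto:
                return "rejected"  # one catastrophic answer blocks approval
            score += weight
    return "approved" if score < APPROVAL_THRESHOLD else "escalated"
```

This structure captures both agreed properties: multiple weighted factors rather than a single yes/no, and individual catastrophic answers that can block approval regardless of the total score.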
Action items arising (owner and target)
- Draft AI-specific decision questions for the IAO process
  - Owner: Matt Simpson (Information Security Office)
  - Notes: Incorporate items discussed (data accessed/retention/location; third-party sharing; model training; access by vendor/third parties; regulatory aspects). Provide the initial criteria and example workflow to the committee.
  - Target: Share initial draft with Terry / committee for review (Matt volunteered to send; expected before the January meeting).
- Identity & Access Operations / IAM to provide the committee with app data (as required)
  - Owner: Identity & Access Operations (to coordinate with Matt)
  - Notes: Consider providing the list of applications flagged as AI-tagged in the Microsoft Defender Cloud App Catalog, or otherwise relevant data, to inform automation/prioritization.
  - Target: Coordinate timing with Matt and Terry.
Priorities document — open discussion (led by Terry / Leah)
- Terry explained the consolidation approach to the priorities list and the intent to keep items broad enough to represent multiple parts of the university.
- Members suggested approaches to prioritization (committee ranking survey; demonstrable cross-university pilot projects; central vs. local responsibilities).
- Nadia and others emphasized the value of establishing foundational principles/guidelines first (so local pilots/tools conform to consistent practices) and then deciding which specific priority projects to pursue centrally.
- Decision: The committee endorsed mapping priorities to the established AI principles, running a short ranking exercise, and using the results to identify initial focus projects (practical pilots).
Other notes
- The committee discussed use-case pilots (e.g., high-volume email boxes, registrar, scheduling, scribe/automation tools) as potential demonstration projects that could show productivity gains across units. The group noted that some use cases may require central coordination (e.g., recruitment AI use affected by legislation).
- A committee member emphasized licensed/scholarly content concerns and suggested the committee apply principles to concrete examples to identify gaps.
- Jeff suggested using university strategic priorities (mission/vision/values) as an additional sorting criterion in prioritization.
Action items recap (owners & tentative timing)
- Matt Simpson — Draft AI-specific decision questions and initial criteria; send to Terry / committee (January; for either the next meeting or the one after).
- Eleftherios (Terry) Soleas — Consolidate and map priorities to AI principles; prepare ranking/survey instrument; circulate to committee (target: early January).
- All committee members — Review university AI principles and consolidated priorities; be prepared to discuss at the Jan 13 meeting.
Next Meeting
Date: January 13th
Focus: Review of Consolidated Priorities
Adjournment
The meeting was adjourned with thanks from the Chair to all participants.
Research and Research Administration Subcommittee
Mandate: To ensure ethical, responsible and creative use of AI in scholarly research and academic pursuits at Queen’s.
Membership: Chaired by the delegate of the Vice-Principal Research (For 2025-2026 it is Dr. Amir Fam), with a diverse pool of faculty, staff, administrators, and students.
Key Initiatives:
- Address both emerging AI-related opportunities and concerns in research.
- Engage with Queen's research community on AI's impact.
- Develop guidelines informed by government agencies and peer institutions.
- Prioritize AI tool development or adaptation related to research and/or research administration that would offer the maximum benefit to the Queen's research community.
- Consult departments on ethical AI applications.
- Provide resources and propose a set of training opportunities for responsible AI use in scholarly activities.
Meeting Frequency & Communication
All Nexus Panels will follow the same structure as the AI Nexus, meeting monthly for the first year (for this subcommittee, meetings will be in person, with online Zoom access for members with extenuating circumstances), with subcommittees convening in between. Communication will be conducted primarily through a newly developed Microsoft Teams group and email with the Chair and the Special Advisor on Generative AI.
Duration
The AI Nexus and Nexus Subcommittees will operate for an initial term of two years, with the possibility of renewal based on institutional priorities.
| Name | Representation |
|---|---|
| Amir Fam (Chair) | SGS and VPR Research, Engineering |
| Karen Samis | Office of Vice-Principal Research |
| Gunnar Blohm | Biomedical and Molecular Sciences |
| Murray Lei | School of Business |
| Maggie Gordon | Library & Archives |
| Xiaodan Zhu | Engineering and School of Computing |
| Samuel Dahan | Law |
| Tracy Trothen | Religion and Rehabilitation Therapy |
| Jacqueline Galica | QHS/Cancer Clinical Trials |
| Ian Matheson | SGPSA/Education |
| Noreen Haun | Computing |
| Jennifer Hosek | Gender Studies, Language and Literature |
| Kyster Nanan | Molecular Pathology |
| Megan Roth | SGPS Representative |
| Il-Min Kim | Engineering and School of Computing |
| Eleftherios Soleas | Office of the Provost |
Meeting Minutes
Meeting Summary Minutes
Date: September 24th, 2025
Duration: 1-Hour
Format: In-person meeting with virtual participation option
1. Opening and Administrative Matters
- The committee commenced with a land acknowledgement recognizing Indigenous peoples as the traditional custodians of the university's lands, and acknowledged that environmental sustainability, thoughtful use of technology, and shared humanity uphold the values of reconciliation and justice.
- The Chair welcomed all members attending both in-person and online and noted apologies from members unable to attend.
- The meeting agenda was reviewed, encompassing introductions, committee scope and purpose, member priorities, and subsequent steps.
Present: Amir Fam (Chair), Vera Kettnaker, Il-Min Kim, Gunnar Blohm, Maggie Gordon, Jacqueline Galica, Kyster Nanan, Karen Samis, Jennifer Hosek, Patty Douglas, Murray Lei, Tracy Trothen, Noreen Haun, Xiaodan Zhu, and Eleftherios Soleas (Special Advisor to the Provost on Generative AI: SAGAI).
Regrets: Samuel Dahan, Megan Roth
2. Committee Introduction and Member Overview
- Each member introduced themselves, detailing their academic backgrounds and areas of expertise. The committee concluded that the roster provides representation and experience from across campus. Ex officio and ad hoc members can be added as needed.
- Representation spanned engineering (civil, electrical/computer), health sciences, humanities, nursing, business, education, research services administration, information literacy, sciences, AI-related research support, ethics, interdisciplinary approaches, and research administration practices.
3. Committee Purpose, Scope, and Organizational Context
- The committee is tasked with addressing AI’s role in two broad domains: AI as a component/tool of research and AI’s impact on research administration practices.
- Emphasis was placed on the committee’s purpose to develop practical, ethical, and adaptable guidance and tools supporting researchers and administrators.
- The committee operates as a subcommittee of the Queen's AI Nexus, which reports to the Senior Leadership Team through the Queen's Digital Planning Committee (QDPC), and interfaces with related groups focused on teaching and learning and on operational concerns within university AI governance.
- It is recognized that the field is evolving and constantly changing. As such, the mandate of the subcommittee is viewed in the context of a continuous, long-term process independent of the terms served by individual members.
- A primarily in-person, hybrid meeting modality was adopted to maximize participation and accessibility for those serving on the committee, while emphasizing the benefit of in-person interaction. Online participation will be the exception, reserved for extenuating circumstances.
4. Key Themes and Issues Identified by the Committee
4.1 Ethical and Responsible Use
- Committee members underscored the critical need for establishing ethical principles guiding AI use in research, including:
- Proper citation and authorship attribution.
- Transparency regarding AI assistance or augmentation.
- Avoidance of plagiarism or uncited AI-generated content.
- The humanities and social sciences were highlighted as essential contributors to framing ethical considerations and understanding AI’s societal impact.
- Maintaining integrity in research culture was seen as vital, particularly in fields where AI-generated content could blur lines of originality.
4.2 Researcher and Student Guidance
- Members shared concerns on how AI may be used by students and researchers, covering:
- Extent of permissible AI assistance in manuscripts, grant proposals, and peer reviews.
- Anxieties among students and employees about AI potentially replacing jobs, affecting enrollment and career prospects.
- Need for clear, consistent institutional guidelines for AI use in research and educational contexts.
4.3 Operational Use and Administrative Efficiencies
- Possible applications of AI to alleviate administrative burdens were discussed, including:
- Automating alignment checks between grant proposals, budgets, and central databases.
- Enhancing legal and service agreement drafting with AI-assisted language recognition (without replacing lawyers).
- Creating user-friendly chatbots for research logistics and funding opportunity navigation.
- Improving documentation and meeting minutes through AI transcription and summarization.
- Caution was raised about potential "vicious cycles" in which AI layers bureaucratic review upon review, which could lead to inefficiencies.
4.4 Security and Infrastructure Considerations
- Concerns about data security with third-party vendors were widely expressed, especially regarding:
- Confidential research information.
- Vendor ownership, geopolitical risk, and dependency.
- Potential inability to access or retrieve data if access is revoked.
- Recommendations included exploring local or enterprise-hosted AI models to maintain control and security.
- Suggested investments in hardware and software infrastructure to support these local AI solutions.
4.5 Inclusivity and Interdisciplinary Perspectives
- The importance of involving diverse disciplinary perspectives was reaffirmed, with calls to elevate voices from humanities, philosophy, and social justice into AI discussions.
- Accessibility considerations were emphasized, including supporting neurodivergent populations within research.
- Preparing students to thrive in a future shaped by AI was discussed as a core university responsibility.
5. Summarized Contributions and Initiatives from Committee Members
- AI assistance tailored for students with English as an additional language to support research communication.
- Evaluating the appropriate extent of AI use in grant proposals and research outputs.
- Alerts to the chilling threat posed to the humanities if AI use goes uncited, risking the foundations of academic dialogue.
- Focus on actionable frameworks for socially responsible AI innovation and research impacts.
- Development of a research AI framework including security risk evaluation.
- Preparing researchers to critically evaluate AI tools’ potential to disrupt academic integrity.
- Adherence to funders’ guidelines (e.g., Tri-Council) around AI use in research.
- Addressing students' and employees' anxiety about job security in an AI-augmented future.
- Ensuring research on AI incorporates humanities and ethics to complement technical AI research.
- Improving accessibility in research support and communication.
- The university's role in attracting and preparing students via instructional guidelines on evaluation.
- Investing in local hardware and software resources to host secure AI models.
- Incorporating the importance of community engagement and participant communication in research amid AI's evolution.
- Operational pilots for proposal reviews, legal agreement assistance, research services chatbots, and administrative task automation.
- Emphasizing data-informed decision-making for adopting AI tools in research administration.
6. Organizational Decisions and Process Recommendations
- The committee agreed to extend the meeting duration to 90 minutes monthly to accommodate deep discussion.
- Emphasis on leveraging offline work (e.g., priority collation, environmental scans, drafting) to maximize in-person meeting productivity.
- Members to participate in the priority collection exercise.
- Adoption of collaborative digital platforms (Teams) to collect documents, meeting minutes, surveys, policy drafts, and research materials.
- Agreement on iterative refinement of the committee’s terms of reference and scope based on emerging AI trends and university needs.
7. Action Items and Next Steps
| Action Item | Description | Lead | Suggested Due Date |
|---|---|---|---|
| Priority Collection | Collect members' priority initiatives and ideas for consolidation. | Chair / SAGAI | Before next meeting |
| Review Terms of Reference (ToR) in Teams Folder | Subcommittee members review terms of reference and propose changes and comments for improvement that the committee will consider | Subcommittee members | Before Next meeting |
| Environmental Scan | Update and expand an environmental scan of comparable university AI governance policies and frameworks. | SAGAI | Immediate |
| Resource Repository Setup | Create and organize a Teams folder for shared resources: policies, training, use cases, scans. | SAGAI/ Subcommittee Administrative Support | Immediate |
| Researcher Guidance Draft | Consider practical, clear guidelines covering ethical AI use, citation norms, and disclosure expectations in research. | Subcommittee members | To be discussed next meeting |
| Pilot Project Identification | Identify potential to collaborate with Research Services and IT to identify feasible AI-enabled admin pilot projects. | Subcommittee members | To be discussed next meeting |
| Training Program Outline | Review a modular training curriculum on AI literacy, ethics, and secure AI usage targeted at faculty, students, and staff. | Subcommittee members | To be discussed next meeting |
8. Closing Remarks
- The Chair emphasized the committee’s mission to enhance researcher productivity and ethical AI integration while fostering broader communities of practice.
- The importance of continuous alignment with university governance bodies (Provost’s Office, Senior Leadership Team, Digital Planning Committee, Senate, Board of Trustees) was noted.
- Members expressed enthusiasm for the collaborative, interdisciplinary approach and appreciation for the balance of philosophical and practical perspectives.
- The next meeting date and agenda will be announced shortly. In the meantime, members will work on priority initiatives and idea collection, and will review the draft committee Terms of Reference.
Meeting Summary Minutes
Date: October 24th, 2025
Time: 1:00-2:30 PM
Format: In-person meeting with virtual participation option
1. Opening and Administrative Matters
- The meeting commenced with a welcome from the Chair, Amir Fam, who acknowledged participants joining both in-person and virtually.
- Apologies were noted for members unable to attend in person.
- The Chair emphasized the confidentiality of committee discussions due to the sensitive nature of AI-related deliberations and the potential for premature information release to cause anxiety without context.
- The agenda was reviewed and approved with no modifications.
Present: Amir Fam (Chair), Eleftherios Soleas (SAGAI), Gunnar Blohm, Il-Min Kim, Jacqueline Galica, Jennifer Hosek, Karen Samis, Kyster Nanan, Maggie Gordon, Murray Lei, Megan Roth, Samuel Dahan, Tracy Trothen, Xiaodan Zhu, Vera Kettnaker, and Noreen Haun
Regrets: Patricia Douglas
2. Member Introductions
- New members Megan Roth (Society of Graduate and Professional Students) and Samuel Dahan (Law Professor, Conflict Analytics Lab) introduced themselves.
- Returning members briefly restated their roles, areas of expertise, and previous contributions to AI research or administration.
3. Review and Approval of Previous Meeting Minutes
- Minor corrections were made to the spelling of members' names.
- The amended minutes from the September 24th meeting were unanimously approved and will be posted.
4. Terms of Reference (ToR) Discussion
- Feedback submitted by members was reviewed.
- Discussion centered on ensuring the inclusivity of the research community, encompassing graduate and professional students, staff, postdocs, librarians, archivists, and faculty from diverse disciplines, including humanities and social sciences.
- The language of "research" versus "scholarly activity" was debated; consensus was to use "scholarly activity" for broader inclusiveness in documentation.
- The Committee agreed to clarify definitions within the ToR to avoid ambiguity for future reference.
5. Member Priority Initiatives and Ideas
- Members shared their top priorities, which broadly aligned with:
- Developing AI literacy and education tailored to the research community.
- Building and adapting AI tools to support research activities and administration efficiently.
- Ensuring ethical, transparent, and responsible use of AI, including appropriate citation, disclosure, and authorship standards.
- Addressing concerns over data privacy, security, and vendor risk in AI applications.
- Promoting inclusivity, particularly elevating humanities, social justice, and Indigenous perspectives in AI governance.
- Considering environmental impacts of AI use and infrastructure needs.
- Enhancing trust and receptivity among research participants and the broader community.
- The necessity of balancing rapid AI development with responsible usage was emphasized.
- Concerns regarding institutional coordination to prevent redundancy and allow for synergy in AI education and tool development were raised.
6. Key Themes from Discussion
- AI Literacy: A need for layered, accessible AI literacy programs tailored for diverse audiences within the university was highlighted, including the concept of critical AI literacy encompassing societal and ethical impacts.
- Tool Development and Adaptation: The Committee discussed prioritizing the creation or adaptation of AI tools to facilitate research tasks, feedback, communication, and administrative processes.
- Ethical Use and Guidelines: Members underscored establishing clear, adaptable guidelines to support responsible AI integration in research, respecting disciplinary differences.
- Security and Risk Management: The importance of centralized oversight to assess AI tools for privacy and data security was noted.
- Inclusivity and Social Justice: The Committee reaffirmed the imperative to include marginalized voices and communities in AI governance and to be mindful of global social justice implications.
- Environmental Considerations: Members recognized the environmental footprint of AI technologies and the need to consider sustainability in adoption strategies.
7. Next Steps and Action Items
- SAGAI will consolidate members' priority initiatives and conduct a preliminary environmental scan of comparable universities' AI governance approaches.
- A summary of ongoing AI literacy efforts across campus will be provided; coordination across units offering AI education and support will continue, to reduce redundancy and align resources.
- A preliminary draft of guidelines around ethical and responsible AI use in research and administration will be prepared for committee review.
- The possibility of pilot projects for AI-enabled research administration tools and processes will be discussed in subsequent meetings.
- Members are encouraged to continue providing feedback on the priorities documents.
8. Closing Remarks
- Members expressed appreciation for the rich interdisciplinary discussion and collaboration.
Meeting Summary Minutes
Date: November 25th, 2025
Duration: 90 minutes
Format: In-person meeting with virtual participation option
Present: Amir Fam (Chair), Eleftherios Soleas (SAGAI), Gunnar Blohm, Il-Min Kim, Jacqueline Galica, Jennifer Hosek, Karen Samis, Kyster Nanan, Maggie Gordon, Murray Lei, Megan Roth, Vera Kettnaker, and Noreen Haun
Regrets: Samuel Dahan, Tracy Trothen, Xiaodan Zhu
1. Opening and Administrative Matters
- The meeting commenced with a welcome from the Chair, who acknowledged participants joining both in-person and virtually.
- The agenda for the meeting was reviewed and approved, with an additional item introduced regarding an update from the Tri-Council on AI.
2. Review and Approval of Previous Meeting Minutes
- Members reviewed the minutes from the last meeting, held in October.
- The amended minutes were unanimously approved and will be posted.
3. Agenda Additions and Updates
- The Chair proposed an amendment to the agenda to include a brief presentation by Vera regarding recent developments and recommendations from the Tri-Council concerning Artificial Intelligence.
- Members agreed on the placement of this presentation after the approval of the minutes.
4. Presentation by Vera on AI Developments
- Vera provided a summary of the Tri-Council's recent input to the Canadian AI Task Force, highlighting three main points:
- AI for Science - There is a shift from funding research on AI to funding AI-supported research.
- Talent as the Future AI Engine - Emphasis on the need for training individuals to use AI responsibly.
- Infrastructure and Data Management - A call for continuous funding to maintain and interconnect fragmented databases to protect Canadian values in AI research.
- Members engaged in a discussion about these points, underscoring the importance of understanding AI's implications on research and data management.
5. Ethical Use of AI in Research
- The group discussed ethical concerns regarding the use of AI in evaluations and research publications.
- Updates were provided on NSERC's announcement regarding the relaxation of rules concerning AI in funding applications. It was noted that applicants no longer need to disclose AI usage in the application preparation but must still do so in publications.
6. Insights from Recent Conferences
- Members shared insights from recent conferences, including the Canadian Science Policy Conference, which positively showcased AI discussions.
- Key topics included bridging trust and accountability in AI governance, emphasizing ethical and community perspectives.
7. AI Literacy and Integration in Education
- SAGAI presented an update on AI literacy initiatives within the university, highlighting:
- Community practice sessions for faculty on AI topics.
- Development of online modules on academic integrity in the context of AI for first-year students.
- Members discussed the need for tailored AI education programs to cater to diverse audiences, fostering an understanding of responsible AI usage in research and administrative processes.
8. Considerations for Inclusivity and Data Management
- Discussion points included the necessity for inclusive dialogue within AI governance, particularly in relation to social justice issues.
- Several members voiced concerns regarding data fragmentation and the urgent requirement for a comprehensive data management strategy to support AI initiatives at the university.
9. Proposed Guidelines for Responsible AI Usage
- Draft guidelines for the responsible use of AI in research were introduced for discussion, covering:
- Definition of acceptable AI usage in scholarly outputs.
- Responsibilities of researchers and administrative staff in assessing and documenting AI tool utilization.
- Suggestions were made regarding the language used within the guidelines to ensure clarity and appropriateness for all audiences.
- We will continue to review and refine these draft guidelines offline between this meeting and the next.
10. Closing Remarks
- The Chair thanked members for their contributions and emphasized the significance of collaboration in shaping AI governance and usage policies.
- Members expressed appreciation for the in-depth discussions and agreed to continue refining the guidelines and exploring strategic initiatives in subsequent meetings.
Next Steps and Action Items
- Submit feedback on the AI usage guidelines to SAGAI by next meeting
- Review list of priority initiatives and provide feedback to SAGAI by next meeting
Meeting Summary Minutes
Date: December 15th, 2025
Duration: 90 minutes
Format: In-person meeting with virtual participation option
Present: Amir Fam (Chair), Eleftherios Soleas (SAGAI), Gunnar Blohm, Il-Min Kim, Jacqueline Galica, Karen Samis, Tracy Trothen, Xiaodan Zhu, Kyster Nanan, Maggie Gordon, Murray Lei, Megan Roth, Vera Kettnaker, and Noreen Haun
Regrets: Samuel Dahan, Jennifer Hosek
1. Opening and approval of previous minutes
- The minutes of the previous meeting were approved.
2. Feedback on AI Guidelines in Research: Discussions on revisions and next steps
- Document and terminology
  - The committee appreciated the feedback provided. The guidelines will be cleaned up (excessive bullets/formatting) and revised to reflect the group's comments.
  - Two terms were explicitly distinguished and defined:
    - "Human in the loop" — a system architecture in which a human operator is embedded as an essential component of a decision or control pipeline.
    - "Human oversight" — a person retains responsibility for results produced by an AI tool they use.
  - Members agreed that consistent, rigorous terminology improves the document's credibility.
- International / multi-institution collaborations
  - Concern was raised about different partners operating under different rules or standards (domestic and international).
  - Consensus: guidelines should encourage researchers to be mindful of partner standards and to consult existing data transfer / collaboration agreements rather than attempt to unilaterally regulate partner institutions.
  - Suggested text: remind researchers to review established collaboration agreements, NDAs, and funder terms, and to discuss AI/data use at project outset.
  - Recommendation to consult Research Legal / Research Services on whether explicit AI-related language should be added to data transfer templates (recognizing capacity and complexity concerns).
- Data safeguarding and model release risks
- Points raised about risks beyond directly uploading data to third-party tools (e.g., releasing trained models that can leak training data; extraction attacks).
- Need to expand “safeguarding data” to cover broader privacy and model-release risks and to include guidance on breach response.
- Accuracy, fairness and equitable access
- Recommendation to split “accuracy” and “fairness” into separate sections.
- “Fairness” to include considerations of equitable access to institutional AI tools (institutional licensing vs. ad hoc access that advantages well-funded labs).
- Acknowledge that inequities exist and recommend consideration of institutional provisioning where appropriate.
- Transparency / disclosure
- Members sought clearer guidance on when and to whom use of AI must be disclosed (e.g., participants in research, data owners, supervisors, co-authors, journals, grant applicants).
- Distinction noted between:
  - Routine, low-risk uses (e.g., grammar checks), where some funders do not require disclosure.
  - Uses involving others' data, identifiable data, or outputs that affect third parties, where consent and disclosure are expected.
- Recommendation: clarify scope and provide examples of “when expected” disclosures (e.g., participant consent forms, peer review handling of third-party IP/data, grant authorship/ownership cases).
- Agentic AI and emerging technologies
- Committee noted agentic systems are already here; guideline should be forward-looking and include references to agentic AI where relevant.
- Research ethics and REB checklists
- Suggested to include recommendations for Research Ethics Boards (REBs) to consider AI-related questions in consent forms and ethics checklists (particularly where AI agents interact with human participants or process participant data).
Decisions and next steps
- SAGAI will clean up the draft guidelines and incorporate edits that do not require full committee deliberation (terminology clarifications, broken links, basic expansions).
- Areas flagged as requiring further input (legal, technical model-release risks, REB checklist items, cross-institution enforcement) will be highlighted for targeted follow-up and consultation.
3. Draft consolidated priorities discussion
- The draft priorities document mixes strategic and tactical items; members recommend separating them into:
- Short-term, actionable “low-hanging fruit”
- Medium/longer-term strategic initiatives
- Suggested immediate priorities / low-effort wins:
- Short “limits of AI” / capability guide (snackable, multi-format — slides, one-pagers).
- Highlight and aggregate existing vetted resources (library resources, tri-agency guidance) rather than reinventing materials.
- A short checklist for just-in-time decision support (e.g., bias, hallucination, data owner consent, breach steps).
- Short hands-on workshops or recorded demonstrations (case-study prompt examples), plus recorded sessions to scale delivery.
- A central web presence / resource library (and optional newsletter) to publish committee outputs and curate materials.
- Training and formats:
- Different audiences have different needs (faculty, researchers, administrators, students); resources should be tailored.
- Offer multiple modalities: short videos (“snackable”), two-page guides, interactive sandboxes/workshops, and a searchable resource library.
- Sensitivities
- Be mindful that some community members ethically or politically object to using AI (e.g., concerns about training on copyrighted/stolen data). Guidance should respect such positions and avoid marginalizing those who opt out.
- Next steps
- Put consolidated priorities into a survey to allow the committee to rank items and identify which to pursue first.
4. Actions and owners (agreed)
- Clean up the AI guidelines document (formatting, broken links, definitions: “human in the loop” and “human oversight,” split accuracy/fairness, expand transparency/disclosure language).
- Highlight areas needing legal, technical, or REB input.
- Circulate revised draft to committee prior to next meeting.
5. Priorities for next meeting
- Review of revised AI guidelines (document edited and annotated for sections requiring committee/legal input).
- Review consolidated priorities and survey results (identify items to pursue in Year 1).
- Discuss development of the first outputs (capability limits guide, checklist, resource library page, recorded demo/workshop).
6. Adjournment
- Meeting closed with seasonal greetings. Committee to reconvene in the new year; Chair and SAGAI will circulate their respective drafts and the survey beforehand.
Teaching and Learning Subcommittee
Mandate: To provide consultation, develop guidelines, endorse best practices, and support the ethical and effective use of AI in teaching and learning.
Membership: Chaired by the delegate of the Vice-Provost Teaching and Learning (for 2025-2026, Dr. James Fraser), with a diverse pool of faculty, staff, administrators, and students.
Key Initiatives:
- Advise on policy updates related to AI in education.
- Develop resources and professional development opportunities in AI literacy.
- Foster dialogues between educators and students on AI’s role in learning.
- Endorse AI policies and evaluation tools for student assessments.
Meeting Frequency & Communication
All Nexus Panels will follow the same structure as the AI Nexus, meeting monthly for the first year, with subcommittees convening in between. Communication will be conducted primarily through Microsoft Teams and engagement with the Special Advisor on Generative AI.
Duration
The AI Nexus and Nexus Panels will operate for an initial term of two years, with the possibility of renewal based on institutional priorities.
| Name | Representation |
|---|---|
| James Fraser (Chair) | Arts/Sciences, Graduate Faculty, Physics |
| Christian Muise | Arts and Sciences, Graduate and Undergraduate |
| Brian Frank | Engineering |
| Richard Reeve | Education |
| Rosemary Wilson | Nursing/Health Quality Improvement |
| Scott Whetstone | QEDN, Law |
| Scott-Morgan Straker | English Literature and Creative Writing |
| Susan Korba | Academic Integrity and Student Academic Success Services |
| Erica Friesen | Library/Faculty of Law |
| Prameet Sheth | DBMS/KHSC |
| Stephen Thomas | Smith Business |
| Satish Kumar Kotha | Engineering |
| Stephen Larin | Political Studies/Arts/Sci |
| Tanya Joseph | SGPS Appointee |
| Alyssa Perisa | AMS Appointee |
| Dale Lackeyram | Centre for Teaching and Learning |
| Eleftherios Soleas (SAGAI) | Office of the Provost/QHS/Education |
Teaching and Learning Subcommittee Meeting Minutes
Date: September 26, 2025
Time: 9:00 AM – 10:01 AM
Location: Microsoft Teams virtual meeting
Present: James Fraser, Alyssa Perisa, Brian Frank, Christian Muise, Christine Coulter, Dale Lackeyram, Erica Friesen, Laura Shannon (delegate for Satish), Richard Reeve, Scott Whetstone, Scott-Morgan Straker, Stephen Larin, Susan Korba, Tanya Joseph
Regrets: Rosemary Wilson, Satish Kumar Kotha, Prameet Sheth
An editorial note on these minutes: while these minutes summarize the discussions of the committee, they should not be interpreted to mean that the committee members, drawn from various disciplines and roles across the community, are a monolith. There are varied perspectives on each and every issue, just as there likely are in the Queen’s community. These varied perspectives are valuable, respected, and important for considering and implementing AI at Queen’s in a manner that aligns with our community.
1. Call to Order and Welcome
The Chair opened the meeting, welcoming the committee members and acknowledging their expertise and commitment to shaping the university’s engagement with artificial intelligence (AI) in teaching and learning. The Chair emphasized the importance of harnessing collective insights from diverse disciplinary backgrounds to navigate both opportunities and challenges presented by AI technologies. Members were reminded that the meeting intended not only to share perspectives but also to establish actionable directions with long-lasting impact on the institution.
2. Introductions and Member Expectations
Each member introduced themselves, their roles, and their expectations for the committee’s work. Discussion included the balance between embracing AI’s potential to enhance education and preserving the integrity, critical thinking, and ethical standards foundational to the university’s mission, with members offering varied perspectives.
- Representatives brought perspectives from undergraduate and graduate student bodies, instructional design, academic success and writing support, educational technology research, academic integrity leadership, disciplinary faculty, and university governance.
- Expectations communicated by members of the committee included critically evaluating AI’s role within course design, supporting ethical and purposeful adoption of AI tools, fostering AI literacy, ensuring equitable access and inclusivity, and shaping transparent policies that meaningfully support both instructors and students.
- Several members noted the rapidly evolving AI landscape and the need for flexible, scalable approaches that could adapt to changes and new insights.
- Interest was expressed by several members in promoting AI as a means to strengthen human-to-human educational interactions rather than diminish them.
- The committee agreed that it would begin brainstorming and collating priority initiatives to be reviewed at its next meeting and reported up to QDPC.
3. Committee Purpose and Governance Structure
An overview was provided by the chair and the special advisor regarding how this subcommittee fits into the university’s governance framework for AI strategy as follows:
- The Teaching and Learning AI Subcommittee functions as one of several subcommittee groups within the Queen’s AI Nexus, itself accountable to the Queen’s Digital Planning Committee (QDPC) and senior university leadership.
- This structure is designed to integrate AI-related expertise and oversight across administrative layers, ensuring ethical, strategic, and value-aligned decisions.
- This subcommittee will initially meet monthly, with the flexibility to establish smaller working groups addressing specialised areas within teaching and learning.
- Communications and documentation will use Microsoft Teams and targeted email correspondence.
- Members were asked to commit to a one-year term with an option to renew, recognizing the importance of continuity and ongoing engagement. It was recognized that members may, for any number of reasons, elect to end their terms early, in which case they would be replaced with another member of the Queen’s community. Ex officio members can be added to the subcommittee at the committee’s request.
- A culture of transparency and collaborative decision-making was underscored as critical to fulfilling the committee’s mandate.
4. Review and Discussion of Terms of Reference
The committee reviewed draft terms of reference (ToR), with detailed discussion around scope, priorities, and workflows:
- The ToR explicitly focus on the ethical and meaningful integration of AI in teaching and learning, seeking not to advocate for uncritical adoption but to identify contexts where AI use aligns with institutional values and pedagogical goals. It was agreed that this includes identifying areas where AI is not to be used.
- It was discussed that the committee’s role includes advising on policy development, sharing best practices, supporting educational initiatives, and facilitating cross-campus and external collaborations.
- There was broad agreement that the committee will actively monitor emerging AI-related opportunities and challenges, coordinating with other Nexus subcommittees, such as operations, when issues overlap.
- Important clarity was provided regarding tool evaluation: AI technologies proposed for campus use will undergo security assessment procedures (SAP), and this committee will provide input regarding pedagogical appropriateness, ethical considerations, and alignment with academic integrity as appropriate.
- Discussion occurred around the idea that the committee should be empowered to recommend acceptance, restriction, or rejection of specific AI tools as appropriate.
- The inclusion of student voices (undergraduate and graduate) was welcomed by members of the committee, who deemed it vital for grounding recommendations in actual student experiences.
- It was also discussed that relevant university offices, such as academic integrity experts and privacy officers, will be invited to participate on an ad hoc basis, ensuring comprehensive insights inform deliberations.
- Members discussed the limitations of narrowly defining “AI” and instead preferred frameworks that focus on responsible delegation of academic tasks, such as distinguishing authorized collaborative work from unauthorized delegation to external systems or agents.
5. Broader Perspectives on AI in Teaching and Learning
A rich dialogue occurred around AI’s transformational potential and associated risks:
- Members emphasized the importance of recognizing inter- and intra-disciplinary differences in AI applicability, noting that pedagogical goals and ethical considerations vary widely across academic contexts.
- Student representatives highlighted the need to address the specific challenges faced by graduate teaching assistants and professional students, as well as the ecological implications of AI use, ensuring that student-centered approaches remain central to committee recommendations.
- Members reflected on historical technological adoption in education, such as interactive whiteboards, which promised dramatic transformation but largely resulted in enhanced efficiencies and modest pedagogical shifts. Concern was raised about avoiding similar “technology hype” traps with AI.
- The RAT framework (Replication, Amplification/Augmentation, Transformation) was cited by a committee member to characterise AI’s current phase at the university as primarily augmentative, rather than transformative.
- A few committee members expressed a desire to mature beyond augmentation toward genuine transformation that fosters new modes of learning and critical engagement.
- It was underscored that there is an urgent need to develop AI literacy that enables both instructors and learners to assess when AI tools are appropriate aids and when their use risks trivializing or undermining learning objectives. This literacy can take the form of widely available modules as well as longer-form courses.
- Concerns were voiced by a committee member about the anthropomorphizing of AI systems by students, which can foster unrealistic expectations and dependencies; other members agreed with this concern. Descriptions of this anthropomorphising by users, and of AI’s sometimes sycophantic behaviour, ranged from concern to “feeling creepy.”
- The potential for AI to either erode or enhance critical thinking and creativity was debated, with consensus that official guidance must emphasize preserving these core educational values.
- Members highlighted disciplinary variability in AI’s applicability; for example, some fields may integrate AI fluidly in problem-solving, while others confront stricter integrity challenges.
- A committee member noted the importance of transparency in AI use, including proper attribution and clear communication within syllabi and assignments about when and how AI may be employed. Other members of the committee agreed.
6. AI Literacy and Educational Resources
This discussion focused on the landscape of AI literacy initiatives and the committee’s role in resource development and coordination:
- Several campus units have begun creating modular AI literacy programs and resources, including efforts from the library, ITS, the Centre for Teaching and Learning (CTL), and individual faculties.
- The special advisor highlighted opportunities to coordinate these disparate activities into a cohesive, tiered AI literacy framework addressing multiple audiences: students (undergraduate and graduate), instructors, staff, and administrators.
- The committee discussed and debated the necessity of continuously updating AI literacy content, as rapid technological advances continually redefine capabilities and use cases.
- Environmental impacts of AI were identified as an auxiliary topic intersecting with literacy and ethics; while primary discussion might occur in operations-focused subcommittees, this group expressed openness to addressing related educational aspects, such as raising awareness among instructors and students.
7. Academic Integrity
This issue was pervasive in other discussions and ideas as well, but is presented here as a sign of how seriously this issue is being taken and broadly discussed by the committee.
- Members emphasized the development of policy frameworks that clarify sanctioned versus unsanctioned AI use, reinforcing responsible behavior.
- Discussion acknowledged the challenge of upholding integrity without resorting to punitive or prohibitive approaches that disregard AI’s educational utility.
- Strategies to integrate AI responsibly into assessments while preserving authentic learning were considered imperative by several members of the committee.
- It was widely agreed that AI literacy initiatives should explicitly include academic integrity components, educating students about ethical AI use.
- The committee discussed the belief that teachers will require resources and support to adapt assessments and feedback mechanisms in light of AI availability.
- Members noted the possibility, even probability, that many AI models have been trained on data that was not adequately licensed or ethically procured, meaning that the models are themselves subject to serious ethical concerns that must be acknowledged.
8. Key Challenges and Considerations
Beyond policy and resource development, many members raised practical and philosophical challenges:
- The necessity to address student and faculty anxiety or confusion about AI was emphasized; anxiety can stem from uncertainty regarding acceptable use or fear of losing academic autonomy.
- Members highlighted risks of unequal access and digital divides affecting the equity of AI integration.
- The committee discussed various complexities involved in technology evaluation: AI tools differ substantially in terms of data privacy, security, algorithmic bias, and environmental footprint.
- Because AI literacy and policies must evolve, mechanisms for regular review and updating were widely believed to be crucial.
- Members suggested the creation of an ongoing feedback loop involving students and faculty to monitor AI’s impacts and emerging issues.
- Interdisciplinary dialogue was encouraged to balance sometimes divergent priorities, such as innovation versus traditional pedagogical norms.
9. Next Steps and Action Items
- Resource Review: Committee members are requested to review a shared background document compiling existing campus AI literacy resources and policies prior to the next meeting.
- Initiative Proposals: Members are invited to submit up to five AI-related initiatives they believe warrant focus, with a goal of prioritizing actionable items for upcoming agendas.
- Collaboration: The Chair and SAGAI will coordinate invitations to relevant university representatives for future meetings, including academic integrity officers, privacy office personnel, and sustainability experts.
- Community of Practice: Engagement with the CTL-led community of practice on generative AI in education will be formalized, providing the committee with access to emerging best practices and grassroots innovations.
- Communications: Plans include developing clear, accessible messaging frameworks on AI use for faculty, staff, and students, potentially integrated into orientation and professional development.
- Environmental Considerations: Environmental impact discussions that intersect with teaching, learning, and AI literacy will be facilitated when appropriate.
- Policy Development: The committee will begin crafting recommendations on AI citation standards, academic integrity practices, and appropriate tool usage, grounding efforts in ethical principles and institutional values.
- Meeting Schedule: Monthly meetings will continue for the first year, with assessment of frequency and format after initial cycles.
10. Other Business
- The Chair reiterated appreciation for the wide-ranging and thoughtful contributions.
- Members acknowledged the fast-moving nature of AI developments and the need for the committee to remain adaptable and informed.
- The committee recognized the importance of maintaining a student-centered focus and ensuring equitable and inclusive adoption of AI.
- The Chair noted that meeting materials, recordings, and minutes will be distributed promptly to ensure transparency and ongoing engagement.
11. Adjournment
The Chair adjourned the meeting at approximately 10:01 AM, thanking members for their active participation and commitment. The next meeting will be scheduled for approximately one month from this date, with meeting details to be circulated in advance.
Minutes prepared by: Eleftherios Soleas. Minutes created from the Teams meeting transcript, attendees consented to its use. Queen’s LibreChat AI tool used to format initial themes and grammar check, expanded and revised by Special Advisor Generative AI, submitted to Chair for review, and then to be revised and approved by the subcommittee members.
Date of distribution: To follow shortly after meeting
Next Meeting: To be scheduled for approximately one month later; agenda and links to be provided in advance.
Meeting Minutes: Teaching and Learning AI Subcommittee – Second Meeting
Date: October 27, 2025 1-2PM
Present: James Fraser (Chair), Eleftherios Soleas (SAGAI), Christian Muise, Brian Frank, Richard Reeve, Scott Whetstone, Scott-Morgan Straker, Susan Korba, Erica Friesen, Prameet Sheth, Stephen Larin, Alyssa Perisa, Dale Lackeyram, Dawood Tullah (delegate for Tanya Joseph)
Regrets: Rosemary Wilson (teaching conflict)
Agenda Items:
1. Welcome and Introductions
- Chair welcomed attendees and invited introductions from members who missed the first meeting.
- Purpose of the meeting was reaffirmed: to continue developing priorities for the subcommittee and finalize foundational documents.
2. Communications Strategy
- Discussion occurred on how open and transparent the subcommittee’s communications should be.
- The Chair proposed an open model where members act as conduits to their units.
- The committee agreed to publish minutes on the AI Governance website once approved.
- There was agreement to review the first set of meeting minutes to fully capture the diversity of perspectives.
3. Finalizing Terms of Reference
- Members reviewed the draft Terms of Reference (ToR); no further revisions were suggested.
- The ToR will also be made publicly available on the AI website.
4. Themed Brainstorming and Priority Setting
The ideas from these breakout rooms have not been reviewed by the committee as a whole and are meant to indicate the variety of thought from the breakout rooms.
Breakout Room 1: Development and Evaluation of AI Tools
Ideas proposed:
- Inventory of AI Pilots and Tools: Create a centralized repository of ongoing AI initiatives for cross-disciplinary learning. Editorial: aligned with a similar idea from breakout room 2.
- Support for Evaluation: Recommend grant or institutional support for instructors testing AI tools, including help with ethics and assessment.
- Long-Term Vision (not immediate): Explore development of a “Co-Intelligence” AI tool acting as a resource for students. This item induced strong interest in discussing ideas and concerns from members of the committee.
Breakout Room 2: Faculty and Student Development and Support
Ideas proposed:
- Inventory of AI Literacy Initiatives: Maintain a dynamic, public-facing list of efforts to build AI literacy as well as AI resources. Aligned with an idea from group 1.
- Guidelines for AI Use: Develop generalizable, future-proof guidance (by focusing on concepts rather than specific tools) on when and how AI can be used in learning in various disciplines.
- Values-Based Communication: Create messaging that clearly outlines both benefits and risks of AI, aligned with Queen’s values. This should be the foundation of consistent messaging across disciplines, considering a diversity of perspectives.
Breakout Room 3: Strategic Frameworks, Governance, and Ethics
Ideas proposed:
- Transparency in AI Use: Advocate for policies requiring disclosure of AI use by instructors (e.g., syllabus creation, grading).
- Differentiated Guidance: Recognize and address the unique needs of undergraduate, graduate, and professional students.
- Academic Integrity and Freedom: Clarify expectations around AI use while respecting academic freedom; promote consistency across courses.
- Educational Technology Review: Update policies to ensure AI tools meet criteria for accessibility, equity (I-EDIAA), and environmental sustainability.
- Curriculum Development: Revise existing policies to guide in-class research and AI use in instructional activities.
5. Actions & Next Steps
- The Special Advisor will revise the meeting minutes to provide both detailed and summary versions highlighting the variety of perspectives.
- Members are encouraged to revisit and update the brainstorming document with new ideas or refinements.
- The next meeting will review the updated minutes and continue priority development.
Meeting Minutes: Teaching and Learning AI Subcommittee – Third Meeting
Date: November 24, 2025
Time: 1:30 PM – 2:30 PM
Present: James Fraser (Chair), Eleftherios Soleas (SAGAI), Christian Muise, Brian Frank, Scott Whetstone, Scott-Morgan Straker, Satish Kumar Kotha, Richard Reeve, Susan Korba, Stephen Larin, Prameet Sheth, Surabhi Velagala (standing in as SGPS delegate for Tanya Joseph), Erica Friesen, and Dale Lackeyram
Regrets: Rosemary Wilson, Alyssa Perisa
1. Welcome and Introductions
- The Chair welcomed attendees and asked for introductions from members attending their first meeting. The agenda that had been circulated was reviewed. The Chair briefly reviewed some external examples of where the question of GenAI in teaching and learning is being discussed: the HEQCO Consortium and GenAI, and Mark Daley’s presentation at CAGS 2025.
2. AI Literacy Resources and Ensuing Discussion
- SAGAI highlighted resources developed around AI literacy initiatives from the CTL, Library, SASS, and ITS.
- The committee also discussed recent developments in AI resources on the university’s website, including frameworks for ethical considerations and prompt engineering. Discussions occurred about feedback from various groups about necessary literacy resources.
- The discussion shifted to Student Academic Success Services’ aim to support students in developing their own writing voice, focusing on encouraging independent thought. Discussions focused on fostering an environment where students can express concerns about AI usage safely.
- Members initiated a discussion regarding the university’s stance on AI detection tools. It was shared that Queen’s policy does not endorse the exclusive use of AI detectors due to high false positive rates, noting that 20% of cases could wrongfully accuse students of misconduct. These discussions will be ongoing.
- Members emphasized the importance of academic freedom concerning the use or non-use of AI tools by instructors. Each instructor, as per the collective agreement, has the autonomy to decide their approach, and it was suggested that the subcommittee could provide reminders concerning existing policies for ethical use.
3. Facilitated Discussion: Policy and Framework Development
Editorial note: the committee was presented a slide deck by Dale Lackeyram, including a diagram, and then began a discussion that covered the various topics described below.
Clarity and Consistency in Policies: Dale (Director of the CTL) was invited to discuss his perspectives as the moderator of the breakout room on policies and cross-institutional perspectives. He emphasized, on behalf of the breakout group he led, the group’s desire for clarity in the guidelines and policies surrounding AI use, especially for instructors and students. He noted that many current policies tend to focus primarily on undergraduate students, which may not adequately address the diverse experiences and needs of graduate students or those in professional programs. This inconsistency can lead to confusion and uneven enforcement of academic integrity principles.
Full Disclosure Requirements: The group discussed the importance of full disclosure by instructors regarding their use of AI in course design and assessment methods. The group highlighted the necessity for transparent communication about how AI tools might be utilized to create course content or evaluate students. Establishing clear expectations in syllabi can foster a better understanding and mitigate potential misunderstandings related to academic integrity.
Guidance for Instructor and Student Interaction: It was discussed it is critical to provide instructors with comprehensive guidance on how to effectively integrate generative AI into their curricula while maintaining academic standards. This includes recognizing the different types of learning outcomes they aim to achieve and ensuring that AI tools are used to enhance rather than undermine those goals.
Ethical Considerations and Academic Freedom: The conversation touched on ethical implications of AI usage in education. The committee discussed the delicate balance between maintaining academic freedom for instructors and ensuring that ethical guidelines are followed. It was stressed that there is a need for the subcommittee and the university to respect instructors' autonomy while also providing a framework that promotes equitable and responsible use of AI tools across all faculties.
Equity, Inclusion, and Access: Discussions also revolved around ensuring that the adoption of AI technologies promotes equity and inclusion within the educational environment. Dale emphasized that the selection and implementation of AI tools should align with the broader institutional commitments to accessibility and diversity, ensuring that all students have equitable access to learning resources.
Next Steps for Developing Policies: The subcommittee will recommend reevaluating existing policies and consider developing a set of guiding principles for the responsible use of AI in educational settings. This would serve to streamline efforts across faculties and offer a cohesive strategy for addressing AI-related challenges.
4. AI Usages in Curriculum and Assessment
- A conversation took place about varied forms of AI utilization within courses. It was noted that instructors might restrict or provide guidelines regarding AI’s use, and the committee discussed the implications of these decisions on both students and broader educational policies.
- SAGAI proposed launching an "AI in the Wild" survey to assess how instructors are using AI across the university. The goal is to gather quantitative data that could inform adjustments to policies and aid in sharing successful practices among faculties.
- It was agreed that the survey would be shared with the Chair and a volunteering subcommittee member for review to ensure neutrality before being administered.
Actions & Next Steps:
- Survey Creation: Eleftherios Soleas to coordinate with the Chair and Stephen Larin to draft and review the "AI in the Wild" survey before distributing it to instructors across faculties.
- Draft priorities to be collected and shared with the committee for ranking and further consideration.
- SAGAI to share both the RISE module from SASS and the guidelines from Political Studies for consideration. All members are asked to review both before the next meeting.
- Guidelines Development: Discussion to continue on how best to create useful guidelines around AI use, particularly in relation to academic integrity and student support.
- Resource Compilation: The committee will continue to compile an inventory of educational resources and best practices regarding AI literacy across the university.
Adjournment:
Chair concluded the meeting, thanking attendees for their contributions and encouraging ongoing dialogue outside of the formal meeting structure.
Meeting Minutes: Teaching and Learning AI Subcommittee – Fourth Meeting
Date: December 15th, 2025
Time: 2:00 PM – 3:00 PM
Present: James Fraser (Chair), Eleftherios Soleas (SAGAI), Christian Muise, Brian Frank, Scott Whetstone, Scott-Morgan Straker, Satish Kumar Kotha, Susan Korba, Stephen Larin, Prameet Sheth, Rosemary Wilson, Erica Friesen, and Dale Lackeyram
Regrets: Richard Reeve, Alyssa Perisa, Tanya Joseph
Agenda Items:
1. Welcome and Introductions
- The Chair opened the meeting, welcoming attendees and introducing new committee member Stephen Thomas, a faculty member from the Smith School of Business. He teaches analytics and AI, and previously launched a Master’s program on the Management of AI.
- Dr. Rosemary Wilson from the Faculty of Health Sciences introduced herself, mentioning her roles as a professor in the School of Nursing and the Department of Anesthesiology, as well as her teaching focus on advanced statistics and philosophy.
2. External Context and Developments
- James shared a recent initiative from Purdue University, which now requires basic AI competency for all undergraduates as a graduation criterion. This raises the question of what constitutes AI competency.
- He also mentioned the Alan Turing Institute in the UK, which is developing a multifaceted approach to AI that embraces cultural complexity and frameworks for collaborative human-AI systems. This reflects the importance of thinking ahead regarding the evolving role of AI in education.
- Additionally, the Canadian Institute for Advanced Research (CIFAR) is supporting multiple AI institutes across Canada, providing resources relevant to post-secondary education, including one that Queen’s is joining called AMMI.
3. Review of AI Literacy Module
- The committee reviewed the AI literacy module developed by Student Academic Success Services (SASS) and invited feedback on its contents. Susan Korba detailed that it is embedded in the Academic Skills 101 course for first-year students and aimed at enhancing their understanding of AI.
- Feedback was gathered, highlighting the module’s strong points, such as its focus on equity considerations regarding AI and issues of bias, particularly in image generation technologies.
- Suggestions for improvement included emphasizing the importance of understanding the process over the end product, ensuring the module addresses the implications of using AI versus engaging in critical thinking.
4. Priorities for AI Implementation and Guidelines
- The committee discussed a framework for developing resources and guidelines that not only cover baseline literacy around AI but also adapt to the specific contexts of different faculties. Discussion highlighted the importance of having a process for evaluating and adopting AI tools.
- Committee members reflected on the need for a structured approach to ensure that the various uses of AI tools align with academic integrity and pedagogical goals.
- It was suggested that an inventory of tools and resources be created to aid departments in finding relevant and effective technology for their instructional practices.
5. Discussion on Policy Development
- Stephen Larin shared insights on the political studies AI policy drafted in response to increased concerns about academic integrity related to AI use. He emphasized the importance of addressing the teaching and learning processes reinforced by the policy rather than merely focusing on compliance.
- The group recognized that while academic integrity is a significant concern, there is also a place for fostering an understanding of AI's educational potential, and this perspective should be reflected in policy formulations.
6. AI in the Wild Survey
- Terry presented the "AI in the Wild" survey draft that aims to gather data regarding AI usage among faculty and staff at Queen's. The survey will assess what tools are being used and the contexts in which they are applied. There was some concern that the information would have a limited shelf life, but other committee members felt the results would have value.
- Members discussed the challenges of keeping the survey relevant due to the fast-evolving nature of AI technologies but acknowledged that collecting this data would provide a valuable snapshot of current practices.
7. Next Steps and Future Meetings
- The committee agreed to review the survey responses at the next meeting.
- Susan invited members to send their feedback directly to her for incorporation into future revisions of the Academic Skills 101 module.
- The committee suggested connecting departments interested in developing AI policies with those who have already done so, potentially creating a resource-sharing arrangement.
Actions & Next Steps:
- Feedback Collection: Members are encouraged to provide specific feedback on the SAS AI literacy module to Susan Korba, who is collating this information.
- Survey Launch: The "AI in the Wild" survey will be sent to faculty and staff in January for data collection on AI usage across departments.
- Policy Collaboration: Departments looking to draft or refine their AI policies may reach out to Stephen Larin for insights and examples from the political studies department.
- Next Meeting: The committee will reconvene in January to discuss the survey results.
Adjournment:
James Fraser concluded the meeting, thanking attendees for their participation and encouraging a productive break before reconvening in the new year.
Teaching and Learning AI Subcommittee Meeting Minutes
Date: January 12, 2026
Time: 3-4 pm
Location: Virtual
Present: James Fraser, Eleftherios Soleas, Prameet Sheth, Stephen Thomas, Christian Muise, Brian Frank, Susan Korba, Tanya Joseph, Dale Lackeyram, Scott Whetstone, Scott-Morgan Straker, Richard Reeve, Stephen Larin
Absent: Rosemary Wilson, Alyssa Perisa
1. Opening and Purpose of Meeting
- The Chair noted the committee is moving from exploration and information sharing into an action phase focused on producing concrete policy recommendations for the university.
- A "working group" model was proposed to enable deeper progress in smaller teams rather than full committee meetings.
- Members were advised that this will slightly increase the time commitment, estimated at approximately three hours per month, but will accelerate progress and outcomes.
- The goal is for each working group to develop a concise set of recommendations or a short white paper to be brought back to the full committee and refined before being circulated to senior leadership.
2. External Scan and Environmental Update
- The Chair highlighted external work at the University of Toronto on an Artificial Intelligence Virtual Tutor Initiative, emphasizing sandboxed AI environments aligned with instructor-curated content.
- Emerging research suggests improved learning outcomes when large language models are properly trained on domain-specific data rather than general web content.
- The field is rapidly evolving, with active scholarship and strong evidence generation underway.
3. Updates and Announcements
3.1 AI in the Wild Survey
- The survey launch has been postponed to February 4 at the request of HR to avoid overlap with an institutional employee experience survey.
- Delaying is expected to improve response rates.
3.2 Alberta Machine Intelligence Institute (AMII) Partnership
- Queen's has been accepted into the AMII consortium and now has free access to "Module Zero," a three-hour foundational AI and machine learning resource.
- The module covers:
  - How AI works
  - Ethical considerations
  - Critical thinking impacts
  - Limits of AI as a substitute for human development
- The resource will be placed in onQ for committee review.
- Members were encouraged to review the module internally before broader dissemination within Queen's.
- Technical constraints limit access outside Queen's.
4. Formation of Three Working Groups
The committee reviewed and refined the mandates of three working groups. Members were asked to indicate their preferences for participation at the end of the meeting.
Working Group 1: Defining What AI Literacy Should Look Like at Queen's
Purpose: To define what AI literacy means at Queen’s and recommend how the institution ensures all learners and educators achieve foundational competence.
Key Discussion Themes
Definition of AI Literacy
- Establish a shared institutional definition of AI literacy grounded in trusted external frameworks rather than reinventing new ones.
- Identify core abilities, knowledge, and ethical competencies.
- Avoid perfectionism; prioritize a practical, communicable framework that can be adopted quickly.
Differentiated Audiences
- Define AI literacy separately for:
  - Students
  - Instructors
  - Staff
- Acknowledge overlapping competencies but recognize role-specific expectations.
Embedding AI Literacy in Curriculum
- Explore how AI literacy can be systematically embedded rather than treated as optional or ad hoc.
- Consider whether institution-wide expectations or graduation-level competencies should be recommended.
- Identify who needs to be involved institutionally to make this scalable.
Case-Based Learning
- Strong support for using case studies showing:
  - Appropriate AI use
  - Inappropriate AI use
- Include disciplinary and policy context to support interpretation.
- Highlight ethical versus unethical decision-making to help learners recognize boundaries.
Measuring Success
- Identify what evidence would demonstrate successful AI literacy:
  - Competency indicators
  - Assessment approaches
  - Institutional uptake
- Consider what resources, infrastructure, and expertise are required.
Institutional Recommendation Orientation
- The group is expected to recommend processes, structures, and stakeholders rather than directly building all materials themselves.
- Outputs may include one or more short recommendation documents to the Provost.
Working Group 2: Student AI Guidelines for Teaching and Learning
Purpose: To examine gaps in current student-facing policies and develop future-proof guidance for ethical and effective AI use in learning.
Key Discussion Themes
Review of Existing Policies
- Begin by mapping what currently exists across Queen's.
- Identify gaps, inconsistencies, and areas where guidance is unclear or missing.
- Recognize that AI guidance is currently embedded across multiple policies (e.g., academic integrity) rather than existing as a standalone policy.
Future-Proof, Concept-Based Guidance
- Emphasize principles and concepts rather than tool-specific rules.
- Develop guidance that remains relevant as technologies evolve.
Transparency and Disclosure
- Strong support for requiring disclosure of AI use by students.
- Transparency is seen as enabling responsible use and reducing underground or hidden practices.
- Consider how disclosure mechanisms can be simple, consistent, and discipline-sensitive.
Consistency Across the Student Experience
- Address student confusion caused by inconsistent expectations across courses and instructors.
- Aim for as much consistency as possible while respecting disciplinary nuance.
- Avoid extremes where any AI use is treated as inherent misconduct.
Universal vs Disciplinary Boundaries
- Recognize that certain breaches (e.g., plagiarism, misrepresentation of authorship) are universal.
- Allow space for discipline-specific interpretation while maintaining institutional standards.
Programs vs Courses
- Discussion emphasized the importance of program-level expectations, not only course-level rules.
- Program-level coherence may better support developmental progression and accreditation requirements.
- Consider alignment with degree-level expectations and cyclical program review processes.
Case Studies
- Use case-based examples illustrating aligned and misaligned AI use in real academic contexts.
Academic Integrity Alignment
- Ensure strong coordination with academic integrity governance and expertise.
Working Group 3: Faculty-Focused Policies, AI Knowledge, and Transparency in Teaching and Learning
Purpose: To develop guidance and supports for instructors related to AI literacy, transparency, and responsible instructional use.
Key Discussion Themes
Instructor Knowledge and Capacity Building
- Instructors require targeted resources and skills development to use AI responsibly and confidently.
- Transparency is only meaningful if instructors understand the tools and implications.
Transparency in Teaching Practice
- Advocate for policies requiring disclosure of instructor AI use, including:
  - Syllabus creation
  - Assessment design
  - Grading workflows
- Promote awareness among students of how AI influences educational practices.
Standardized Disclosure Practices
- Consider developing a standard AI disclosure template for syllabi.
- Normalize transparency across the institution rather than leaving it to individual discretion.
Distinction from Student Policy
- Group 3 focuses on instructor-facing guidance, whereas Group 2 focuses on student-facing policy.
- Overlap is expected, but the audiences and implementation mechanisms differ.
5. Working Group Participation and Next Steps
- Members were invited to indicate interest in one or more working groups via chat.
- Working groups will meet independently before the next full committee meeting. A scheduling Doodle poll will be sent the following day.
- Each working group will bring back draft recommendations or early outputs for discussion.
- Resources from AMII will be shared for review.
- The next full committee meeting will include reports from each working group.
6. Adjournment
- The meeting was adjourned with appreciation for strong engagement and constructive discussion.
Queen’s Senior Leadership Team
The Senior Leadership Team receives recommendations from the Queen’s Digital Planning Committee and implements the strategic direction set by the Board of Trustees through its Finance, Assets, and Strategic Infrastructure Committee.
It serves as the senior administrative body responsible for university-wide digital and AI priorities.
When responsibilities intersect with the academic mission, including teaching, learning, and research, this work proceeds in parallel with Senate’s academic authority.
Broader University Governance Context
AI governance operates within the University’s broader governance ecosystem, which includes both academic and administrative oversight structures. On the academic side, Senate and its committees hold responsibility for academic policy, academic standards, and the integrity of teaching, learning, and research. AI-related initiatives that affect academic work are therefore situated within Senate’s governance framework.
In parallel, AI governance also aligns with established administrative and enterprise oversight bodies, including the Data Governors Council, the Data Trusteeship Committee, and units responsible for institutional compliance. It is informed by University-wide frameworks such as the Enterprise Risk Management Framework, the Cybersecurity Program, Internal Audit, and the IT Change Advisory Board.
AI Oversight