March 2026 OCUL Newsletter

AI and Academic Libraries: Five Librarians Reflect on Risks, Realities, and the Road Ahead

Ever since ChatGPT stormed onto the academic scene in 2022, post-secondary institutions and their libraries have grappled with how to introduce ever-changing AI technologies into research, teaching, and learning services. With ethical dilemmas touching on labour issues, intellectual property, data security, academic integrity, and more, it's no wonder AI is such a fraught topic in the world of librarianship, perhaps especially since the profession fiercely upholds core values of intellectual freedom, privacy, and the public good.

To tease out the current frictions between AI and academic libraries, we gathered five librarians for a lengthy email discussion on the risks and realities of these evolving technologies. Our thanks to Yoo Young Lee (University of Ottawa), Melanie Parlette-Stewart (University of Guelph), Joël Rivard (Carleton University), Mark Robertson (Toronto Metropolitan University) and Kari D. Weaver (OCUL and University of Waterloo) for taking the time to share their insights. 

OCUL: Let's start with a question that feels intrinsic to our conversation: Do you think academic librarianship and AI are at odds with each other?

Rivard: I don’t believe academic librarianship and AI are at odds, but their alignment depends on ensuring AI is shaped by responsible use. This is where our professional values position us to positively shape AI in academia. AI tools aren’t here to replace our expertise. With a human in the loop, AI becomes another instrument we can reach for when it helps, while we maintain the informed judgment and standards our users rely on. It’s augmentation, not substitution. Librarians also have a natural role to play in helping our communities navigate AI. Students and faculty are already experimenting with these tools to find information, summarize readings, or brainstorm research ideas. That creates an opportunity for us to step in and contextualize things: to explain where AI tools fit in the research process, when to use academic databases, and how to evaluate the content an AI model generates. In many ways, it’s an extension of the information literacy work we’ve always done. And as AI becomes more visible in academic life, it highlights the values that have always grounded our profession: privacy, access, transparency, and critical engagement with information, to name a few. The speed at which AI tools are being developed means not all of those values make it into the design conversation. That’s where our advocacy matters. We can push for tools that meet accessibility standards, respect user data, and make their processes transparent.

Lee: I don’t see academic librarianship and the use of AI as being at odds with each other, particularly in my current area of practice, metadata and cataloguing. Metadata and cataloguing work has always evolved alongside technology to meet users’ needs and support teaching, learning, and research by facilitating access to collections. I see clear opportunities for the use of AI in metadata and cataloguing, particularly for unique and distinct collections that lack adequate metadata. In many cases, describing these materials manually at scale is not feasible. The question, then, is whether and how we can apply AI to generate and enhance metadata in responsible and meaningful ways.

At the same time, the use of AI in metadata and cataloguing raises important questions about quality, bias, transparency, and professional judgment. The principle of “garbage in, garbage out” remains highly relevant, particularly when training data reflects historical and structural biases. For example, systems such as the Library of Congress Subject Headings have been shaped by predominantly white, Western, and colonial perspectives. In addition, the training of AI models often relies on large volumes of human-labeled data, frequently produced by low-wage workers in countries such as Kenya and Colombia.

As academic librarians, we have consistently adopted new technologies through a critical lens, showing careful attention to ethical concerns around equity, transparency, accountability, and our responsibility to knowledge systems. In practice, I believe this means approaching AI in metadata and cataloguing with intention and care. Rather than adopting AI tools wholesale, we must define clear use cases, understand the limitations, and maintain human oversight throughout the workflow. Used this way, AI can be an opportunity for us to support scale and consistency while preserving the central role of our professional work.

Parlette-Stewart: Joël, I particularly agree with your framing of AI as augmentation rather than substitution, and as a natural extension of our information literacy work. The reality is our students are already deeply engaged with these tools. This presents both a challenge and an opportunity for those of us working in learning and curriculum support. In my area, which spans information literacy, writing support, study skills, and access services, AI is showing up in increasingly visible ways. Students are asking about using AI for literature reviews, they're incorporating it into their writing process, and they're using it to help them study. Our data literacy specialists are working with students who've used AI to interpret datasets. These are challenging conversations, as some students may not be aware of the consequences of using AI for certain tasks or the nuances of how these tools work.

I think we have an urgent need for AI literacy that is situated within specific learning and disciplinary contexts. Supporting and helping students understand not just how to use these tools, but when, why, and under what conditions they're appropriate or effective. I think that this is really just an extension of the information literacy work we've always done: evaluating sources, understanding information production, recognizing bias and limitations, and making good choices about what tools to use.

At [the University of] Guelph, we've been working closely with our Office of Teaching and Learning to navigate the intersection of student support and faculty development around AI literacy. It's been essential to build those relationships and coordinate our approaches, because students are experiencing AI literacy (or the lack thereof) and faculty are also looking for support. There is a huge range of experience and comfort in this area. Within my own department, we're currently developing learning outcomes for AI literacy across our academic support services. The goal of this work is to help us have more informed, strategic conversations with faculty and other stakeholders about how to integrate AI literacy into our services and support courses and programs. At the individual level, it's still very much a negotiation; every instruction session we design with faculty involves discussion about whether and how to address AI tools.

Like Joël, I see our role as helping our communities navigate these tools critically and responsibly. I'd add that we also need to advocate not just for our role in this work, but for an approach that recognizes that not all students come to us with the same baseline digital literacy or critical thinking skills. We know that not all AI tools are designed with accessibility, privacy, or academic integrity in mind. Our values around equity, access, and user-centred service will be a strength as we do this work. I don't see academic librarianship and AI use as being at odds, but I do think our success in this work will depend on how well we can advocate and share our expertise, and on whether we can maintain our core commitment to critical, ethical, and equity-minded practice as these tools evolve so rapidly.

OCUL: Why do you think some libraries and/or librarians have been open to AI technologies?

Lee: From my experience, openness to AI is influenced by colleagues, leadership, and an organizational culture that supports experimentation and dialogue with critical thinking. When I began my first maternity and parental leave, ChatGPT had just been introduced. By the time I returned from my second maternity and parental leave, open discussions about AI, particularly generative AI, were already very active at my library. My colleagues were sharing their perspectives and describing how they were addressing the topic at Library Council. For example, I learned a great deal from Majela Guzman, Research Librarian, about the impact and implications of generative AI on her teaching and the ways it informs her approach to her work. I also learned a lot from Mish Boutet, Digital Literacy Librarian, particularly about the ethical considerations and environmental impacts of AI. Ongoing conversations with my supervisor, Liz Hayden, Associate University Librarian, Content and Access, were incredibly helpful. Her insights supported me in navigating this rapidly evolving landscape as I returned from leave. These open and thoughtful discussions sparked my curiosity. They encouraged me to explore AI technologies with a balanced and critical lens, neither uncritically embracing nor dismissing them, but asking how they might meaningfully support our work. Around the same time, a working group on generative AI was formed at my institution, and the group developed guiding principles to frame how we explore and engage with these technologies. Reflecting on my own experience, libraries may be more open to AI when there is space for collective learning and when experimentation is grounded in shared professional values.

Weaver: I think librarianship as a profession tends to attract individuals who are intensely curious by nature. AI technologies are a natural extension of that curiosity as they can spark new questions, enhance discovery, and help us organize and understand information in more nuanced ways. For many librarians, AI offers potential to improve access, support new research workflows, and expand the ways people interact with information. For me personally, 2026 is my 20th anniversary working in academic libraries. In that time, I’ve experienced fundamental shifts from the rise of the natural language-based search engine to the introduction of social media as a news and information tool, to AI. While I’m not sure anyone has the definitive answer on how to navigate AI in libraries, or higher education, or the workplace of the future, I do think that innate curiosity and interest in information behaviour encourage interest and experimentation for many. It also seems less threatening when you consider AI in the context of being the next information shift or evolution rather than the catalyst of impending doom! Echoing Yoo Young’s point, collective learning and experimentation are essential for forward momentum, whether that happens at an institutional, consortial, or global level in libraries.

Rivard: I feel as though some librarians have been open to, or have embraced, AI technologies because the conditions that support technology adoption are largely in place. Drawing on the literature of the Unified Theory of Acceptance and Use of Technology and more recent extensions that include emotional factors, this openness can be understood through performance expectancy, effort expectancy, social influence, facilitating conditions, and curiosity. At my institution, interest in AI has grown largely out of curiosity and user demand. Faculty and students are already asking questions about generative AI, and for many librarians, understanding these tools feels like part of staying relevant and responsive. On the technical side of things, AI tools are also relatively easy to experiment with, which lowers the barrier to entry and makes low-stakes exploration possible. Just as important is the institutional environment, as others mentioned previously. The library and university have created space for experimentation through access to tools, time to explore, and opportunities to share experiences with colleagues. There have been multiple formal and informal gatherings that have enabled colleagues to discuss, test, and reflect on AI tools. As librarians talk openly about what works, what doesn’t, and what feels uncomfortable, AI becomes a shared professional conversation. In short, some librarians are engaging with AI not because they see it as a solution, but because the conditions for thoughtful, critical exploration are in place.

Parlette-Stewart: I'm definitely finding myself agreeing with many of the themes you've all raised. I agree with Kari that librarianship tends to attract curious people. That curiosity is definitely something that drew me to the profession. I also think Joël's point about the conditions that support technology adoption is important. Curiosity alone isn't enough; we need the institutional structures and culture that allow for thoughtful experimentation. At Guelph, we've tried to create a variety of opportunities for engagement with AI. Our library has an active AI committee that led a full-day staff professional development session and now facilitates an ongoing community of practice for library staff. This peer learning space has been invaluable for sharing different experiences and discussing big issues related to AI. Beyond the library, I've been co-leading campus communities of practice for teaching and learning with our Office of Teaching and Learning, and I facilitated AI and Academic Integrity campus conversations to create space for facilitated discussion across different campus roles and perspectives. I think that these structures really matter because, as Yoo Young described, openness to AI is deeply influenced by colleagues, leadership, and organizational culture. 

At Guelph, the work has been strongly supported and encouraged by our senior leadership team, which has made it possible for staff to invest time in developing skills and exploring applications. However, I think we're still navigating how best to do this work without it feeling like we're doing it off the side of our desks. I think it's a bit of a shift as we start to see that this IS the work now. I'm also really inspired by colleagues across campus who are leaning into their own discomfort with these technologies. I've worked with faculty who aren't necessarily comfortable with AI or consider themselves experts, but who are still grappling with how AI changes their teaching and experimenting with integrating it into the classroom. It's been great to have the Centre for Advancing Responsible and Ethical Artificial Intelligence (CARE-AI) on campus, as they're active in community discussions and bring great experience and expertise. These partnerships and relationship-building opportunities have been energizing and have reinforced that this work is more meaningful when it's collaborative. I think that the openness is driven by both curiosity and pressure. Staff are genuinely interested in the pedagogical possibilities, but there's also the practical reality that students are already using these tools, and the changes in this area are happening at a pace that sometimes requires us to move faster than feels entirely comfortable. In learning and curriculum support specifically, I think we're experiencing both sides of this. The curiosity is real, but so is the urgency. We've recognized a campus need and an opportunity for the library to play an important role in helping the institution navigate this moment thoughtfully. People are interested, the campus needs support, and we have expertise to offer. We need curious professionals, supportive leadership, collaborative structures for learning, and the institutional permission to experiment and sometimes fail.

I also think this moment requires a certain comfort with ambiguity. Many people are understandably worried about getting things "right," but the reality is that the landscape is shifting so rapidly that we're going to get some things wrong and need to evolve our approaches as we learn. Leaning into that discomfort, rather than waiting for certainty, feels key.

OCUL: As tech continues to rapidly evolve, have your feelings towards AI changed at all over the past two to three years? 

Lee: After reading Kari and Joël’s responses, I realized why academic librarianship has always been so attractive to me! It aligns perfectly with my curious nature. In addition, through my participation in CARL’s Flourish: BPOC Academic Library Leadership Program, I’ve had the opportunity to learn more about myself in relation to others. I’ve come to see that I am not only deeply curious, but also highly practical in how I approach my work. In that sense, I see new technologies as opportunities. My instinct is to explore them right away and ask: how might this apply to my area of work? Is there a better and more effective way to do what we’re already doing? At the same time, I know that I can become easily excited about new possibilities and, in that enthusiasm, overlook important considerations.

My first exposure to AI was watching the Go match between Lee Sedol and AlphaGo in 2016. I was shocked not because Lee Sedol lost four out of five games, but because I had never seen Go played the way AlphaGo played it. That moment sparked a deep curiosity in me about AI, but I didn’t know how it could be used beyond scripting to automate repetitive processes. Since then, as I’ve read more and engaged with ethical discussions (love OCUL’s AI book club!), I’ve come to believe that we have a responsibility to use AI thoughtfully and responsibly. I'm also fortunate to work with colleagues who bring sharp and critical perspectives. Their questions and cautions help balance my enthusiasm and push me to think more carefully about how, when, and why we adopt new technologies. I still remember the work of Dr. Latanya Sweeney on discrimination in online ad delivery, which powerfully demonstrated that technologies are never neutral. I am still excited about new possibilities with AI, but this reminds me that curiosity and excitement should be paired with accountability.

Rivard: Over the past two to three years, my feelings about AI have shifted quite a bit. If I had to sum it up, I’d say it’s been a bit of a rollercoaster. Early on, my response was mostly driven by curiosity and excitement. I tend to gravitate toward new technologies, especially ones that invite hands-on experimentation, so I was eager to see what AI tools could actually do. I began exploring the tools as potential supports for teaching and learning, as well as for streamlining some of the administrative, day to day parts of my work. Like many others, I was caught up in the early optimism. It felt as though AI might finally take some of the more routine tasks off our plates and free up time for higher value work. As I spent more time experimenting, though, that excitement gave way to a sense of overwhelm. New tools seemed to appear constantly, each promising to solve a slightly different problem. It became harder to tell what was genuinely useful and what was mostly hype. As the novelty wore off and real-world use set in, I found myself becoming more skeptical. Many tools along with their models simply didn’t live up to the expectations set by their marketing, and the gap between promise and practice became increasingly obvious. That realization turned out to be helpful. Once I let go of the idea that AI was a sweeping solution, my outlook became more pragmatic. Instead of focusing on what AI should be able to do, I started paying closer attention to what it could realistically support right now. Thinking of AI as just another tool, rather than a transformational fix, made it easier to integrate into my work in meaningful, limited ways. It also became clear that using AI well would require adjusting workflows and rethinking processes, not simply dropping it into existing ones. Interestingly, that reset led to a renewed sense of cautious optimism. With more grounded expectations, AI felt less overwhelming and more manageable.

Parlette-Stewart: I appreciate everyone's honesty about their journey with AI. I think that openness is really helpful for everyone. My own journey has involved ongoing adjustments. In late 2023/early 2024, we had a project within our Library Teaching and Learning Committee that produced a white paper on AI and what we needed to be thinking about as an organization. At that point, I was cautiously observing, but then I went on maternity leave, during which I didn't pay too much attention to how things were evolving. What shifted things for me was returning from maternity leave in late 2024 and starting a fellowship focused on academic integrity. I very quickly realized that academic integrity and AI had become so interconnected that I needed to completely reimagine my project and do a lot of professional development to gain more experience and knowledge in this area. A couple of things have felt particularly key in my own experience. One of those is hearing directly from students. We hosted a student panel at an AI in Academic Skills Workshop event in December, and the way they talked about uncertainty and the need for guidance really stuck with me. Another is that my own teaching practice has pushed me to constantly adapt. As a sessional instructor, I've redesigned my course assignments several times out of necessity over the past 18 months. I've been spending a lot of time thinking about how to create assignments that are meaningful learning experiences in the context of AI. I also worry that we're applying the old myth of the "digital native" to students in this context, assuming they'll naturally know how to use AI tools critically and ethically. (Some) students may be actively using the technology with varying degrees of success, but comfort doesn't equal critical digital or AI literacy. Like Joël, I've moved toward a more pragmatic stance, identifying where AI tools genuinely support my work rather than feeling pressure to adopt them wholesale.
I'm also concerned about the literature around deep thinking and critical thinking (one of those studies). I think, like I said earlier, we're not going to get it right every time, and that needs to be okay. 

OCUL: Let's talk a little about how libraries are integrating AI into their services and workflows. Do you think there are risks involved with that kind of integration?

Rivard: One of the key risks of integrating AI into library services is maintaining user trust. Libraries are trusted institutions, and introducing AI can raise concerns about privacy, data use, and reliability. If users do not understand how AI tools generate results or handle information, that trust can be strained. Another risk is the potential for bias or misrepresentation in AI-generated search results or summaries. Without sufficient human oversight, these systems may surface incomplete or skewed information, posing challenges to accuracy and equity. At the same time, not engaging with AI carries its own risks. As users become accustomed to natural language searching and responses, traditional search interfaces that rely heavily on Boolean logic may feel unintuitive and discouraging. This gap can reduce engagement with academic library resources. There is also a capacity risk. AI can help reduce time spent on repetitive, low-stakes tasks, allowing library staff to focus on higher value work such as teaching, consultation, and user support. Without continued experimentation with AI tools by library staff, libraries may limit their ability to shift effort toward the work that benefits users most.

Lee: I agree with Joël that there are risks both in integrating AI and in choosing not to integrate it. As I mentioned earlier, data is foundational in AI. Algorithms process data through a training process in which patterns and structures are learned. The outcome of this process is a model capable of generating outputs or making predictions based on those learned patterns. One of the risks of integrating AI into my area of work, metadata and cataloguing, is that it may reinforce dominant Western and colonial perspectives embedded in the training data and existing metadata standards, thereby excluding or marginalizing other ways of knowing. While many AI tools are now available to us, the underlying training data, algorithms, and models are often not transparent or publicly shared. This lack of visibility makes it difficult to assess bias, limitations, or alignment with our institutional values.

At the same time, I don’t think we can avoid AI, both as a tool and as part of the next shift in how information is created, described, and accessed. I really appreciate the framing Joël and Kari suggested. That perspective helps ground the conversation in responsibility rather than fear or hype. If we opt out entirely, we may fall behind in addressing large-scale metadata gaps, particularly for unique or under-described collections where manual description at scale is not feasible. I see AI as an opportunity. It may help us surface and make discoverable collections that remain hidden due to limited metadata. Many institutions are already experimenting with AI to enhance metadata, particularly for records that previously had little or no descriptive information (like OCUL's Government Documents Project, SUNYLA Midwinter 2026, and CRL’s Pilot Project). I'm interested in learning from these initiatives while critically evaluating how such tools align with our values and responsibilities.

Parlette-Stewart: I agree that there are real risks in both directions. For learning and curriculum support, one of my primary concerns about integrating AI is how we support students and faculty in developing the academic skills we've always supported (research, writing, etc.) while building and maintaining critical thinking skills. I worry about creating learning environments that require students to exercise constant "self-restraint" to resist using tools that promise quick answers. That feels unfair and unrealistic. I'm also concerned about how academic integrity issues will evolve. There's growing consensus that we're past the point of reliable AI detection, and I think we need to fundamentally rethink how we teach and approach academic integrity. I'm inspired by the work of Sarah Eaton and others encouraging us to embrace "post-plagiarism" thinking: moving beyond detection and punishment toward designing authentic assessments and building cultures of integrity. 

Joël, your point about maintaining user trust really stood out to me. In academic skill support, we need to be clear about the value the human element brings. What is our expertise? What can we do that students can't replicate with an AI tool? At the same time, the risks of not engaging feel significant. AI is so deeply integrated into the information and education landscape. If we're not supporting students and faculty in navigating this landscape critically, we risk losing relevance in higher education. It feels like our choice is whether we help students develop critical AI literacy, or whether someone else fills that gap. I also think we have difficult conversations ahead within our organizations. Not everyone is comfortable with these tools or willing to engage with them for a variety of reasons, and we need to navigate complex feelings and experiences among our colleagues as we figure out how we're going to move forward.

OCUL: Given some of the risks you've touched on, are there ways you think libraries and their teams can navigate the waters of AI and academic libraries? And is there something unique that librarians and library workers bring to the table to support AI use and exploration?

Weaver: I've been thinking a lot lately about how AI has really thrown us into a state of great abundance. Abundance with the variety and availability of AI tools. Abundance with the amount of data and information. Abundance in extending the individual’s belief in what they can do with AI at their disposal. And, finally, abundance in all the visible and opaque ways AI is reshaping our relationship with the concepts of reality and truth. While I don’t have a magic solution, I do have great faith in librarians and library workers who have been navigating information abundance their entire professional lives. In our work we’ve labeled, categorized, and brought order to the abundance. We’ve developed and taught people strategies to evaluate and filter the abundance. We’ve empirically studied how students and scholars try to navigate the abundance themselves. We’ve considered and widely discussed how information abundance makes us feel as humans. And the nature of our work is relational and deeply rooted in connection and understanding of other humans and their needs, which is the piece that is pushing many past their initial reservations about AI and toward earnest engagement. In a world that feels like it’s always too much, that’s such a head start. 

Rivard: Academic libraries have always been defined by their relationship with information, and that remains our strongest asset as AI tools continue to evolve. Librarians and library workers bring a deep understanding of how information is created, organized, evaluated, and accessed. That expertise positions us to help our communities engage with AI in ways that are thoughtful, transparent, and aligned with our values. In many ways, we’re uniquely equipped to translate between emerging technologies and the information literacy principles that guide academic work. As teams begin navigating AI more intentionally, I think this is a good moment to shift from open-ended exploration towards clearer, outcome-driven experimentation. Rather than adopting tools simply because they’re new or widely discussed, I would recommend teams look at identifying specific service or workflow challenges and ask where AI might meaningfully improve the user experience. For certain teams, this could include setting measurable goals to create a structure that helps them evaluate what’s actually working. Internally, there’s also value in looking closely at routine, repetitive, or time-consuming tasks. AI tools can certainly play a role here, but they shouldn’t be the only thing on our radar. A growing number of new platforms (some AI-enabled, some not) are designed to streamline workflows and can address long-standing process challenges. This is a good moment to reassess internal practices with a wider lens and identify where any well-designed tool, not just an AI-branded one, could make work more efficient. By broadening the scope of what we evaluate, teams can focus on solutions that genuinely improve operations rather than defaulting to AI simply because it’s the trend of the moment.

Parlette-Stewart: I think where we can add the most value is in sharing our expertise and using it in meaningful engagement with our campus communities. I agree with Joël that we need to move towards outcome-driven experimentation. There's so much conversation happening about what AI tools can do, but we need to be thoughtful about what we're actually trying to achieve. I think we need to identify specific challenges and measure whether AI genuinely improves things. I've seen staff use AI effectively for time-intensive administrative tasks, which frees them to focus on work that requires their unique expertise and the human connection that defines our services. There's also significant potential for AI to improve accessibility, and I think that's an area where libraries should be leading thoughtfully. But leading well requires us to continue developing our own expertise so we can be confident partners with our communities. I agree that we need to maintain our core values through all of this. The relational aspect of our work, the empathy and care we bring to supporting students, faculty, and colleagues, is so important. I think this will continue to be true as we engage with some of the more challenging aspects, including what Kari mentioned around truth and reality. We need to provide balanced perspectives to our users and not shy away from difficult conversations about where AI helps and where it falls short.

Robertson: I think a lot about the role of the library and library workers in the provenance of information. The culture of libraries is not just about finding information but sourcing it. It strikes me that this puts us smack at the centre of the conversation about AI, since the trouble with AI is how it serves up information while obscuring its provenance. It means that AI is at once an existential threat and a spectacular opportunity for libraries: if our knowledge economy no longer cares about the sourcing of information, we are dead in the water. But if provenance retains its currency, then AI makes us an even more central institution and profession. As for what we can (or should) bring to the table: I would like to see us help users, students, and researchers reflect more on how they come to know something in this new (and changing) environment, almost like a phenomenological approach, where we get people to reflect on real examples of how they use AI in their process of seeking to find something out, and where it fits and where it doesn't. That means taking a broader view of the process of seeking information and drawing on people's own experiences. I think that is exactly what we did with Wikipedia.

Lee: One last thing I would add is that it would be easier if we started from the problem. What problems are we trying to solve? Is it student information overload? Improving the discoverability of our collections? New user needs? As you all mentioned, we can shift our perspective and treat AI as a tool to help us address these challenges and support our mission. Whenever I feel overwhelmed by all these new things (Kari, you're so right – the abundance!), I try to go back to the problem and ask myself: is AI actually helpful in solving this problem? That framework helps me a lot personally.

March 27 Deadline for Subcommittee Nominations

OCUL is seeking nominations for several subcommittees, including groups that lead collection assessment, collaborative platform systems and analytics, and more.

For OCUL member library workers, serving on a subcommittee is an opportunity to strengthen collective capacity and directly enhance the shared library services and resources provided to campus communities across the province. Find the full list of subcommittee opportunities and submit an online nomination. The nomination period closes March 27.

Welcome to Nicole Morgan

We are thrilled to welcome Nicole Morgan to the OCUL team! As the new Network Zone Collections Coordinator, Nicole will be supporting collection work in the Omni Network Zone and maintaining consortial license information. Nicole brings demonstrated expertise in e-resource coordination and content management, having previously been a team member at Ontario Colleges Library Service where she played a key role in delivering shared services to the province’s 24 colleges. Please join us in giving Nicole a warm welcome to OCUL! 

Many Roads, One Destination: Student Accessibility Initiatives

Learn how accessibility is at the forefront of work to build inclusive student spaces, programming and campus communities at Carleton University and Toronto Metropolitan University. Join the next session in a series of virtual discussions hosted by the OCUL Accessibility Community! 

  • Date and time: Monday, March 9, 1-2 p.m. Eastern Time
  • Format: Online via Zoom

Register to attend

ICYMI: What's Happening at OCUL

OCUL Visiting Researcher to Explore How AI Challenges Authorship and Originality – Stephen Spong joins OCUL as this year’s visiting researcher, with a project that will explore how generative AI challenges traditional understandings of creativity, originality, and authorship.

OCUL Strengthens AI Strategy Through Collaborative Advisory Committee – An advisory committee has been struck at OCUL to shape a longer-term plan for engagement with AI at the consortial level.

OCUL Advances AI and Machine Learning Initiative with Major Project Milestones – In this progress update, read about two pilot projects reaching major milestones and an expanded capacity building program that offers learning opportunities across online events, a blog partnership, and curated reading club. 

Your Feedback

  • Have a story you think would be a fit for the OCUL Newsletter? Email katrina.fortner@ocul.on.ca to share your idea.
  • We are committed to providing equitable access to OCUL online publications. To provide feedback on the accessibility of our newsletter or to request an alternative format, please contact ocul@ocul.on.ca.
  • Next newsletter issue: August 2026