The Distant Librarian – You’re awesome for using RSS!

New Survey Report Shows How Library Workers Use AI in Ontario


A new report shows how Ontario library professionals are using AI tools in their day-to-day work and their perspectives on these burgeoning technologies. See the press release and access the 8-page PDF at
https://ocul.on.ca/ai-machine-learning-2025-survey-report
Surely Ontario isn’t unique?
This is one of the first reports I recall seeing that includes at least a few specifics on how library workers are using AI tools, and which tools they’re using, though there still aren’t many specific cases outlined. Still worth the time to read.
If I’d answered the survey, my responses would’ve shown the following tools used, in roughly this order: Perplexity, ChatGPT, Claude, Microsoft Copilot (our institutionally-supplied tool). And using the categories from the survey, I use these tools in this order: To fill gaps in disciplinary knowledge (including replacing traditional web searching with Perplexity), brainstorming, and coding.
One thing not obviously addressed in the survey: I have paid out of pocket to use some of these tools, though I have not taken out an annual subscription yet. In fact, I just hit a limit on Claude that makes me think I’ll have to toss them another month’s revenue ($28 Cdn in this case) to finish a project I’m working on. I have also paid out of pocket for API access to a number of LLMs – probably sitting at $50-$60 Cdn in total for those.
What are YOU actually using, and for what purpose(s)? Is work paying for it, or are you?
Oh, and what would you call someone who experiments with multiple models/tools? The best I’ve come up with is polyAImorous.

Typepad Shutdown Announcement


Yikes! I can’t say I’m surprised, but I just received the following email from Typepad support (this blog is hosted on Typepad):
We want to inform you that we have made the difficult decision to discontinue Typepad, effective September 30, 2025.
What Does This Mean for You?
After September 30, 2025, access to Typepad – including account management, blogs, and all associated content – will no longer be available. Your account and all related services will be permanently deactivated.
Please note that after this date, you will no longer be able to access or export any blog content.
What Do You Need to Do?
If you need to retain your content, please export your content before September 30, 2025. After this date, your content will no longer be accessible to you and will not be available for export.
You can find more information on exporting here.
Refunds & Final Billing
Effective August 31, 2025, we will no longer charge you for services.
If you have made a recent payment, we will attempt to issue a prorated refund to the payment method on file.
Please verify that your payment method on file is up to date to ensure successful refund processing.
Have Questions or Need Assistance?
If you have any questions, please refer to our Frequently Asked Questions page here.
If you have any additional questions or need help, please open a ticket at Help > New Ticket from your Typepad account.
We truly appreciate your business and apologize for any inconvenience this may cause. Thank you for being a valued customer.
Sincerely,
Typepad
Right now, there’s no confirmation on the actual site, but I guess I’ll be looking for a new home, and possibly a new domain! Stay tuned.

How are information professionals in the UK using Generative AI?


A recent report from CILIP, the Library and Information Association in the UK, provides results from a small survey of 162 “information professionals” in the UK from late 2024. AI and the UK Library Profession: Survey Report 2025 runs 33 pages long, but much of that consists of selected open-text responses to the survey.
I found that the results closely mirrored what I’m seeing at MPOW and in North America, except for reference chatbots, which are an important thing in my library, but apparently not so much over there or over here, generally.
I always like to hear specifically what tools others are using, and here the top three were ChatGPT, Copilot and Claude, but very closely followed by Gemini.
Two quotes that stuck out:
The commonest activity that there is in the area of AI literacy, is training users to understand AI as an aspect of information literacy, rather than direct uses on AI services.
Again, that’s just like here, and:
It may be significant that fears about job displacement among librarians did not appear frequently in comments. There was no direct question in the survey about this but it did not appear as an issue in open text questions.

Science journalists find ChatGPT is bad at summarizing scientific papers (but are they, really?)


As reported by Ars Technica, with many more details in the White Paper (PDF) written by the Science Press Package team, SciPak.
I have no reason to doubt the findings, but do note the caveats that appear in the paper itself, that,
This does not mean that the LLM has no potential value as a tool for other science communication outlets. The findings of this project are specific to ChatGPT Plus’ adherence to SciPak style and standards. Moreover, this assessment could not account for human biases…
Regarding that last point, Ars Technica points out,
…which we’d argue might be significant among journalists evaluating a tool that was threatening to take over one of their core job functions.
The actual prompts used by the evaluators are listed in the appendix of the paper (pg. 9), and are a nice illustration of how one should write a prompt if one is looking for a specific type of response. Sadly, the paper doesn’t indicate whether the results of that most-specific prompt were generally better than the less-specific ones:
In early April 2024, the team revised the writer survey to include more specificity. Before then, each writer who nominated a paper reviewed the overall ability of ChatGPT Plus, assessing its collective performance across the three generated summaries. After the revision, writers evaluated the LLM’s performance for each individual summary instead. This led to a more detailed interpretation of the LLM’s skills. Because this data is qualitative and anecdotal, it does not lend itself to graphs.
It’s important to do your own testing, I think, because one of the ways we’re seeing students, especially, use LLMs is for exactly this purpose – summarizing longer and more difficult papers. If the summarizations are wrong, that’s obviously concerning, but if the summarizations are right but don’t conform to a particular style, that’s much less concerning, IMHO, and could possibly be corrected through better prompting.

Are you getting your news from AI? You might want to reconsider that…


New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested. The CBC and Radio-Canada were participating organizations.
The actual report is a 69-page PDF, and includes lots of graphs and examples. It’s actually a good read! Interesting to note that this study only focussed on Public Service Media (PSM) organizations, not any commercial news outlets.
Here’s your jaw-dropper:
Overall, 45% of all AI responses were found to have at least one ‘significant’ issue. When including ‘some issues’, 81% of responses have an issue of some form.

Free Course – RDMLA: AI for Librarians


From a press release:
The RDMLA team is thrilled to announce the launch of our newest course: RDMLA: AI for Librarians! Artificial Intelligence is rapidly transforming the landscape of data and information services—and librarians are at the forefront of this change. That’s why we created AI for Librarians: a practical, hands-on course designed to help you build AI competencies in ways that directly support library services and workflows. No prior RDMLA coursework is required—this course is free and open to all learners around the world!
What’s Inside?
RDMLA: AI for Librarians introduces you to the fundamentals of AI while emphasizing ethical, responsible, and library-focused applications. The content is rooted in real-world scenarios you’ll encounter in library practice.
We’re launching today with multiple brand-new units:

AI Tools for Library Research – Explores how generative AI tools can support library research questions in innovative ways.
AI Ethics – Examines key ethical challenges, applies frameworks for responsible AI use, and outlines policy recommendations for AI integration in libraries.
AI Use Cases – Showcases how AI can streamline library workflows and enhance user services.
And this is just the beginning; stay tuned for more AI Use Cases to be added in 2026!
Thanks to the generous sponsorship of Elsevier, RDMLA: AI for Librarians is completely free and available worldwide. All materials are hosted on the Canvas Network under a CC-BY-NC-SA license.
To access the course, please enroll via:
https://www.canvas.net/browse/simmonsu/courses/rdmla-ai-for-librarians
Note: RDMLA is the Research Data Management Librarian Academy. More info at
https://rdmla.github.io/

ScienceDirect is marketing AI directly to students and researchers?


Very interesting – I just found myself looking at an article in ScienceDirect, and was presented with a large panel touting an AI Reading Assistant, something I know we don’t subscribe to at the U of Calgary. I wondered if maybe we DID subscribe to it, but it hadn’t yet been announced. After signing in to my personal account, I noticed I was already down to 4 articles remaining until December 19, which certainly wasn’t a behaviour consistent with a subscription! I was then led to https://researcher.elsevier.com/, which was first captured by the Wayback Machine last month, on October 7, 2025.
There, I find that I can personally subscribe to ScienceDirect AI for US $25 / month or US $249 / year. There’s also a link to get in touch with sales for institutional access; I wonder how many hits that gets.
The FAQs go on to explain that purchasing SDAI won’t actually get you access to any additional content you don’t currently have. “Access to underlying content depends on your institution’s existing ScienceDirect subscriptions or any individual content purchases you make.” And, “…you can use ScienceDirect AI’s features regardless of your institution’s content access. However, the Reading Assistant tool is only available on documents you are entitled to access.” So that’s a strange potential mish-mash of institutional and personal accounts.
Do any other academic vendors market services directly to the end user? I can’t recall seeing anything like this before!

Mita’s observations on gatekeeping


When I saw the title of Mita’s most recent post, The internet was designed to route around gatekeepers, I was expecting and hoping for a cool example or tool along the lines of Jump the Paywall. Alas, her post is actually a sober reminder that “publishers don’t need academic libraries to reach faculty or students anymore.” This, of course, isn’t new, but I have to admit, even after posting one of her examples, I hadn’t really put all of what she describes together for myself.
She goes on to suggest that libraries might consider subsidizing reviewers rather than covering article processing fees for open-access titles. The idea is that if professionals were paid rather than volunteering, the whole peer-review process might move a little faster. I like that idea.
But what I *really* took away from her post was the reminder of something those outside the profession probably don’t know about us: how seriously we take patron privacy, and how, in going directly to the consumer (student, faculty), publishers can learn an awful lot more about users than they ever could when users were consistently anonymized behind a proxy server.
Ehh, I’m sure that’s just fine.