"How does an artificial intelligence feel? Is it capable of experiencing emotions such as fear or surprise? The question sounds absurd in a world where machines are seen as cold, calculating objects. Yet in the field of artificial intelligence research, some scientists are exploring the possibility of giving these tools a form of 'feeling,' or emotional experience.
"This concept is known as 'feeling rules,' and it could revolutionize our understanding of what it means to be intelligent. But what are the implications for the development of artificial intelligence? Could machines one day emulate the complexity of our emotions?"
What are "feeling rules" in the context of machine learning?
In machine learning, "feeling rules" are algorithms designed to capture the emotional associations and sentiments attached to certain words or concepts. These rules allow artificial intelligence (AI) systems to grasp the complexity of human emotion in language, improving accuracy in tasks such as text classification and content recommendation.
For example, a "feeling rule" might encode that the word "happy" carries a positive sentiment, while the word "sadness" carries a negative one. This lets the AI system produce responses that are more relevant and better suited to the emotional context of a text.
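To make this concrete, here is a minimal sketch of what such a rule could look like in its simplest form: a lexicon that maps words to polarity scores. The vocabulary, scores, and function name below are invented for illustration; real systems typically rely on curated lexicons (such as VADER or SentiWordNet) or trained models.

```python
# A minimal sketch of a lexicon-based "feeling rule": each entry maps a word
# to a sentiment polarity score between -1 (negative) and 1 (positive).
# The vocabulary and scores are invented for illustration only.
FEELING_RULES = {
    "happy": 1.0,
    "joy": 0.8,
    "calm": 0.4,
    "fear": -0.7,
    "sadness": -0.9,
}

def score_text(text: str) -> float:
    """Average the polarity of the known words in `text` (0.0 if none match)."""
    words = text.lower().split()
    scores = [FEELING_RULES[w] for w in words if w in FEELING_RULES]
    return sum(scores) / len(scores) if scores else 0.0

print(score_text("I feel happy and calm"))  # 0.7  -> positive
print(score_text("fear and sadness"))       # -0.8 -> negative
```

A real system would also need to handle negation ("not happy"), intensity, and context, which is why simple lexicons are usually combined with statistical or neural models.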
These rules are not limited to text: they can also be applied in contexts such as image classification or speech recognition, allowing AI systems to understand and respond appropriately to human emotional signals.
How are these rules applied in different sectors and industries?
How "feeling rules" are applied in a given sector or industry depends on the context and purpose. In the legal field, for example, they can be used to evaluate the decision-making of an artificial intelligence (AI) system in complex legal scenarios.
In the financial sector, "feeling rules" could help AI systems better understand human emotions and behavior in the market, potentially improving investment decision-making.
Why is it important to consider emotions and feelings when implementing machine learning algorithms?
The use of machine learning algorithms in areas such as healthcare, education, and national security has grown significantly in recent years. Yet a crucial aspect is often overlooked: human emotions and feelings.
Taking emotions and feelings into account when building algorithms is essential to ensuring that these tools are fair, equitable, and respectful of human diversity. For example, an algorithm that assesses a student's ability may penalize a child who, because of their socioeconomic environment, has not had access to the same learning opportunities. This could deepen inequality in the education system.
By considering feelings and emotions, we can design more humane and inclusive algorithms. For example, a disease-detection algorithm can be trained to recognize signs of anxiety or stress that may be linked to a specific medical condition, allowing healthcare professionals to offer more personalized and effective treatment. By addressing emotions and feelings in algorithm design, we can build technology that is not only efficient but also respectful of human beings.
In the near future, the ability of artificial intelligence systems to experience emotions could revolutionize how we interact with technology. We might see more humanized, empathetic applications in fields such as medical care, personalized learning, and even artistic creation. But before that happens, we should reflect on the ethical implications of creating machines that "feel." What does it really mean to be intelligent if we cannot tell a genuine emotion from a simulated one? And what happens when these emotions are used to manipulate or influence people? It is time to start exploring these questions and to consider how we want the relationship between technology and emotions to evolve.
Source:

In Brief

Since the public debut of ChatGPT in November 2022, the calls for librarians to adopt and promote generative AI (GenAI) technologies and to teach AI literacy have become part of everyday work life. For instruction librarians with reservations about encouraging widespread GenAI use, these calls have become harder to sidestep as GenAI technologies are rapidly integrated into search tools of all types, including those that libraries pay to access. In this article, I explore the dissonance between, on the one hand, instruction librarians' pedagogical goals and professional values and, on the other, the capacities, limitations, and costs of GenAI tools. Examining discourse on GenAI and AI literacy, I pay particular attention to messages we hear about the appropriate ways to think and feel about GenAI. These feeling rules often stand in the way of honest and constructive dialogue and collective decision making. Ultimately, I consider work from within and outside librarianship that offers another view: that we can slow down, look honestly at GenAI capacities and harms, take seriously the choice some librarians may make to limit their GenAI use, and collectively explore the kinds of futures we want for our libraries, our students, fellow educators, and ourselves.

By Andrea Baer

At the April 2025 Association of College & Research Libraries Conference, academic library workers gathered in person and online to explore the theme "Democratizing Knowledge + Access + Opportunity." Before sessions about how to integrate generative AI (GenAI) tools into essential public services like teaching and research services, sociologist and professor of African American Studies Ruha Benjamin offered the opening keynote. Articulating the resonance of the conference theme for her, Benjamin reflected, "One way to understand the stakes of this conference, … why it's so vital that we work in earnest to democratize knowledge, access, and opportunity at a moment when powerful forces are working overtime to monopolize, control, and ration these social goods, is that this is a battle over who gets to own the future, which is also a battle over who gets to think their own thoughts, who gets to speak and express themselves freely, and ultimately who gets to create" (Benjamin, 2025).

Noting that technologies are never neutral but rather reflect the values (or lack thereof) of their creators, Benjamin drew a connection between current attacks on libraries and higher education and a category of technology that was prominent throughout the conference program: artificial intelligence. "[I]t should give us pause," she asserted, "that some of the same people hyping AI as the solution to all of our problems are often the ones causing those problems to begin with." Applause followed. Though Benjamin did not name the prominence of AI across conference sessions, I was probably not the only person to notice the contrast between Benjamin's critique of AI hype and the prevalence of conference sessions about promoting AI technologies and AI literacy in libraries.

As Benjamin continued, she turned to the chilling words of JD Vance at the 2025 Paris AI Summit: "Our schools will teach students how to manage, how to supervise, and how to interact with AI-enabled tools as they become more and more a part of our everyday lives." As I listened, I thought these words could easily be mistaken as part of a talk on AI literacy by an academic librarian or educator with better intentions. I wondered how many others were thinking the same thing.
Benjamin then reminded the audience of Vance's ideological perspective, as she observed that in January 2021 Vance gave a speech at the National Conservatism Conference entitled "The Universities are the Enemy," in which he argued that universities must be aggressively attacked to accomplish his and his audience's goals for the country (Vance, 2021).

It's worth taking a brief step away from Benjamin's keynote to point out that a couple of weeks after her talk, on April 23, President Donald Trump issued an executive order to promote AI literacy through a new White House Task Force on AI Education that will establish public-private partnerships to provide resources for K-12 AI education, both to enhance AI-related education but also to better utilize AI tools in education generally. The executive order's Fact Sheet states that AI is "rapidly transforming the modern world, driving innovation, enhancing productivity, and reshaping how we live and work." Thus, "[e]arly training in AI will demystify this technology and prepare America's students to be confident participants in the AI-assisted workforce, propelling our nation to new heights of scientific and economic achievement" (The White House, 2025). This laudatory language about AI is perhaps unsurprising for an administration that established the Department of Government Efficiency (DOGE). DOGE purportedly aims to reduce government waste, fraud, and abuse, largely through eliminating government jobs and replacing workers with a combination of automation and tech workers who have been directed to violate digital privacy rights and regulations (Klein, 2025; Salvaggio, 2025). What is perhaps more striking is the similarity between the White House's rhetoric and that of many educators in universities and academic libraries. Can the difficulty of distinguishing between the dominant AI rhetoric in higher education and that from political leaders who have explicitly named universities as the enemy be a wake-up call for people in higher education and in libraries, a message that we need to give more weight to the ethical concerns surrounding GenAI technologies? [1]

Benjamin did not dwell long in her ACRL keynote on Vance's vision for GenAI and AI literacy. Instead, she devoted most of her time to exploring imagination as a powerful means through which to envision the kinds of worlds we want to live in and to begin building. As she noted, imagination can envision dystopian futures, but it can also open more hopeful possibilities for the futures we want. "What if we took imagination seriously?" she asked. "Not as flights of fancy, but imagination as a resource, a capacity, a muscle? How might the powers of our collective imagination begin to transform the world around us?" (Benjamin, 2025).

Here Benjamin articulated what I believe many in the academic library community have been thinking and feeling in the last few years, as pressure to integrate GenAI tools into library systems and library work has intensified, often accompanied by brief and perfunctory acknowledgements of GenAI's present and potential harms that are then set aside. Benjamin was inviting us to imagine alternatives to the narrative that GenAI technologies are the inevitable future of nearly all intellectual work. As I will explore, this process of imagining can include critically examining discourses about GenAI and AI literacy, as well as being curious about and attentive to our own affective experiences in response to GenAI technologies and discourses about them.
If we accept this invitation to imagine, we might (re)discover what falls out of view when so much of our attention is focused on a particular vision of the future of GenAI proliferation. We might widen our ideas of what is possible and nurture some sense of collective agency to work for the kinds of futures we want.

Of course, our individual imaginings and feelings don't always match what a majority (real or imagined) appear to share. My own conceptions of, approaches to, and feelings about GenAI and AI literacy usually seem out of sync with the dominant discourse in higher education and librarianship (though with time I learned I have some company). Like many others, I am deeply concerned about the real and present costs of GenAI technologies that are rapidly being integrated into search and library tools. I am also unsettled by widespread overconfidence in these technologies' abilities to generate mostly reliable information and to support research and learning. Both as a librarian and more recently as a professor of practice, I have struggled with how to understand and respond to the enthusiastic calls in higher education and academic librarianship for teaching a version of AI literacy which requires that educators and students use these tools, while giving limited attention to ethical questions surrounding GenAI.

So often calls to teach AI literacy contribute to AI hype by misrepresenting GenAI's capacities, minimizing acknowledgement of its harms, and implying that critique of GenAI stands in the way of human progress. We frequently hear that GenAI technologies are the inevitable future of the world and of libraries and that the only viable option is to embrace them, and quickly, before we fall behind. This message of urgency fits into an older narrative that libraries must embrace technological change or otherwise become obsolete (Birdsall, 2001; Glassman, 2017; Espinel and Tewell, 2023).

Through this article, I hope to further encourage what Benjamin recommends: rather than rushing to adopt and promote new technologies whose ethical implications raise major questions, we might slow down and claim more time and space for considering the present and potential implications of GenAI adoption and use. That time and space is necessary for the more expansive collective imagining that Benjamin proposes, imagining that takes into consideration the power and social structures that currently exist and those that we want to exist. Of course, what we imagine to be desirable or possible is heavily shaped by our environments, social relationships and interactions, and the ideas and messages we encounter every day. Making space for imagination therefore also means making space for individual and collective inquiry that includes establishing agreed-upon facts about GenAI, rhetorical analysis of GenAI discourses, and critical reflection on our own thoughts and feelings about GenAI technologies and discourses. Being inclusive and expansive in imagining the futures we want also requires investigating the social expectations and pressures that often influence what we do and do not say in various professional circles. With these things in mind, in this article I consider the dissonances between what we know about the limitations and harms of GenAI technologies and the imperatives we hear to adopt them.
Can we reconcile the tensions between, on the one hand, the harms of GenAI technologies and, on the other, professional values like those articulated in the ALA Core Values of Librarianship, which include equity, intellectual freedom and privacy, public good, and sustainability (American Library Association, 2024)? And if we could magically resolve many of those tensions through a radically transformed AI infrastructure that is environmentally sustainable and does not depend on the exploitation of human labor, what might we lose when we offload cognitive tasks like searching for, selecting, reading, or synthesizing sources to GenAI technologies? What do we value about education and information literacy practices that need to be preserved with foresight and intention?

Because my work as a librarian and as an educator centers on teaching and learning, I am especially interested in how we conceptualize and approach teaching what is often called AI literacy. A necessary step in this process of imagining is investigating the messages embedded in much of academic and library discourse about GenAI technologies and the appropriate ways to think and feel about them (what sociologist Arlie Hochschild might call feeling rules). For instruction librarians, this process includes examining conceptions and framings of AI literacy and its role in information literacy education. A critical analysis of this discourse can help us open conversations about what we want information literacy instruction and library search tools to look like and do. This inquiry can also help us identify ways we have choice and agency in our own use of and teaching about GenAI tools.

After an initial consideration of the feeling rules of GenAI and dominant discourse on AI literacy, I finally consider alternative ways to think about GenAI and to respond to calls for widespread adoption. Looking to work from within and outside librarianship, I consider another view: that we can slow down; take time to look honestly and critically at what we know, think, and feel about GenAI and its impacts; and consider ways to work toward the kinds of futures that align with our professional values. Part of this process is allowing space for more critical and skeptical perspectives on and feelings about GenAI, including nuanced arguments for AI refusal, a term I unpack in more detail later.

Feeling Rules

An impetus for my writing is that I see in much of our professional discourse and interactions a tendency to dismiss or minimize critiques of GenAI technologies, and sometimes even a shaming of those "Luddites" who do not embrace the technological changes of the day. [2] As I have argued elsewhere and further consider in this article, an especially powerful strategy for shutting down critique of GenAI is the construction and imposition of feeling rules: social expectations about the appropriate ways to feel and display emotion in a given context (Hochschild, 1979, 1983; Baer, 2025). Feeling rules, as first described by sociologist Arlie Hochschild, are social norms that prescribe what feelings are and are not appropriate to have and express (Hochschild, 1979). Though feeling rules are not confined to the workplace, they are a powerful part of the emotional labor we do in our places of employment (Hochschild, 1983).
[3] Feeling rules are typically discussed in the context of specific moments of social interaction among individuals, while in this article I apply them to our social relationships on both a micro- and a macro-level: that is, as evident not only in discrete individual social interactions but also in discourse about GenAI technologies that is informed by social relationships. While feeling rules are usually established by those in positions of power, they are often internalized by those for whom the feeling rules are intended (Hochschild, 1979, 1983).

In the case of GenAI, messages that librarians should be enthusiastic and optimistic about technological changes, which are frequently described as inevitable, often imply or outright assert that those who question or resist certain technological developments are simply overwhelmed by irrational fear and anxiety that they need to overcome. Giving too much attention to those unpleasant emotions or their underlying thoughts, the discourse often goes, risks making the profession obsolete.

Many of the feeling rules we observe in librarianship or higher education, of course, are influenced by social conditions and norms that extend beyond them. Search for the term "AI anxiety" and you will find articles explaining it as a psychological condition to be overcome by integrating AI technologies into your everyday life (Comer, 2023; Cox, 2023; Okamoto, 2023). The antidote to AI anxiety, according to its experts: accept and embrace the technology. For example, in the BBC article "AI Anxiety: The Workers who Fear Losing their Jobs to AI," PricewaterhouseCoopers (PwC) Global AI and Innovation Technology Leader Scott Likens explains, "In order to feel less anxious about the rapid adoption of AI, employees must lean into the technology. … Instead of shying away from AI, employees should plan to embrace and educate" (Cox, 2023).

But what if emotional responses like AI anxiety are in large part deeply intelligent, a recognition of the unsettling facts about how most GenAI tools currently are built and deployed and what harmful impacts they already have? What if the cognitive dissonance that many of us experience when reading articles about AI anxiety or the necessity of AI adoption is worth our attention and curiosity?

There is a stark mismatch between, on the one hand, imperatives to rapidly adopt and promote GenAI technologies and, on the other, the extensive documentation of the unethical labor practices upon which GenAI is built, as well as GenAI's detrimental impacts on the environment, local communities, and society more broadly (Bender et al., 2021; Crawford, 2024; Electronic Privacy Information Center, 2023; Nguyen & Mateescu, 2024; Shelby et al., 2023). Despite this, librarians who are reluctant to adopt GenAI are frequently described as regressive and even harmful to a profession that must adapt to remain relevant. This shaming closes off open dialogue and critical thought.

For many librarians who teach, the calls to adopt GenAI, promote its use, and teach a kind of AI literacy that encourages others to do the same add to this dissonance. We repeatedly hear that GenAI is the future of work and the university and that we must therefore embrace it in our own work and teaching, regardless of our own views. Projects and initiatives at our places of employment and in our professional associations urge us to use these tools robustly, partly so we can help students, faculty, and community members keep up and succeed in our ever-changing world.
Library vendors and the technology companies that libraries and universities pay for services and subscriptions continue to integrate GenAI tools into their platforms, usually offering people little to no choice in whether they use these extractive tools (though perhaps it's time that libraries demand more choice from our vendors). The apparent lack of criticality toward these vendors and companies is further perpetuated by the refrain that librarians must teach the AI literacy skills that students and researchers will inevitably need.

When we do hear about the problems with GenAI technologies, like the persistent inaccuracies in information generated from large language models (LLMs), or the extensive list of GenAI's environmental and societal harms, reservations are usually a short footnote, followed by a call for ethical GenAI use that sidesteps the fact that using GenAI technologies in their current forms inevitably means adding to their harmful impact. While some AI technologies may be beneficial in specific domains and justified for narrow use cases, for example machine learning in some instances of medical diagnosis and drug discovery (Ahmad et al., 2021; Suthar et al., 2022), they are now being integrated widely and indiscriminately across domains, including in areas where they often hinder human thought more than they support it. As Shah and Bender argue, the LLMs being integrated into library search systems that supposedly save people precious time may actually prevent the exploration, discovery, and information literacy development that these resources have long been meant to enable (Shah & Bender, 2024). Their argument is further supported by accumulating research on the detrimental effects of the cognitive offloading of tasks to GenAI (Gerlich, 2025; Shukla et al., 2025).

I see detrimental impacts of GenAI reliance directly in my own work teaching academic research and information literacy. Increasingly, a large portion of students have turned to GenAI to do nearly all the cognitive work that previously would have taken them so much time and effort. In the process, many if not most of these students are not developing the critical thinking and writing skills that have long been considered foundational to higher education. I also see a smaller group of students who are deeply concerned about the costs of GenAI and who are choosing the more labor-intensive path of developing and articulating their own thinking, rather than immediately turning to chatbots. The latter group is learning far more, and is far better prepared for the workplace and meaningful participation in society more broadly.

The contrasting perspectives and behaviors of my students reflect that students' views and uses of GenAI are, like ours, not monolithic. And also like us, students hear many of the same simplistic messages: that GenAI is an amazing technology that will make work faster and easier and that the only way to be prepared for the workplace and relevant in the world is to embrace GenAI.

In academic libraries, those who want to take a slower and more cautious approach to GenAI are frequently criticized as holding the profession back, resisting the inevitability of technological change, inhibiting progress, neglecting to prepare students for the future, and denying reality. Such criticisms have a silencing effect, discouraging people from expressing their legitimate concerns about a technology that in the widest circulating discussions is surrounded by more hype than critical investigation.
But when we can free ourselves of shaming rhetoric, we are better positioned both to support one another as respected colleagues and to think critically, and imaginatively, about how we want to engage with and teach about GenAI technologies. Given the prevalence of hype and misunderstandings surrounding GenAI, unpacking discourse on GenAI and AI literacy is a powerful and necessary part of this work.

Rhetorics of AI Literacy

Calls for embracing GenAI in higher education and academic librarianship are frequently accompanied by declarations that AI literacy is one of the most essential skills that students must now develop to be prepared for the workforce and for the future in general. Definitions of AI literacy and related competencies regularly add to the AI hype that Benjamin cautions against, as they repeatedly misrepresent GenAI's abilities, mandate GenAI adoption, and reinforce the message that GenAI is the inevitable future which we must therefore embrace through adoption and active use. Like GenAI discourse more broadly, AI literacy rhetoric often includes brief asides to consider the potential risks of AI technologies to ensure they are used ethically and responsibly. Like a perfunctory checklist, these acknowledgements rarely offer a meaningful examination of the extensive harms of GenAI, nor do they confront the reality that more ethical use will only be possible with radical changes to GenAI technologies and their infrastructures. With the emphasis on adoption and use, this discourse leaves little to no room for considering the possibility of non-use or critical examination of use cases that might not warrant AI use.

Consider, for example, the AI Literacy Framework developed by academic and technology teams at Barnard College. Based on Bloom's taxonomy, it is composed of four levels: 1) Understand AI, 2) Use and Apply AI, 3) Analyze and Evaluate AI, and 4) Create AI. Here, using AI precedes considering critical perspectives on AI, such as ethical concerns. After students have engaged with level 3, where they "Analyze ethical considerations in the development and deployment of AI," the next level (4) mandates creating more of these technologies (Hibbert et al., 2024).

Stanford University Teaching Commons' AI literacy framework, which emphasizes human-centered values, similarly begins with developing a basic understanding of AI tools, in part through AI use ("functional literacy"). Following functional literacy is "ethical AI literacy," which involves understanding ethical issues related to AI and practices for the responsible and ethical use of AI tools. Again, non-use is not presented as an option. Instead, the framework authors explain, "You and your students can identify and adopt practices that promote individual ethical behavior and establish structures that promote collective ethical behavior" (Teaching Commons, Stanford University, n.d.). [4]

As these AI literacy frameworks suggest, much of the literature on AI literacy reflects a strange mixture of the AI inevitability narrative, superficial acknowledgement of ethical concerns, and AI hype that frames GenAI as a transformative force that will better society. AI literacy frameworks created within librarianship frequently share these characteristics. ACRL President Leo Lo's 2025 "AI Literacy: A Guide for Academic Libraries" is one such influential document. It is described as a guide to AI literacy that addresses technical, ethical, critical, and societal dimensions of AI, preparing learners to thrive in an AI-embedded world.
In this new world, librarians can become key players in advancing AI literacy as technology shapes the future (Lo, 2025, p. 120). What that future looks like, or what we want it to look like, is not discussed. Like other AI literacy frameworks, Lo's guide predicates AI literacy on AI use, as the document defines AI literacy as the ability "to understand, use, and think critically about AI technologies and their impact on society, ethics, and everyday life" [my emphasis] (Lo, 2025, p. 120). As with the previously mentioned AI literacy frameworks, this document presents AI as pervasive and socially beneficial, while omitting a meaningful examination of the material conditions on which creating and using these technologies currently rests. At various points, the guide briefly notes the need to consider the limitations and ethics of GenAI tools, statements that are quickly followed by an emphasis on AI adoption and promotion that supports the common good, social justice, and empowerment.

Consider, for example, the section on the societal impact of AI on the environment and sustainability:

"While AI remains resource-intensive with a notable environmental footprint, discussions on sustainability should encompass more than just reducing consumption. The real potential lies in using AI to drive systemic changes that promote social and environmental well-being. For example, AI can optimize energy management in cities, creating smarter, more sustainable urban environments. It also has the capacity to revolutionize agricultural supply chains, increasing efficiency, reducing waste, and supporting sustainable practices across production and distribution. By integrating sustainability into the societal dimension of AI literacy, we can better understand AI's role not just as a technological advancement, but as a force capable of reshaping our economic, social, and environmental landscapes for the better." [my emphasis] (Lo, 2025, p. 122)

Here, a minimization of the costs of AI coexists with an idealization of a future made possible by AI. No references are made to the water-thirsty and energy-hungry data centers rapidly being built to power GenAI, or how these data centers disproportionately harm economically disadvantaged communities and areas that are especially prone to drought (Barringer, 2025). If such harms seem like a distant problem that does not affect most of us, we are likely to be proven wrong. For example, in my current home of Austin, Texas, which is prone to both drought and power grid failures, data centers are big business (Buchele, 2024).

The influential role of Lo's AI Literacy Guide is further reflected in another key ACRL effort to promote the integration of AI in academic libraries: the ACRL AI Competencies for Academic Library Workers (AI Competencies) (ACRL AI Competencies for Library Workers Task Force, 2025). The first draft, published online this past March, builds on Lo's AI Literacy Guide. Like Lo's AI Literacy Framework, the AI Competencies document does not consider whether GenAI tools are the optimal technologies for information literacy education, library research, or critical inquiry. While Lo's aforementioned AI Literacy Guide is apparently designed for library instruction, the AI Competencies document concentrates on the abilities that library workers should possess.
Despite this different focus, the task force also associates their work with information literacy and notes early in the document that while developing the competencies, they "recognized significant parallels between responsible AI use and the principles of critical information literacy, as outlined in documents like the ACRL Framework for Information Literacy for Higher Education" (p. 1). This suggests the potential relevance of the document to librarians' instructional work.

Before engaging in a closer examination of the AI Competencies' first draft, I should stress that upon releasing the document the authors solicited feedback from the library community to inform future revisions. At the Generative AI in Libraries (GAIL) Conference this past June, the task force co-chairs shared the feedback they received and the kinds of revisions they plan to make (Jeffery and Coleman, 2025). Much of that feedback mirrors my own concerns about common conceptions of AI literacy that I have discussed thus far, conceptions that are reflected in the AI Competencies' first draft as well. A considerable number of responses challenged the implications that library workers must use AI, that AI literacy necessitates AI use, and that responsible GenAI use is possible. Some also commented that the document did not adequately acknowledge GenAI technologies' harms and that the description of AI dispositions (which I discuss in more detail momentarily) was not appropriate for a competencies document. The task force's receptiveness to this input, which contrasts with the professional discourse about GenAI that I previously observed, suggests that many in our profession may be eager and now better positioned for more open and honest conversations about GenAI technologies than in the earlier days of learning about them.

Regardless of how the final draft of the AI Competencies document develops, the dispositions outlined in the first draft are worth closer attention because of the feeling rules about GenAI that they imply (for example, the expectation that competent library workers will embrace GenAI technologies and feel positively about them). [5] As the AI Competencies task force explains, the document's dispositions highlight the importance of curiosity, adaptability, and a willingness to experiment with AI tools (pp. 2-3). Library workers who demonstrate the appropriate AI literacy dispositions:

- "Are open to the potential of responsible human-AI collaboration to unlock a future of greater equity and inclusion,"
- "Seek uses of AI that center and enhance human agency rather than displace and inhibit it," and
- "Pursue continuous professional reflection and growth, especially concerning ethical and environmental responsibilities" (p. 3).

Implicit within these dispositions is the belief that use of AI tools in their current form can lead to greater equity and can enhance human agency rather than displacing it. The document does not discuss actions or responses one might take in light of the harmful impacts of GenAI technologies. Instead, questioning whether AI tools should be used appears antithetical to the AI competencies articulated in the document. Like many other AI literacy frameworks and guides, this document implies that reflection is sufficient for demonstrating the correct AI competency dispositions. Such rhetoric, not unique to this document, obfuscates the reality that people have limited control over or insight into what the AI companies that own most AI tools do to build and maintain them.
When AI literacy documents assume GenAI use and come to dominate conversations about GenAI in academic libraries and higher education, or even become codified through formal adoption by institutions or organizations, how does this position library workers and educators who disagree with the assumptions embedded within those documents? Should these individuals be considered AI illiterate, in need of developing proper GenAI practices, attitudes, and dispositions? Through the lens of these documents, resisting rapid adoption of GenAI tools or questioning their value might be considered incompetence, regardless of how well informed or thoughtful someone's perspective on GenAI is.

The AI Competencies' first draft provides a window into many of the feeling rules about GenAI currently circulating in academic librarianship. Fortunately, they ultimately may not be codified in the final version. The task force's honesty and critical reflection about the academic library community's feedback, including questions about the appropriateness of including AI dispositions, is evidence that feeling rules and the narratives that help to drive them are never fully solidified and are rarely universally accepted. Feeling rules are often sites of contestation. Moreover, they can shift and change as we learn more and as we engage in critical reflection and dialogue.

New Imaginings for Responding to GenAI

As the critical feedback on the AI Competencies suggests, alternatives to the dominant AI literacy discourse and its implied feeling rules exist, even when those different viewpoints are harder to find. As some educators demonstrate, when we challenge the feeling rules embedded in much of the higher education and library GenAI discourse, we can open new possibilities for thinking about and responding to calls for GenAI adoption and AI literacy instruction that promote this adoption. We can begin to imagine ways of acting that might be out of view when we are mired in a particular set of feeling rules about GenAI (rules that have largely been constructed by the tech companies that stand to profit from the continued use and growth of their data-extracting products).

Charles Logan is among the educators going against the grain of AI enthusiasm and inviting us to think differently about common conceptions of AI literacy. Building on Nichols et al.'s (2022) work on the limits of digital literacy, Logan interrogates the extent to which AI literacy is even possible, given GenAI's opaqueness and the hegemonic systems on which these technologies are built (Logan, 2024; Nichols et al., 2022). Noting the assumption of AI use in AI literacy discussions, Logan cautions, "An AI literacy devoid of power analysis and civic action risks becoming a talking point for Big Tech, and … a means for corporations like OpenAI and Google to set the terms of how educators and students think about and use their chatbots" (Logan, 2024, p. 363). Instead, Logan proposes a more heterogeneous approach to generative AI that allows room for non-use and critical inquiry into GenAI. One pedagogical response is mapping ecologies of GenAI that illuminate its "social, technical, and political-economic relations" (Logan, 2024, p. 362). For example, Logan describes a classroom mapping activity developed by Pasek (2023), in which students locate a nearby data center and investigate questions such as, "What potential land use, energy, or water conflicts might exist because of the data center?" and "Who benefits from the data center being here? Who loses?"
(Pasek, 2023, cited in Logan, 2024, p. 366).

Drawing from the work of educators and scholars like Logan, librarian Joel Blechinger pays particular attention to dominant framings of AI literacy, which are connected to a longer tradition of presenting literacy as an antidote to intractable social issues and structural problems. Reiterating the question of whether AI literacy is possible, Blechinger asks librarians, "to what extent are efforts to theorize (and proclaim a new era of) AI Literacy premature? Do these efforts instead reflect our own professional investment in the transcendent power of literacy (what Graff & Duffy (2014) have termed 'the literacy myth') more than the applicability of literacy to GenAI?" Similar to Logan, Blechinger proposes that one alternative pedagogical approach could be to draw from a politics of refusal, rather than assuming AI use (Blechinger, 2024).

While some may have a knee-jerk negative response to the term refusal, the concept is more nuanced than one might first think. Writing and rhetoric scholars and teachers Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes, who authored "Refusing GenAI in Writing Studies: A Quickstart Guide," describe GenAI refusal as encompassing "the range of ways that individuals and/or groups consciously and intentionally choose to refuse GenAI use, when and where we are able to do so." Such refusal, they write, is not monolithic, nor does it imply a head-in-the-sand approach to these emergent and evolving technologies. Moreover, refusal does not necessarily imply the implementation of prohibitive class policies that ban the use of GenAI among students (Sano-Franchini et al., 2024).

This conception of GenAI refusal is aligned with the work of scholars like Carole McGranahan, who explains in "Theorizing Refusal: An Introduction" (2016) that "[t]o refuse can be generative and strategic, a deliberate move toward one thing, belief, practice, or community and away from another. Refusals illuminate limits and possibilities, especially but not only of the state and other institutions." Such a politics of refusal, embedded in the fields of critical and feminist data studies, can be a source for imagining new possibilities, while being informed about the material conditions that underlie and shape technologies and technological use (D'Ignazio, 2022; Garcia et al., 2022; Zong & Matias, 2024).

Sano-Franchini, McIntyre, and Fernandes's act of refusal, supported by an extended analysis of GenAI's material impacts on society in general and on writing studies and higher education more specifically, can also be understood as a refusal to accept the feeling rules implied in so much of the discourse on AI literacy. The authors present ten premises on which they ground refusal as a reasoned disciplinary response to GenAI technologies. The first of these, "Writing studies teacher-scholars understand the relationship between language, power, and persuasion," is especially relevant to considering the feeling rules that drive much of generative AI discourse in higher education and in libraries. The authors observe that the metaphors often applied to these technologies obscure the human labor that goes into GenAI training and ascribe human abilities to these technologies in ways designed to cultivate trust in corporate, exploitative, and extractive technologies.
I would add that the messages we hear from our employers and other educators positioned as experts in GenAI and AI literacy further encourage us to trust these technologies over our reservations about them. Instead, Sano-Franchini, McIntyre, and Fernandes write, "We must be critical of the ways that these metaphors and affective associations are used to exaggerate the abilities of these products in ways that strengthen the marketing efforts of Big Tech corporations like OpenAI." With this criticality, writing studies scholars can use language that most accurately (and transparently) reflects the actual technology and/or "… [highlight] the discursive limitations of the language … [they] commonly use to describe these products."

The authors draw attention to the economics behind GenAI and the ways it is promoted and marketed. Asking us to examine who truly benefits from the increased use of GenAI in higher education, they note that people in the EdTech industry have largely shaped this discourse (for example, articles in InsideHigherEd and The Chronicle of Higher Education written by individuals with close ties to the EdTech industry). Such examinations of the language used to discuss GenAI in higher education help to illuminate what usually goes unspoken in that discourse.

Sano-Franchini, McIntyre, and Fernandes's critical examination of GenAI can be seen not just as a refusal to adopt GenAI technologies into their teaching. It is also a refusal to follow the feeling rules behind much of GenAI discourse. It is refusing to be shamed or to doubt oneself for having concerns about the value, ethics, and potential impacts of the GenAI technologies being so heavily promoted at our institutions. The authors choose critical thought over compliance with mandates that stifle critical inquiry and dialogue. Regardless of whether an individual or a group adopts a stance of GenAI refusal (a position that the authors stress looks different in practice for each individual in their context), examining and questioning the feeling rules implicit in much of GenAI discourse better enables us to make more intentional and informed choices about how we do or do not use these technologies and how we teach about them.

Examples of librarians challenging the feeling rules of dominant GenAI discourse exist, even when they are outliers. ACRL's invitation to Ruha Benjamin to give the 2025 conference keynote is just one example of an interest within our profession to hear more critical perspectives. Library workers' feedback on the ACRL AI Competencies for Academic Library Workers is another. Some librarians are also vocalizing a need for slower and more critical investigations into GenAI tools, even when doing so risks social ostracism.

In the April 2025 issue of College & Research Libraries News, Ruth Monnier, Matthew Noe, and Ella Gibson candidly discuss their concerns about the GenAI tools that are increasingly being used and promoted in their organizations. Drawing attention to both the hype and the many ethical questions surrounding GenAI, they note the unpopularity of expressing reservations about adopting GenAI into libraries. Noe reflects, "the hype cycle is real and here it often feels like the choices are to get on board or lay down on the tracks to become part of a philosophy joke."
Monnier concurs: "I agree it is weird how fast universities, corporations, and individuals have pushed for the adoption and usage of generative AI, especially in the context of the rhetoric about how bad social media and cellphones are within a K-12 environment. What makes this technology so unique or special that we as a society feel the immediate need to use and adopt it compared to other previous technologies?" (Monnier et al., 2025).

The scope of this article does not allow for a close examination of such work, but additional resources in which library workers challenge the dominant feeling rules of GenAI include Joel Blechinger's "Insist on Sources: Wikipedia, Large Language Models, and the Limits of Information Literacy Instruction" (2024), Violet Fox's zine "A Librarian Against AI" (2024), and Matthew Pierce's "Academic Librarians, Information Literacy, and ChatGPT: Sounding the Alarm on a New Type of Misinformation" (2025). Such work does not deny the fact that GenAI tools exist, nor does it suggest we can or should ignore these tools' existence. It does open space for thinking more critically about the actual capacities and impacts of GenAI and making more intentional and informed choices about how we (dis)engage with GenAI technologies.

Many in our profession likely will not agree with much of what I have said here, but regardless of our individual views of GenAI technologies, I hope we can all agree that we value critical inquiry, and that an essential part of that process is making space for a consideration of varied perspectives and experiences. Critical inquiry and dialogue become possible and richer when we investigate the feeling rules that may be shaping, and sometimes limiting, professional discourse and practice. As we expand critical conversations about GenAI, we have more power to imagine the futures we want to build and cultivate, as Ruha Benjamin invites us to do.

In the spirit of collective imagining, I close with some questions I would like to collectively explore in and beyond libraries. I have organized these questions into two main areas: professional conversations and interactions, and our teaching practices.

Professional conversations:

- How can we be more inclusive of varied perspectives in our conversations about GenAI and related work, as we acknowledge the challenge of speaking honestly when one disagrees with dominant framings of GenAI and AI literacy?
- How can we more critically examine our discourses and dialogues about GenAI, as we identify areas that may be unclear, inaccurate, or based on assumptions that need further investigation?
- How do we practice a culture of care in these dialogic spaces and engage in constructive critique of ideas, not a critique of individuals?
- How do we align our discourse about GenAI and related work with our professional and personal values, including those articulated in the ALA Core Values of Librarianship and the ALA Ethics of Librarianship?
- How do we preserve time and energy for valuable work that may not be centered on GenAI, and that has been deprioritized because of the presently dominant focus on GenAI?

Teaching practices:

- Historically, what have we valued about librarianship and information literacy education that still remains vital to us? How do we continue our engagement with those dimensions of our work?
- What agency do students, faculty, and library workers have in whether/how they choose to use GenAI tools?
- What might it look like for teaching about GenAI technologies to allow for choice in whether and when to u…