A conversation between Maggie Fernandes, Megan McIntyre, Jennifer Sano-Franchini, and Jay Todd on refusing generative AI in writing studies.

Maggie Fernandes (she/her) is an Assistant Professor of Rhetoric and Composition at the University of Arkansas.
Megan McIntyre (she/her) is an Assistant Professor of Rhetoric and Composition and Director of the Program in Rhetoric and Composition at the University of Arkansas.
Jennifer Sano-Franchini (she/her) is the Gaziano Family Legacy Professor and an Associate Professor of English at West Virginia University.
Links for this episode:
Transcript:
Note: This transcript was generated by AI; it has been proofread (by a human) lightly but may still contain errors.
Jay Todd
Welcome to Teaching, Learning, and Everything Else, where we explore issues in the realm of higher education. My name is Jason Todd, and I am the director of the Center for the Advancement of Teaching and Faculty Development and Kellogg Professor of Teaching at Xavier University of Louisiana. In this episode, we explore the thought-provoking Refusing Generative AI in Writing Studies manifesto with its authors, Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes. As generative AI tools like ChatGPT become increasingly embedded in higher education, the authors challenge the assumption that adoption is inevitable or beneficial. They argue for a critical, principled refusal, one that recognizes the power dynamics, labor implications, environmental costs, and potential erosion of linguistic diversity that come with these technologies. Far from being a reactionary rejection of innovation, their stance is rooted in the long-standing values of writing studies: fostering critical thinking, ethical engagement, and student agency. Whether you're an educator grappling with AI policies, a writer concerned about the future of language, or simply curious about how technology shapes the way we communicate, this conversation is essential listening. Stay tuned as we unpack the complexities of generative AI in writing classrooms and why refusal might be a more nuanced and strategic response than we think.
And so we've got the authors of Refusing Generative AI in Writing Studies, and we really just wanted to hear a little bit more about your thoughts on this, and to pose some questions for you all in terms of how some of our faculty in particular might think about some of the ideas that you've expressed here. So if I could start with maybe kind of a basic question, but one that actually a lot of people have asked me when I've mentioned that I was going to be doing this interview: could you explain what refusal really means in the context of generative AI, especially in the realm of writing studies?
Megan McIntyre
My name is Megan McIntyre. I'm the director of the Program in Rhetoric and Composition at the University of Arkansas, and my research focuses on equitable writing classrooms and pedagogies and programs, as well as digital technologies. So I think that's a question we sort of came together on. One of the reasons we started chatting with each other was, I think, trying to answer that question for ourselves. For me, it's about being really well-informed about what these products and technologies are and do, and the consequences for language diversity and the environment, and then having the option and making an intentional choice for me about not using them, based on all of the things I know about their impact on linguistic justice, their impact on language diversity, their impact on the environment, and their impact potentially on critical thinking and learning, which, as a writing program administrator, is something I'm really invested in, right? I think writing programs really are about, yes, developing writing skills and processes, but they're also about developing thinking skills and processes. And I worry about the ways that these products are coming between students and the thinking work, as well as the writing work, that I so badly want for them, because I think it helps them grow as thinkers, as learners, as — you know — cognizant citizens in the world.
Jennifer Sano-Franchini
I'm Jennifer Sano-Franchini. I'm an associate professor at West Virginia University, and my research and teaching interests are in the intersections of cultural and digital rhetorics, Asian American rhetoric, and technical communication. Just to add to that, for me, I think of refusal as the kind of, like, multiple ways that people choose to question and resist the marketing rhetoric and marketing claims about generative AI, and choose to ask questions about, like, what kinds of support for these claims do we have? And as a result of that understanding, oftentimes, making the intentional choice to not use these products when they have the option to do so, whether it's in their teaching, research, or service. For me personally, it doesn't necessarily mean trying to control other people's behavior as much as we want to, you know, raise awareness and have people be more informed about these things. So I think that that's something that is important to highlight. But also, you know, the point that I make is it can mean a lot of different things, but at its core, it's asking these kinds of critical questions about the claims that are being made about these products.
Maggie Fernandes
And I'm Maggie Fernandes. I'm an assistant professor at the University of Arkansas, and my teaching and research interests are in digital and cultural rhetorics, with focuses on generative AI and social media. And I also see that refusal asks us to slow down and to take a minute. Because, you know, when these technologies became really publicly available, there was a lot of panic and hype, and I see both of those orientations as basically doing the same thing, which is looking to the future of these technologies and making a lot of assumptions, when we have time to make choices — and we make better choices when we take our time. So: not redesigning courses based on speculation, not centering a lot of policing approaches. Like Jen was just saying, I'm not really that interested in controlling other people's behavior. I want to talk about what it means to think about these tools as they are right now and how they are affecting a number of different things. And we can have better conversations when we talk about concrete things that we know, which is the present.
Jay Todd
And you all talk in this about — I think you call it the doom/hype fallacy — the scenario where it's either going to ruin everything or it's going to be the cure for everything. Is that a fair definition?
Maggie Fernandes
Yeah, and that line of thinking comes from people like Timnit Gebru, who's been leading the conversation on algorithmic ethics and the ethics of artificial intelligence for a long time. And the point that she really makes about that is that both of those orientations benefit Big Tech, and they don't tell us a whole lot about what we do now. And it kind of serves to — my cat is making this crazy noise — essentially, it keeps us kind of distracted and thinking about the inevitability of these tools instead of what we actually can talk about, which are the unethical ways that they're made, how we can interact with them, what questions we need to be asking, and all that.
Jay Todd
That's great.
Jennifer Sano-Franchini
I was just gonna add that I think a critical point in relation to that is that doom — we understand it, or I understand it, as those types of speculative claims: oh my gosh, it's going to take over, it's going to ruin the world. But it doesn't necessarily mean that we can't identify the current harms of these products. I don't think that's doom. I think that that's, you know, talking about reality and what we see happening right now. So I think that's a distinction maybe to be made in the conversation.
Megan McIntyre
Yeah, I think anything that leans into the inevitability discourses — whether those inevitabilities are really positive (it's general intelligence, right? And it's going to revolutionize X, Y, or Z) or it's going to take over and ruin, you know, A, B, and C — either one of those is really serving an inevitability. And as Maggie said, that sort of future-looking and speculation really only serves to further embed the products themselves at the center of the conversation. And for me, the center of the conversation is not the technology. Like, you know, I've been in the discipline for a bit, and I have read all of the amazing things in Computers and Composition. And, you know, we have written a lot about technology in the discipline, but the ways we write about technology are about centering students, or centering rhetorical choices and goals, or centering purposes or audience. It's not about centering the technology. And I think the hype and the doom require us to continually recenter the technology itself and the companies that are selling it to us. And I think that's the mistake that I don't want to make. I want to keep human beings, students and teachers, at the center of the conversation. And to the extent that refusal lets me do that, that's one of the reasons it's the position I've taken, because it feels like a way to re-center people, and that is the most important thing for me.
Jay Todd
I mean, to tie it together, when we talk about refusing AI, you know, is it maybe correct to say that we're really talking about having the right to refuse AI? And then, I guess, could you talk a little bit about how — I know this is kind of directed towards faculty, writing instructors in particular — that right extends to students too? You all talk a little bit, quite a bit actually, about how that needs to be considered in the classroom as well.
Maggie Fernandes
I think there are really totalizing conversations about student interest in these technologies. And to be sure, there's probably, like, more familiarity than I would like — you know, maybe students are experimenting with these things more than I, as someone who doesn't want to play around with them, would like them to. But at the end of the day, I think there's a lot of over-assuming how much students are already using, or are interested in using, these tools. And so for me, it becomes a question of: can we slow down and not embed interest, embed orientations, into how we're presenting these technologies to students, and instead create options and create a space to talk about them as if we're not farther down the line than we are — and oh my gosh, I may have forgotten where we were going, I'm sorry!
Megan McIntyre
To add to what Maggie is saying, I think for me, refusal is, at its heart, like a reclaiming of my own agency in the conversation and a re-centering of students' agency. I believe I still have choices, and I believe students still have choices, and I think to the extent that inevitability discourses do a lot of things that we've already talked about, one of the things they do is presume to know what choices have to be made and what choices are already foreclosed to us. And that's just not my experience in the classroom. I'm teaching first-year tech comm to engineers this semester, and I have computer engineers in my class. This has not taken over their lives in the ways I think have sometimes been projected onto students. They're having really thoughtful conversations about what it means to be in college right now, you know, a little over two years post-ChatGPT public drop, and those conversations are nuanced and they're thoughtful, and they're broad and they're varied, and they're all making choices. And so I think, if anything, in addition to that slowing down, it's the recognition that we still have choices to make. We are not inevitably in any single place, I think, and students should have choices too, and I think that's also the heart of it.
Jennifer Sano-Franchini
For me, yeah, I think that you really hit the nail on the head when you said it really is about the right to refuse, whether for faculty or for students. For me, I am really uncomfortable with the practice of requiring students to sign up for these types of products, especially when they are, like, these Big Tech commercial products. So for me, it's the decision to not ever require students to do that, and to make sure that option is there — but also, I guess, I hesitate to say more, so I just want to cut myself off there.
Jay Todd
That's good. You all talk about refusing generative AI in writing programs, which are your areas of expertise. Does this translate over into other disciplines? Can that idea of refusal really just map onto mathematics, history, biochemistry?
Maggie Fernandes
I can't speak to other areas and disciplines and what their classrooms look like, but I think what you said about the right to refuse, whether as a student or instructor, applies across the board. And in the same way, there are still things to learn and practice and value without generative AI. You know, I teach a lot of rhetorical awareness, and I think that when my students and I talk about rhetorical awareness, and we talk about context and authorship and audience, those things probably could help them be better users of these technologies, even if we're not using them in my class, even if I'm not training them to do that. And I think that probably applies across disciplines: sustained thinking and practice in an area will only make people better critical thinkers and better critical users of technology. And so I think that what refusal looks like in other areas will be dependent on what those areas and disciplines are doing. But I generally think it's good for people to start from the idea of refusal, if refusal is a way to think about harms and to think about how these technologies, these products, these companies impact what we do and maybe work against our interests. I think if we start from refusal, we can think very clearly about, okay, so what does every step toward adopting look like? I think that's probably useful across a range of discussions.
Megan McIntyre
I think it also asks us to recenter whatever the goals of your classroom, program, or discipline are — again, like, de-centering the technology and re-centering whatever it is that you're aiming at. And so, like, I don't know other disciplines, but I have to believe that the technology is not the center of mathematics education, that there is something at the center of that, that the discipline and the teachers in the classroom have goals and desires for students to understand and to participate in particular discourses. And to the extent that refusal helps them keep their students or their goals at the center — that, I think, is one of the things that, when I'm in these conversations, you know, with people outside of rhet comp, that's what I'm saying. Like everybody has already said, I'm not here to make decisions for other people. I'm here to make sure everybody knows they're still making decisions. That's the question I'm asking, and that's what I'm trying to put forward: you're making a choice. What choice do you want to make?
Jennifer Sano-Franchini
I was just going to say as well, building on that and what Maggie was saying about kind of, like, beginning with refusal — it makes me think about how that stance or that orientation is kind of subversive in a time when we're in a culture of, like, automatic opt-in with a lot of technologies. And so it is really kind of asking us to question the way that we're oriented in relation to technologies, and what kinds of agency we can reclaim in the face of them, as well as in the face of the rhetorics about them, the claims that are made about them. But to kind of build on the question of how this might be applicable to other disciplines: I totally agree that there are many different types of products that may be used, or that may be used in different ways, in different disciplines. But some things that I think hold true are things like slowing down, questioning the rhetorics — the marketing claims — about the products that we use, and thinking meaningfully about the negative, the harmful, implications and impacts of these technologies before making these intentional choices. I think that that's something that people across disciplines can and should do.
Jay Todd
And kind of looping together some of these responses, I'd be curious to hear — and this is partially selfish, because I found out this week that I've been asked to chair a committee to put together a policy on generative AI for our school — so I'll ask two questions, depending on what you all are comfortable talking about. Do your schools have policies about generative AI that you see as allowing for this idea of refusal? And what can other schools, who are maybe still figuring out their policies — what can they do, and what should they say in those policies, to make space for this approach?
Jennifer Sano-Franchini
I was really interested to see that at West Virginia University, we do have a policy on the use of generative AI for administrative purposes. This is something that I just found — I don't remember how I found it; I don't feel like it was sent to me — but I found it really interesting, because I felt like it makes some really good points about when it may or may not be appropriate to use these technologies. So it says things like: administrators should not use generative AI for communications that involve personally identifiable information, for situations where the authenticity and originality of the communication are really important, or for situations that require empathy and deep understanding. So those types of factors, I think, are important to consider. I don't think it takes a wholesale refusal approach, but I do think that it asks folks to think carefully about the kind of ethical consequences of how and when we use these things.
Megan McIntyre
Yeah, so we — there are some resources, the Faculty Center has some resources on policies for the university, but there's no university-wide policy that has been communicated out to the faculty. Programmatically, we have policies. So Maggie and I worked together to create some syllabus language and some direction with a sort of "don't use it" framework on the one hand, or a "you can use it in limited circumstances that I will articulate" framework on the other. The faculty member or TA has to sort of write out where they find it acceptable to use, students have to cite it, and the faculty member is responsible for talking about some of the present harms. And we gave them a list of resources and things they could teach with, but they have to devote time to it in their class and really grapple with it so students can make more informed choices. What felt more important to me, though, than those policies for students was that we have a policy for instructors. When we first wrote that policy, the university hadn't purchased an AI detector yet. At the beginning of this academic year, they purchased Turnitin's AI capabilities — we switched from SafeAssign to Turnitin — but when we first wrote the policy, we didn't have that. And so my big concern as a WPA was: I don't want faculty feeding student work into these tools. Not only do I think that's a FERPA violation, I don't think it's ethical. Our job as writing instructors is to give detailed feedback to students, and to the extent that we do that, we do it. And to the extent we don't do it, or let something else do it, we aren't doing what I think we are there to do. So yeah, that policy was more important to me, and that's what I would encourage other people to think about: what are your guidelines for yourself as an instructor or a, you know, graduate assistant, or whatever your role is? And then, if you're a WPA, or in a program situation, how are you helping people find ethical boundaries around their own use of it before jumping straight to what students' responsibilities are? What standard are we holding ourselves to around these products? That seems as important a question, or maybe a more important question, from my perspective, than what boundaries are we asking students to stay within.
Maggie Fernandes
The other thing I will add — and I really agree with all that's been said so far — is that the only thing the University of Arkansas has said so far about AI is that they have updated the academic integrity policy to account for it, which I was pleased with. It basically says: this is a matter of academic integrity, and you need to follow the policy of the instructor. I was worried that it would go too far in one direction or the other, and so I think that is a good foundation, because it centers instructor choices to either refuse or adopt. But the thing to be mindful about with that is it means students are navigating a lot of different, varied rules, and maybe being given, you know, mixed messages about what they're interacting with. And if I was going to make a more robust policy that would eliminate that, I would be worried about what that would do, because it seems like it would probably skew toward adoption. But I would also want to see some things that really clarify what counts as AI. A lot of students don't think about Grammarly as being part of AI. And some universities have identified, like, okay, Grammarly is against our policy, so now using Grammarly is an academic integrity violation. And, you know, academic integrity is, like, a whole big conversation, but as we're making more specific policies, those are the kinds of things that I would want to know. "What counts?" is a question that my students would probably have and that I have. How much is citation required? Is transparency the standard? And then the question for me becomes: how do we even enforce that? Which is why, although in my individual classroom I discourage use and talk about refusal, I'm not really putting a whole lot of effort into catching people or correcting people. I teach them how to cite it, and I say I'd rather you didn't, because getting too far down that path, it becomes the whole conversation. And I don't know that that's what I want to focus on.
Jay Todd
That's great. Thank you. I guess just one last question to kind of wrap things up. Do you have plans going forward with the resources you've got online? (We'll share links to all of those in our show notes.) AI is, so far, a constantly evolving kind of thing. Do you see your work about refusal having to constantly evolve going forward as well?
Megan McIntyre
I mean, I think it will. I also think we have really tried to take the tack of asking a lot of questions, and I think those questions remain useful even as particular technologies or products evolve and change. And also, I think we've sort of recognized, in various places in these resources, that people have different kinds of agency in different circumstances. And that's also going to keep changing. I am now at the University of Arkansas, but I spent four years at a CSU, at a California State University campus, and the CSU just announced a partnership to bring generative AI products onto every single campus, with licenses for every single student and faculty member. And I suspect, having chatted with some of my former colleagues in the CSU, that that is going to make the questions and the pressures different than they were before they had these official partnerships with OpenAI, Google, Microsoft, and various other players in the Gen AI market. And so resistance, refusal, probably looks different too. The union is involved now because, obviously, one of the harms has to do with labor, which is something that people have been talking about for a long time, including Gebru — but also the SAG-AFTRA strikes, right, centered these sorts of questions too. So I think that part's evolving, but I do think our question-based orientation is probably going to keep some of these things useful, even as evolution happens and changes happen, because I think some of these questions really are about what you value, what you center, what you choose, and those questions, I think, persist.
Jennifer Sano-Franchini
I agree with that, and I was thinking that that applies to the tenth premise that we offer in our Quick Start Guide, which is about weighing out what is happening — what efforts are in place to address the harms that at least currently exist with generative AI — and then being able to, you know, adjust your choice accordingly. And I think that that kind of orientation is probably applicable even with other technologies. One thing that I think of with regard to these conversations is this kind of ethic in technical communication, where there is this idea that it's not necessarily the best approach to really center particular products, because in ten, fifteen years, those products are going to be completely different. There's no way to really accurately predict what skills or what products students are going to use in their future careers, and so we need to think about what the kind of core skills are that they need to help them adapt as technologies change and be able to assess and critically analyze those technologies. And I think that that orientation is one that informs the way that we approach the refusal website.
Jay Todd
I think that's great. And you all stress quite a bit the need for — I think you call it critical digital literacy — as opposed to some sort of much more narrowly focused AI literacy.
Megan McIntyre
And I think that's — I was gonna say, I think that's really in line, as Jen was saying, right, with a disciplinary orientation toward technology, in tech comm, in Computers and Composition, dating all the way back to the sort of founding of the discipline. And I think a lot now about Walter Ong's work all over again, right? "Writing is a technology that restructures thought." You know, that is a useful place for me to go back to, because writing itself is a technology. And the questions we ask about technologies are in some ways perennial. They shift and they get complicated and they get nuanced and all of these things. But, you know, there are questions — and I don't think centering the technology, as Jen's saying, is productive. I think centering rhetorical awarenesses, audience awarenesses, the value systems and cultural values that are embedded in these systems, relationships to power — those are sort of evergreen questions, and they don't necessarily ask us to focus exclusively on one technology or product or another, but let us sort of understand the ecosystem of technologies that we participate in when we do writing work or rhetorical work.
Maggie Fernandes
Yeah, I totally agree with that. And I also think that the value of refusal — what's been useful for me in thinking about refusal of Gen AI is that it's helped me to take a moment, take a beat, and turn those questions back on other technologies that have become so embedded in my daily life, including things like Zoom, which we write about in the premises. If I have questions about the kind of resource usage of Gen AI, what other digital technologies am I interacting with, and how can I be more sustainable in how I use them? And I think that that's kind of a beautiful and, like, optimistic part of refusal for me. Refusal doesn't sound very optimistic, but I think it can be a challenge to be more critical about Big Tech more broadly, and how we use these technologies in our everyday lives.
Jay Todd
That's great. I think that's a great point for us to end on. So I want to thank you all — Jen, Maggie, and Megan — for your time today. And like I said, we will provide links to all your resources in our show notes and encourage folks to spend some more time learning about your ideas.
I've been speaking today with the authors of Refusing Generative AI in Writing Studies. Jennifer Sano-Franchini is the Gaziano Family Legacy Professor and an Associate Professor of English at West Virginia University. She currently serves as Chair of the Conference on College Composition and Communication and is a member of the MLA Rhetoric, Composition, and Writing Studies (RCWS) History and Theory of Rhetoric Forum Executive Committee. Megan McIntyre is an Assistant Professor of Rhetoric and Composition and Director of the Program in Rhetoric and Composition at the University of Arkansas. She is the creator and host of the Everyone’s Writing With AI (Except Me!) podcast with Maggie Fernandes. Maggie Fernandes is an Assistant Professor of Rhetoric and Composition at the University of Arkansas. Her work has been published in Composition Studies, Enculturation, Kairos, and Reflections: A Journal of Community-Engaged Writing and Rhetoric. Please see the show notes for this episode for links and further information. Thanks for listening.