Conversation #124: Jon Ippolito on the IMPACT RISK of AI

A conversation between Jon Ippolito and Bart Everson on a handy mnemonic to help you remember all the things that could go wrong with artificial intelligence.

Jon Ippolito is an American artist, an educator, a new media scholar and a former curator at the Guggenheim. He studied astrophysics and painting in the early 1980s and then pursued internet art in the 1990s. He's a professor of New Media at the University of Maine, where he teaches classes on programming, online culture, variable media, and viral media, and now, of course, artificial intelligence.


Links for this episode:

Transcript:

Bart Everson
This is Bart Everson, and I'm here today with Jon Ippolito. Very excited to have him on the program. Just to tell you a little bit, we'll draw a little bit from the Wikipedia article. He's an American artist, an educator, a new media scholar and a former curator at the Guggenheim. He studied astrophysics and painting in the early 1980s and then pursued internet art in the 1990s. He's a professor of New Media at the University of Maine, where he teaches classes on programming, online culture, variable media, and viral media, and now, of course, artificial intelligence. So welcome to the program, Jon, and maybe you could tell us a little bit about what you are teaching these days, with regards to AI.

Jon Ippolito
Thanks, Bart, great to be here. So I teach the — well, I think later today we're going to get into the ethics of AI, but I do also teach the mechanics of it, so that gets folded into courses on other topics, for example, in courses on making websites or mobile apps. But in addition, I've collaborated with some colleagues at UMaine, notably Greg Nelson and Troy Schotter of the Computer Science department, in a course where we really tackled AI from the get-go. And this is an Introduction to New Media course which can have different topics in different years. And the last couple years, we have asked students to do eight tasks across the range of new media: animation, games, storytelling, web design — without AI and then with AI. So the whole premise of the class was to investigate how students tackled the difference between these two approaches, what their perception of it was, how we could sort of guide them to get the best possible results out of the AI versions, but also to sort of evaluate what the pros and cons were in each case.

Bart Everson
Awesome. Now that seems like it could be a good approach for anybody who teaches creative process. I wanted to ask you specifically about this IMPACT RISK framework that you've developed, which is really the main reason behind inviting you for this interview. It's kind of a mnemonic device to help students understand some of the impacts and risks associated with artificial intelligence. Assuming that our listeners might not know anything about this, can you give us a rundown?

Jon Ippolito
Yeah, so I give a lot of talks, obviously teaching students and so forth, and oftentimes I'll focus on one particular area of AI. So for example, its impact on archives, or how image diffusion models work. And in all these cases, it's tempting to get into the weeds. I like to learn the details of any particular technology and how it impacts a particular field. But what gets lost sometimes in those conversations is people get all excited about the shiny new gadgetry and lose sight of the bigger picture and all the possible downsides. So I was having trouble remembering them, to sort of spout them off at the beginning of a talk, or, you know, help lead students through that. So I decided that the best way to sort of keep them in mind would be to develop an acronym, a mnemonic, that could be used to just sort of rattle off those downsides to remember them and bear them in mind. So IMPACT RISK is the letters I-M-P-A-C-T-R-I-S-K, and each one stands for a different potential problem with AI, right?

So we do hear a lot about the upsides, especially from tech companies and sometimes from journalists. But think about disinformation and political deepfakes, as well as the increasing role of cyberattacks that are custom-written with ChatGPT and the like. In the acronym, that falls under I for Infowar, right? So deepfakes are something pretty much everybody's heard about.

One thing we tend to talk about less, but that's also important as a potential downside of AI, is M for Monopoly. So there are the first two letters of IMPACT. Monopolies rear their heads in a really dramatic way. We saw that already in social media, where there were like five or six companies that really controlled what we saw, from Netflix to Amazon to Google and Apple, but now we're at the point where it's really shrunk down to more or less three players and some side options. You've got Google vying for dominance against the OpenAI/Microsoft partnership. You've got Meta originally trying to prevail in the open source domain, and now kind of expanding. You see an even stronger concentration of power when you get to the AI chip market, where Nvidia is just dominating, right? All the competitors are scrambling for crumbs in that market, and this is due to the massive amount of computation required to compete: training models and deploying them at inference. Those kinds of brittle monopolies can stifle entrepreneurship and innovation and lead to other economic issues.

So along with Infowar and Monopoly, every one of the other letters in that acronym spells out other harms: Plagiarism and privacy, Automated labor, Climate impact, Tainted data, Reality distortion, Injustice, Stereotyping, Knockoff experiences. These are all kind of pieces of the AI puzzle, and they're all things that are easy to forget in the hype bubble that surrounds a lot of journalism and conversations about generative AI.

Bart Everson
Yeah, and what I'm struck by, when you mention that you were having trouble remembering these things easily, is how many there are! These are ten big downsides, and each one of them encapsulates a lot of things. But you're not adamantly opposing the use of artificial intelligence. I understand you have a more measured view. Would you say that's correct?

Jon Ippolito
Yeah, I think so. I mean, I've heard this perspective sometimes described as an "AI realist." Maybe that's a self-serving designation. But I do not fall in the camp that says these things are useless and have no applications. Nor do I fall in the camp that says they are super-intelligent beings or replacements for human creativity or ethics. So my view is that we need to understand how they work, and in order to understand how they work, we need to try them out. But I've learned, especially as a teacher, that sometimes students don't listen to what you say. They watch what you do, right? So if you use ChatGPT in the classroom, they will use ChatGPT in the classroom, even if you're demonstrating that it's doing something wrong.

So I felt like I needed to have some kind of foil or backstop to remind people: okay, I'm going to show you this, but don't forget this. You know, IMPACT RISK. And I've also created some mechanisms for other teachers to use this in their classrooms. So there's an interactive website, there's a video, there's a quiz, there's a whole lesson plan that you can integrate into many common courseware platforms like Brightspace and Blackboard and Moodle and Canvas. Those are all ways I've tried to expand the potential resources that people can turn to if they, like me, feel it's really hard to remember all of these and convey them to students.

Bart Everson
Great. I'm glad you mentioned that. Of course, we'll put the link in the show notes, but people should know they can go to AI-IMPACT-RISK.com to see this. The graphic really does make it come alive, along with all the other extras that you mentioned. I was wondering if you could speak just a little bit to the creative process that you went through to put this all together.

Jon Ippolito
Yeah, so, you know, in classic form, I tried to see what I could do with my own thinking first, and it was a struggle to create an acronym that encompassed everything. So I turned to, I think at the time it was GPT-4, and said, hey, come up with some acronyms. And we went back and forth quite a bit. I think the version I ended up with was not exactly one that it had chosen, but it did give me the words I was looking for. So I was able to customize the initials and the phrasing, and then I wrote the descriptions of the harms in more detail myself. I've been a writer a long time, and I'm kind of fussy about words, so I rarely accept what a chatbot gives me. But in the brainstorming it was actually super helpful.

Bart Everson
All right. And you've got a nice graphic there as well, depicting a classroom scene that's AI-generated, I'm sure.

Jon Ippolito
That's right. So the illustration that shows schoolkids at their desks, with the giant scary words IMPACT RISK and a desolate landscape out the window, was generated by leonardo.ai, I believe with a Stable Diffusion model, one of their earlier ones. But all the little infographics and icons and so forth, you know, I made those by hand, again because I'm fussy that way.

Bart Everson
Fantastic. Most of our listeners are college educators, and I know a lot of them are struggling, first of all, to understand artificial intelligence, its capabilities, its impacts, and its risks, and then to communicate that to their students and integrate it in some way with their teaching. What really prompted me to reach out and speak to you was the idea that you've published this as a module that can be imported into an LMS. Since a lot of our listeners might actually be ready to do something like that, they might be the actual audience you're looking for, I wonder if you could speak a little bit to what you have in mind with that move.

Jon Ippolito
Yeah. So I'm not actually a user of traditional courseware. I tend to adapt my classes' software to whatever the students are likely to use out in the world, and shockingly, a very, very small percentage of the workplace would actually use something like Blackboard. Those tools tend to be marketed specifically at high schools and colleges. So in my classes we tend to turn to tools like Slack or GitHub, which are more common in the environments the real workplace is going to be using.

That said, a lot of educators these days do use tools like Blackboard and Brightspace and Moodle and Canvas. Fortunately, Canvas has an interoperable archive tool that lets you create a file that can be imported into most of the prominent coursewares out there, called an .imscc file. I don't remember offhand what it stands for, but essentially you can pull it in by importing it into your own course, into your own syllabus, and it will populate the video, the quiz, the website links and so forth automatically. And then you can move them around to the days you want them on, or, you know, when and how you want to deliver them to students. So you can think of it as kind of like instant coffee, where you just add water and it populates a cup of coffee. In this case, it populates a lesson plan in your syllabus.

You can find those directly on the site that you mentioned, AI-IMPACT-RISK.com, or you can find them in OER Commons or Canvas Commons. There are a number of other resources devoted to shared, open educational resources online, and I've submitted it to those. I also, just for kicks, made a Google Sheet with the quiz text, so people who want to use their own system can just grab the quiz directly from that. And the video is available on YouTube, or as a standalone MP4. So basically I'm trying to give you a menu of all kinds of different options that you can choose from.
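(A side note for the technically curious: .imscc stands for IMS Common Cartridge, and the file itself is just a zip archive containing an imsmanifest.xml that describes the course content. A minimal way to peek inside one, using a hypothetical filename, is a few lines of Python:)

    # A Common Cartridge (.imscc) is a zip archive; the imsmanifest.xml inside
    # describes the lesson content an LMS will import.
    import zipfile

    with zipfile.ZipFile("impact-risk-lesson.imscc") as cc:  # hypothetical filename
        for name in cc.namelist():
            print(name)  # expect imsmanifest.xml plus the bundled resources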

Bart Everson
That is really great. And I should mention, for any listeners who might be on our campus, Xavier University of Louisiana, where we use Brightspace: this is compatible with Brightspace and can be imported into our LMS. It works for that. And, you know, that LMS has pretty much been mandated for all of our faculty to use, whether that would be their first choice or not. One of the reasons has been instructional continuity, because of the occasional hurricane —

Jon Ippolito
Oh my gosh!

Bart Everson
—that comes through. This has happened, we've experienced it before, and so this is really something that the institution emphasizes. So it's good to know that this kind of content is available. I've never really seen anyone do something like that, and I'm sure it's something that our instructors might appreciate.

Jon Ippolito
So I want to give credit where credit is due, Bart, to the group that you and I are part of, the "AI in Education" Google Group, because that's where I saw people creating these kinds of interoperable Canvas modules. And I was like, oh, I like that idea. So, you know, I investigated how to do that. I would also say that these all carry a Creative Commons license, but a public-domain one, which means they can be reused without attribution. You don't need to say this came from Jon Ippolito. I've also atomized the graphics: all the little icons used in the infographics are available as individual SVG files, so you can make your own version. You can change the graphic. You can change the acronym. The goal is just to get educators and students thinking about how to document and share and start conversations about the ethics of generative AI.

Bart Everson
Great, and I hope some of our faculty will do that. I was thinking that if time allowed, and it looks like it certainly does, we might also talk about another site you've developed, kind of similar, called "What Uses More," which is also related to artificial intelligence but comes at it from quite a different angle. Can you share that with our listeners?

Jon Ippolito
Yeah, so many of these projects are born from my own frustration, right? You have this itch and you need to scratch it, and then you share what you created with other people. I was really frustrated trying to investigate one particular aspect of the IMPACT RISK acronym, the C for Climate impact. And this is because the internet seems to be completely polarized about this. People either say it's no big deal: it's nothing compared to other uses, and it's just a tiny, tiny drop of water to generate lots and lots of LLM text. Or they say this is the apocalypse: it's going to degrade the planet faster than any other technology known to humanity. And when I looked for data to see which side is right, they're just talking in different languages. One's comparing kilowatt hours, another is comparing hamburgers, and another is comparing joules. And it's like, come on, can you just give me a consistent measure so I can see the difference in the energy and water usage of AI tasks, right? Like writing an email in ChatGPT, or generating an image or a video with MidJourney or Sora, versus the kind of things we might compare it to, for example charging your smartphone or doing a regular Google search or even, you know, having a Zoom conversation.

And it took some doing, and there's a lot of missing data that we'd like to have from the companies. The industry is not transparent about this stuff. But eventually I compiled a list of sources and what I consider the definitive, or as close as you can get to definitive, measures of the usage of these different tasks. And that's all available. You can go look at it, again, on a public Google Sheet, which now drives an app that you can get to at what-uses-more.com.

You can choose an activity like, say, I'm going to create a video with AI. I've heard that's really energy intensive. Let's see, right? And then you can compare that to watching Netflix for an hour, or charging a cell phone, or doing a regular Google search, or scrolling TikTok: these kinds of digital activities that we take for granted, that we don't really think about as climate busters. And then it will show the water and energy usage in what I think of as lay person's terms, right? Light bulb minutes: you have a standard light bulb, you leave it on for a minute. How many light bulb minutes does it take to generate a video? Well, it turns out it's in the thousands. How many drops of water, right? Same idea. We can understand drops of water maybe better than kiloliters or something. And so you then get a little graph where you can say, oh, wow, generating a video takes way more energy than, say, watching Netflix for an hour. But then you start to tweak things, and you say, oh, well, what if the data center that's generating the video is actually in a cold climate versus a warm climate? What if I'm using a multi-step prompt, or a reasoning model, as opposed to one of the dumber, if you will, AI models? And you get into some surprising results there.
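(For listeners who like to see the arithmetic, here is a minimal sketch of the kind of unit conversion the site performs. Every task baseline below is a placeholder for illustration, not a figure from what-uses-more.com:)

    # Convert rough per-task energy and water estimates into lay units:
    # "light bulb minutes" and drops of water. All baselines are placeholders.
    BULB_WATTS = 10            # one standard LED bulb (assumption)
    LITERS_PER_DROP = 0.00005  # ~0.05 mL per drop (assumption)

    # Hypothetical per-task baselines: (energy in watt-hours, water in liters)
    BASELINES = {
        "generate an AI video": (500.0, 2.0),
        "write an email with an LLM": (0.3, 0.01),
        "watch Netflix for an hour": (80.0, 0.3),
    }

    def to_lay_units(watt_hours, liters):
        bulb_minutes = watt_hours / BULB_WATTS * 60  # Wh -> minutes of one bulb
        drops = liters / LITERS_PER_DROP
        return bulb_minutes, drops

    for task, (wh, liters) in BASELINES.items():
        minutes, drops = to_lay_units(wh, liters)
        print(f"{task}: ~{minutes:,.0f} bulb-minutes, ~{drops:,.0f} drops of water")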

My goal isn't really to say definitively, this is the amount of water used by writing an email with AI. It's more to get students particularly, but anyone really, to see how different factors change those results. So you can have the same task as an AI task and a regular digital task, and the AI task's energy usage will be dramatically higher or lower depending on what those factors are, and that can lead to changes in your own use of those tools.

So just as one example, one of the standard things I add to an AI prompt now is "be concise," because I don't know about you, but I've noticed that chatbots have gotten more verbose over time. They give you more information; they summarize what they already said in bullet points afterward. Well, if it's using five times as much text to give you an answer, that's five times as many tokens, and that's five times the energy input. You might have just asked for the capital of France, and it could have said one word, Paris, but instead it goes on about history or geography. So there are little tricks like that: "be concise," or knowing that you can type "minus AI" (-ai) in a Google search and it won't give you that Google AI overview. Those are little tips we can use individually to reduce our carbon footprint when we do choose to use AI.
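(A toy version of the token arithmetic Jon describes; the joules-per-token figure is an assumed placeholder, not a measured value:)

    # Energy scales roughly linearly with the tokens a model generates,
    # so a five-times-longer answer costs roughly five times the energy.
    JOULES_PER_TOKEN = 0.5  # illustrative assumption only

    verbose_tokens = 400  # chatty answer with history, geography, bullet points
    concise_tokens = 80   # "Paris," plus a sentence of context

    print("verbose:", verbose_tokens * JOULES_PER_TOKEN, "J")  # 200.0 J
    print("concise:", concise_tokens * JOULES_PER_TOKEN, "J")  # 40.0 J
    print("ratio:", verbose_tokens / concise_tokens)           # 5.0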

Bart Everson
Thanks. It was such an eye-opener, that part about where the data center is located. Because, as you may know, I live in Louisiana. Louisiana is very hot, and they're fixing to build a data center in our state that's supposed to be one of the biggest in the world, with three natural gas-fired power plants just to support it. So it's very concerning. Of course, it's not just an AI center. Artificial intelligence is what gets all the attention, because it's the shiny new object, as you referenced earlier. But there's also streaming media and social media and all kinds of other stuff going on in a data center.

Jon Ippolito
That's right. So if you're in a warm climate, then you know, roughly, that your data center is using three times the energy, primarily for cooling, compared to a cold climate, right? So in Louisiana, the same ChatGPT query to write a memo or create an image is going to be triple the energy and water usage of a data center in Norway, or potentially in Maine, where I live. On the other hand, it also depends where the energy is coming from. You mentioned that you have three new gas-powered plants, right? Those aren't as harmful to the environment as coal. If you were in West Virginia, driving your data center with coal, then it's roughly two times the impact, and these numbers are very rough, order-of-magnitude figures, compared to a data center in California driven by solar. So there are also local geographic factors.
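(Putting Jon's rough multipliers together in one sketch; these are the order-of-magnitude figures from the conversation, not measured values:)

    # Relative footprint of the same AI query under different siting conditions,
    # using the approximate multipliers Jon cites in this conversation.
    def relative_footprint(warm_climate: bool, coal_grid: bool) -> float:
        factor = 1.0
        if warm_climate:
            factor *= 3  # ~3x energy and water, mostly for cooling
        if coal_grid:
            factor *= 2  # ~2x impact versus a solar-heavy grid
        return factor

    print(relative_footprint(True, False))  # warm climate (e.g. Louisiana): 3.0
    print(relative_footprint(False, True))  # coal-powered (e.g. West Virginia): 2.0
    print(relative_footprint(True, True))   # both factors combined: 6.0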

Now, one thing you mentioned that was really important to me is that people don't realize how small a share of data center usage is currently consumed by AI. It was roughly 15% in 2024. We don't know the figure for 2025 yet, and it's expected to climb. But as you mentioned, crunching the algorithms to show you ads on Instagram, or to track your data for Facebook or whatever, that plus crypto is roughly the other 85%. That means that in normal activities like scrolling TikTok, we are actually consuming more of the water and electricity used by data centers than by doing an AI query. And that's something I think is really important for us to use as a lever, especially as educators. Okay, you came here because we wanted to talk about AI's environmental footprint. Good. But let's also use that to shine a light on the impact of the other digital tools we use. And then, expanding beyond that, once you get to transportation, driving a car, flying a plane, or eating meat, those activities have several orders of magnitude more impact. So again, it becomes a kind of lever that you can use to push people to pay attention to things they otherwise might not have.

Bart Everson
Awesome. Thank you so much. This is also interesting. Let's zoom out, if we could, and just look at the big picture. One thing I like to ask people in this domain is, very broadly, what's your take on whether higher education should now be treating AI competency as a major learning goal, and if so, how quickly should we be moving toward that goal?

Jon Ippolito
That's a great question. So as you can tell from my answer to your last question, I'm a big believer in incentives, in a sort of bait-and-switch approach where someone might be interested in one topic but you use it to steer them to another. I think the pedagogy of most universities is really stale, and high school is even worse. It's been that way for decades. Professors and administrators don't want to change. They want to keep giving term papers. They want to keep lecturing. And these approaches have been foiled by students in the past in ways their professors just couldn't see, right? If you could buy the answers to a problem set on Chegg, or the term paper on EssayPro, most professors sort of hoped that Turnitin or similar detectors would work, but they really didn't think about what that meant for their own pedagogy.

But now everybody's got ChatGPT in their browser. Not everyone chooses to use it, but you can, and it's instantly available. Any literature teacher who has typed in "write a five-page essay on this Flaubert novel" and in 30 seconds had an A+ paper can see, very obviously, that they can't keep doing things the way they did. Now, a lot of people are ostriches who put their heads in the sand. They say, well, we're going to rely on detectors, or we're going to do blue book exams. I don't think either of those approaches makes sense. And so instead of thinking of this as AI transforming pedagogy, I think of it as AI pointing out problems that were already there. We know there are better ways to teach. We know that project-oriented, individualized learning is better than regurgitation. We know that no one outside of academia writes five-paragraph essays anymore. Writing has become discursive and dialogic, dispersed through all these different social media and work contexts. There are lots of reasons to validate writing as a form of thinking, but, I'm sorry, the five-page essay is just not one of them anymore. So, you know, it's a controversial take, but my feeling is this: the disruption that AI is causing, this tidal wave moving through academia, is now forcing us to deal with issues that we swept under the rug, and the changes that will happen as a consequence are actually good, even if it's not pleasant to have AI thrust us against the wall and force us at gunpoint to make those changes.

Bart Everson
All right. Well, thank you so much for saying that. I really appreciate where you're coming from, and I hope that a lot of our listeners will find some value in what you're putting out there. I guess that wraps up everything I thought we needed to touch on, though there's so much more we could get into. Are there any final thoughts that you've had burning away that you wanted to share with our listeners?

Jon Ippolito
No, I would just point people who want to learn more to the many resources that are out there. I mentioned the Google Group that you and I belong to. There's also a Facebook group Laura Dumin runs. I forget the specific name, but let me see if I can find it real quick, because those are places where people can be very open about just saying, look, I'm a complete newbie in this space, and I'm looking for help. I have this kind of syllabus; what works, what doesn't? Okay, it's called "Higher Ed discussions of AI writing and use." There's also LinkedIn, which, I hate to admit, turns out to be a surprisingly helpful network, at least for me, to run ideas by other people. So you're not alone. And I would also say what's really encouraging to me is that smaller institutions, these little universities and liberal arts colleges here and there, have really taken the lead in finding interesting ways to confront this challenge, whether they're AI positive or negative, the best ones, I think, again, being realists: accepting that it does have powers we didn't have before, and also accepting the many risks involved. So you're not alone, and there are lots of places online, and hopefully at your own university, where other people are running into those same walls. And if they haven't yet, they probably will.

Bart Everson
All right. Thank you so much. That is a great thought to end on. You're not alone, and we will share those links in the show notes for any listeners who want to follow up for more. Thank you, Jon. Jon Ippolito, our guest on the podcast today — it was great having you.

Jon Ippolito
Thanks, Bart.

About Bart Everson

Creative Generalist in the Center for the Advancement of Teaching and Faculty Development at Xavier University of Louisiana
