Making College "Worth It"

Alumni Perceptions of Generative AI

Episode Summary

Hosts Nolan Schultheis and Jessie L. Moore talk with Travis Maynard, Tim Peeples, and Paula Rosinski about their recent research on alumni perceptions of generative AI. Drs. Maynard, Peeples, and Rosinski share what they're learning about the role of AI in professionals' day-to-day lives, across professional, civic, and personal spaces.

Episode Notes

See our full episode notes at https://www.centerforengagedlearning.org/alumni-perceptions-of-generative-ai/

In this episode, Nolan and Jessie talk with Travis Maynard, Tim Peeples, and Paula Rosinski about their recent research on alumni perceptions of generative AI. Drs. Maynard, Peeples, and Rosinski share what they're learning about the role of AI in professionals' day-to-day lives, across professional, civic, and personal spaces. They also highlight what they've learned about "power users" of AI and suggest implications for teaching and practicing effective AI use in higher education.

This episode is co-hosted by Jessie L. Moore, Director of Elon University’s Center for Engaged Learning, and Nolan Schultheis, a second-year student at Elon University, studying Psychology with an interest in law. Jessie L. Moore also edited the episode. Making College “Worth It” is produced by Elon University’s Center for Engaged Learning, an international research center.

Episode art was created by Nolan Schultheis and Jennie Goforth.

Funky Percussions is by Denys Kyshchuk (@audiocoffeemusic) - https://www.audiocoffee.net/. Soft Beat is by ComaStudio.

Episode Transcription

Nolan Schultheis (00:10):

Welcome to Making College Worth It, the show that examines engaged learning activities that increase the value of college experiences.

Jessie L. Moore (00:16):

In each episode, we share research from Elon University's Center for Engaged Learning and our international network of scholars. We explore engaged learning activities that recent college graduates associate with their financial and time commitment to college being worthwhile.

Nolan Schultheis (00:29):

I'm Nolan Schultheis, a second-year student at Elon University, studying psychology with an interest in law. I'm the Center for Engaged Learning's podcast producer and a legal profession scholar.

Jessie L. Moore (00:38):

And I'm Jessie Moore, director of Elon's Center for Engaged Learning and a professor of Professional Writing and Rhetoric.

Nolan Schultheis (00:44):

In this episode, we'll explore how college alumni are using generative artificial intelligence, more commonly known as gen AI, in their writing across professional, personal, and civic contexts. We'll talk with three Professional Writing and Rhetoric faculty from Elon University who are studying alumni perceptions of AI: Travis Maynard, Tim Peeples, and Paula Rosinski. Let's meet our guests.

Travis Maynard (01:11):

My name is Travis Maynard. I'm an assistant professor of English and Professional Writing and Rhetoric at Elon University, and I've actually just always been interested in alumni writing more broadly, because I feel like it makes me a better instructor and better teacher when I get a sense of what is happening in the quote-unquote real world. That allows me to deliberately design writing assignments so that I'm better preparing students for the actual writing that they will encounter. And so with the rapid proliferation of generative AI as a new writing technology, my colleagues and I became very interested in seeing how the writing landscape is changing and shifting in the wake of this new technology.

Paula Rosinski (01:56):

Hi, I'm Paula Rosinski, a professor of Professional Writing and Rhetoric in the English department and also Writing Across the University director at Elon, which is our version of WAC. And it's funny, you asked about alumni and not just the writing technologies, and I see that as kind of an extension of what I've always been doing anyway: teaching and studying writing with computer-mediated technologies, expanding the definition of writing to include visuals and audio and video and all those sorts of different modes. And in our major, we have a class called Tech Studio where we teach students to experiment with a wide range of composing technologies and how that alters everything about the rhetorical situation. So slotting AI in there was just a completely natural extension. And the alumni part is that if you care about education, you care about how students go out into the world and live their lives. We claim Elon's about preparing responsible, ethical citizens, and to do that, they have to be able to engage in discussion and communication. And so we want to know what's happening out there beyond the university, because that feeds back into how we prepare them. So that's where the alumni piece comes in for me.

Tim Peeples (03:12):

Hi, I'm Tim Peeples. I'm a professor of humanities, senior associate provost emeritus, and also senior scholar for the Center for Engaged Learning. And I was invited by Paula and Travis to join them in this work early on in their process, which was really my main impetus for getting into it. But I have just returned to teaching and active scholarship after 19 years in administration. And personally, I've always seen it as a responsibility for me to stay on top of what's changing the landscape of writing and teaching and education. And I always try to approach those things with as much scholarly sensibility as possible. And so this was a great way for me to dive into AI, which I had barely touched when they invited me in. It gave me a way to really get motivated and jump into the work in an active way. I'd also add that writing is a technology, so for us, AI is not another thing. It's not like an add-on; it's just part of the realm of writing. And we always need to be attentive to what that realm looks like, because the context makes a huge difference on practices and teaching.

Jessie L. Moore (04:38):

Thank you all three. As we just talked about, you've been studying how alumni are using generative AI in their writing across professional, personal, and civic contexts. We'll ask about your findings in just a moment, but first, could you tell us a little about your study design? How did you approach learning about alumni's use of generative AI?

Paula Rosinski (04:57):

Elon did a writing excellence initiative for SACS accreditation titled Writing Excellence Across the University. That initiative asked faculty in the majors and minors, and staff working with students and writing in spaces like student life and student employment, to think about what they wanted students to be able to do with writing at the point of graduation, and then to work backwards: if students were to achieve those things by the end of their senior year, their understanding of writing, what they could do with writing, what they understood writing to be, then scaffold that back across the curriculum. So where could they add interventions to help students be more effective writers at the point of graduation? And it was very successful. I think every major and student life program developed writing outcomes. They had to write at least one, but they could write as many as they wanted.

(05:51):

That included writing as a citizen, writing in a profession or discipline, and writing to learn, which was really unique: writing to just learn and make an understanding of your life and the world around you. And so as part of that assessment, we did an alumni survey, and we broke it up into two different groups of years and received close to 800 or 900 responses each time, so about 1,800 responses. And it was really fascinating and interesting, and we realized at that point that we really could learn a lot from alumni about what we were doing well at the university, how we could do an even better job, and also changes that were happening. So we kind of look at this as, in a way, a Writing Excellence Initiative 2.0 assessment with alumni.

Travis Maynard (06:35):

And I think, sort of building on the WEI survey, I was coming from Florida State University, my alma mater, where I had done an alumni survey of our writing and rhetoric major. In designing that survey and that study, I was pulling on some previous alumni work in the field of rhetoric and writing studies, so really reusing some instruments. And I think the design of the current AI project is a blend of the WEI survey and some instruments that I was pulling from rhetoric and writing studies. And then the three of us developed a set of AI-specific questions for alumni. We were really curious about trying to understand the contexts in which they're using AI, the ways in which alumni's writing processes are changing with AI, and the kinds of tasks that they're using AI for.

Jessie L. Moore (07:30):

And we can link to some of your previous publications that share those prior studies. For this one, how many responses have you gotten so far? How many alumni have you heard from?

Tim Peeples (07:41):

We sent out this survey to approximately 19,000 alumni from the years 1993 to 2024. That was actually one of our decisions, how far back to go, because we wanted to see how people more senior and mature in their professional lives were also incorporating this technology into writing. And we've received 276 or so responses, which is a relatively low response rate, but good numbers for our field. It gives us some descriptive data, not representative data, which was fine for us. And unlike the past surveys, which were focused primarily on their writing, and pretty much everyone is going to be involved in writing in some way in their professional, civic, and personal lives, we added this new technology, and there were going to be a lot of people who just opted out of it because they felt intimidated by it, unable to respond in any helpful way. So the numbers that we received were not surprising to us, but still we got a good number of responses, which gave us some good data, both quantitative and qualitative.

Jessie L. Moore (09:09):

And also really in line with numbers for other similar studies of alumni in higher education. I think like many of us alumni are feeling a little surveyed out, so we do see lower response rates, but that is a robust set to be working with and we look forward to hearing more about what you found.

Paula Rosinski (09:28):

I was just going to expand on that. We also got participants who had a range of majors and were working in a wide range of fields, and we can talk about that in more detail later. But it wasn't all from one major or all working in one industry. There was quite a bit of variety, kind of representative of Elon.

Jessie L. Moore (09:50):

And I think that that's an important addition too for our listeners to understand that although you are asking questions about writing, you are not interviewing only folks who are hired as writers, but people who are using writing across their positions, whatever their industry is, whatever their job title is.

Nolan Schultheis (10:11):

You guys had touched upon this earlier when you mentioned the survey. So now is the chance where you can really talk about it and go in depth on what you found. Specifically, how are alumni using gen AI tools?

Paula Rosinski (10:23):

Participants reported using it primarily in their professional writing lives, which is not surprising because of the way we framed the study: how are you using AI in your professional life? They were also using it for personal reasons, and far less for civic purposes.

Travis Maynard (10:41):

And that's in line with previous alumni studies. Oftentimes professional writing and personal writing get very high response rates, but civic writing is consistently low. So that wasn't surprising to us. And in really trying to examine the ways in which alumni are using generative AI, it seems to be front-loaded in terms of what we might think of as the beginning of the writing process: brainstorming, developing existing ideas, and then going into some of the drafting portion. But our results also show that revision remains a very heavy and consistent part of the writing process. So we found that it doesn't align with the perception, maybe within higher ed, that students are going to just short-circuit the writing process and use generative AI to create entire texts. It seems that they're still using it to brainstorm and develop ideas, and revision remains an important part of that process.

(11:41):

And that was helpful for us to really think about. Another interesting aspect is the types of writing tasks that they're using it for. We ended up subdividing our respondents: all respondents who use AI, which was about 177, and then a group we call power users, those who are using it several times a week, daily, or several times a day. And what was really interesting there is that power users were still using it at that brainstorming stage, but they were disproportionately using it for what we're calling higher metacognitive tasks or, within rhetoric and writing studies, more explicitly rhetorical tasks. Those tasks in particular were changing the style or tone of a text, so taking something that already exists and reshaping it for a different situation, and also writing a range of different types of texts.

(12:32):

So it suggests that our power users are writing in a wide range of genres, or they're revising existing writing for new contexts, audiences, or genres. We're really seeing that the power users are being a little bit more deliberate or rhetorical with generative AI. That could be because they're already strong writers and they see the potential for the tool to do that, or because the tool is freeing up some of their cognitive load, letting them develop and brainstorm quickly and then do those more elaborate rhetorical tasks.

Jessie L. Moore (13:06):

Do you think that alumni are receiving training in their jobs for their gen AI use or are they more likely to be self-taught?

Tim Peeples (13:14):

We found that the alumni are not receiving training, and most of them are learning about this technology through informal networks and on their own. So they're asking a friend about something, or going to instructional sites, maybe a quick video: just-in-time kind of learning. That's one of the gaps that we would identify. Also, this is where educators can really step in.

Nolan Schultheis (13:49):

I did just want to say I think it's interesting to hear about all the AI usage, especially for someone my age. I'm a sophomore currently, and I went through high school largely without any of it. It maybe had some glimpses of being a tool in my senior year of high school, but nobody was really using it the way that we use and interact with it now. And Travis, you had said something about AI being used in writing more for structure and outlines, and that was interesting to me because that's generally how I tend to use it as well. I don't trust it to write for me, because obviously I know myself best. It's interesting that that was my default reaction to using it, as opposed to maybe the younger audience that now has it so readily available. Is this going to cause a shift in the future of how people interact with writing and rhetoric in general?

Travis Maynard (14:58):

I think history would tell us that any new writing technology is going to change, to a certain degree, the ways in which we're interacting, writing, creating, and composing. And I will say, to Tim's point a second ago, in my classrooms there is the opportunity for instructors to develop that sort of critical and ethical AI literacy, and not even from a framework of "this is what the tool can't do." It's more of a positive: this is what the tool can do, and here's how we can become active participants in a human-AI loop to develop writing that is ultimately rhetorically effective, accomplishing a purpose, connecting with an audience, and really have students be deliberate about that kind of thinking. And so that's something that I'm seeing in the classroom: you have to get students to pump the brakes and not just copy and paste an assignment sheet into AI. You have to scaffold it and really get them to slow down and actually use it as a thinking and epistemic tool to develop their ideas and then go from there.

Jessie L. Moore (16:08):

Two things I want to amplify in your response, Travis. One is I appreciate the positive framing: it's not necessarily something to avoid. It's already here, so we might as well figure out how to use it ethically and effectively. And we were talking with another guest a couple episodes ago about the ways that we can't assume that people know how to use these tools effectively or extensively. We may have dabbled in them a little bit and just played a little, but our students may not know the best ways to write prompts or the best ways to ask the tool for feedback on whatever stage of the process they're in. And so that's something that in higher ed we can take responsibility for: helping students learn how to navigate this tool and use it more effectively. So thank you for sharing that perspective.

Nolan Schultheis (17:01):

Did alumni raise any ethical concerns about using gen AI tools in their writing?

Tim Peeples (17:06):

Yes, they raised a lot of ethical concerns. That was one of the areas that we identified as a key intervention for our field specifically to step into. So I'll step back to our last conversation, where educators are well positioned to intervene in this technology, to build practices and skills and critical ways of engaging with it. Also, Nolan used the word trust: you don't trust the technology to do writing for you. Part of what we can build is a trust in what we would call our partnering with the technology. That essentially defines the kind of ethical intervention that we are suggesting in our research: that we generate ways that allow us to partner ethically and effectively with this technology, which we frame as a partner in the process. So yes, legal concerns were a big highlight: people stealing ideas, copyright infringement. We also received privacy concerns, that data would be used inappropriately. These are all reasonable, the kinds of concerns people probably should be raising as they do this. And we received a lot about education too, which I think Travis can speak to a little bit more.

Travis Maynard (18:48):

In addition to the concerns that Tim mentioned, our most frequent response in terms of industry was actually education, training, and library, which makes sense given the kind of pressure that we're feeling in education in terms of how to respond to this technology. And so in our qualitative data, we had a range of responses that I think is pretty characteristic and typical of the different types of approaches that we're seeing in responses to AI. One of them is the impulse to police and surveil, really relying on AI detection tools, or even stepping back from writing assignments until educators can figure out how to work around the technology to a certain degree. Another is a little bit more active: okay, here's this tool, it's here, the cat is out of the bag, as it were.

(19:44):

How can I begin to reframe my writing assignments to incorporate it into the writing process? And then we had an individual working in K-12, really at the tween, 12- and 13-year-old, age range. They were cautiously optimistic, I think, in terms of wanting to fold the tool into that age of students' writing process, but were still expressing concern about a loss of identity or voice. Like you said, Nolan, you know yourself. And so they were wondering if and how the tool might shape or reshape student identities at that age. So there was a range of what we're seeing there.

Paula Rosinski (20:26):

There was one other thread in the qualitative data, and it wasn't overwhelming, but we did get several responses using phrases like "ghoulish" and "soulless plagiarism," saying that AI had no role in professional or creative writing and that it would destroy those fields. Which is really kind of perfect, because we've seen that pattern with previous technologies: that it would lead to the destruction of writing, that writing will become soulless and that it's destructive of creative pursuits and professional ways of writing. And often the exact people who were making these claims would then be using it themselves. So it's this weird kind of tension where we do get this thread that we've seen before, and yet they're using it themselves.

Jessie L. Moore (21:11):

And that's actually an interesting segue to our next question for you. I did get a chance to sit in on one of your recent presentations, and there you talked about a shift in generative AI discussions in 2024, moving away from a focus on gen AI's impact on labor to a consideration of gen AI for automation versus augmentation. Could you briefly share what you mean by that automation versus augmentation delineation, and where alumni use of gen AI seems to land?

Paula Rosinski (21:45):

Early on, there was a lot of talk in the AI and labor scholarship about efficiency, how everybody would become more efficient, people would be more efficient, jobs would become more efficient. Then it started to shift, and you get more of this language about augmentation versus automation, which is a more sophisticated way of looking at it. The concern there became trying to figure out which jobs or tasks would be more likely to be augmented versus automated, and, not surprisingly, jobs that are more repetitive are more likely to be automated. There was a lot of projecting and forecasting about which might be automated, but also discussions in online spaces about employment and how people were really using AI to augment their writing. And it really came out that it was being used a lot in the labor force, in computer software development and in technical writing; there was some automation happening, but even more augmentation. So the idea is that it's not replacing, but altering and augmenting, our writing processes and professional writing spaces.

Travis Maynard (22:57):

And we have quantitative data that suggests augmentation of writing as opposed to automation. There was specifically a set of questions asking alumni their perceptions of whether AI has enhanced or diminished the quality of their writing, and whether the tool has simplified or complicated their writing process. Overwhelmingly, generative AI has simplified writing processes; however, the majority of our responses in terms of enhancing or diminishing were only "slightly enhanced." While the process is becoming more efficient, it's not to the point of completely replacing writing. There's still that aspect of revision, like I mentioned before. We have this quicker writing process, writing is a little bit better because of the tool, and we're seeing a moderate amount or a lot of revision of AI output. So we're seeing that, yes, it's making writing better, but not by much, and it's at a space where humans are still an active part of the writing process.

Nolan Schultheis (24:10):

What advice about using generative AI in writing would you give to students who are listening to the podcast?

Paula Rosinski (24:16):

I'm curious what my colleagues will say. We may have different ways of approaching this, but I would say use it, experiment, right? Get some free accounts, maybe invest in one paid one, and play around with it for personal uses, for professional uses, and, if acceptable in your classes according to your professor, in academic spaces, because you can't just read about it and you can't just talk about it; you have to experiment with it. We say a lot that we're in this weird moment: things are evolving, and we're trying to develop writing processes and writing pedagogies with AI as it evolves. And so students should play a role in that and be part of developing those new strategies. So don't be afraid of it; experiment with it appropriately. And there are a lot of great resources to help you get familiar with the basics. They may not be rhetorical, which is what we're especially interested in pursuing, but just get involved.

Travis Maynard (25:18):

I would double down on Paula's advice, but shift the audience to those in higher ed who are scared or skeptical or feeling very doomsday-ish about generative AI and what it may or may not do. Again, coming at it from that positive framing: don't interact with the tool to try and stump it or see what it can't do. Think about ways in which it can amplify or augment writing processes, like developing lesson plans; there are a lot of professional aspects of work in higher ed that this can augment and help. And in terms of students specifically, I would say always check with your faculty members first before you do anything, to see where, when, and how in the writing process you would be able to use generative AI. But really the most important thing is to dive in and see what you can do. Personally, I don't have an artistic bone in my body, but I have these wild visions in my head that generative AI can help me get out onto the screen, as it were. And so I see the comments about how this is going to kill creativity, but there's so much really bizarre, interesting, fascinating AI art that's emerging. So dive in and play around, ultimately.

Jessie L. Moore (26:42):

Travis, a quick follow up. What are your favorite tools for that artistic exploration?

Travis Maynard (26:49):

So I have a series of them that I've only used a little bit, because I'm not at the point of paying for all of these tools. ChatGPT, both freemium and premium, using the DALL-E image generation within it, has been the primary tool I've used. But there's also a generative AI video tool called Runway, and for voice and audio there's a platform called ElevenLabs. So there is really a wide range of tools that, A, are very expensive, so I don't use them very often, but ChatGPT is the one that I definitely rely on most often.

Tim Peeples (27:27):

And of course I'm going to double down on what they're saying, that you have to practice, but I'll frame it a little differently. I'll go back to Nolan's comment about trusting AI and connect it to our conception of AI as a partner or collaborator or interlocutor in the writing process. You need to build a relationship with this technology, which some people are going to find unsettling for me to say that way. But if you don't have a relationship with a peer who's reading and offering feedback on your writing, you're going to feel very uncomfortable working with that peer. The more you work with people and have them interacting in your writing process, the more comfortable you get, not in general with that kind of interaction, but specifically within certain relationships. And AI is a kind of tool that you need to build a relationship with and figure out how it works and how it fits into your own writing process.

(28:33):

So I would frame it that way: it's a matter of figuring out how it works. Now, I'm going to go back to something we talked about earlier. Paula had mentioned that automation is happening more with routinized, routine kinds of activities, and, I would say, with mundane activities, ones you don't find very meaningful. Interacting with gen AI from a point of meaningful writing is critical: you have to approach the writing that you're doing as a meaningful opportunity. If you don't, then you probably will automate the writing through gen AI, and it won't be meaningful, and you'll just be turning in something that basically somebody else wrote. I would assume most of the people who are listening to this podcast are going to be people who are engaged in writing in a meaningful way. And AI can be part of that process, as long as you're approaching it in that kind of meaningful way, and then it will augment your writing process rather than automate it, because it becomes a meaningful partner in your thinking and writing process.

Paula Rosinski (29:50):

Which has always been the goal in academia: to develop meaningful writing opportunities for students. So it's new, yet it's not; it's an extension of what we've always been trying to do. I would just add that that's why it's important for students to experiment with different AIs and come to kind of a functional understanding of them, so that they can reach the critical and the rhetorical. It's just kind of awkward; some people don't want to jump in, but you can't really be a member of those conversations, or build a relationship like Tim's talking about and use AI as a collaborator for your own writing, if you don't have some functional literacy to get you to the critical and rhetorical.

Travis Maynard (30:31):

I actually have a follow-up for Paula, because it's a trope that I've heard a few times and I would just like to hear some elaboration on it. Yes, build a relationship, but also don't humanize it. Is that still the conventional wisdom? Paula, can you tell us more?

Paula Rosinski (30:48):

I mean, it's built to trick us into thinking it's a person. They're all a bit different, but interfaces and software have always been built to trick us into thinking that they're naturalized, that we can't intervene and we can't critique their affordances and limitations: the way the cursor will just sit there like it's thinking after you ask a question, then put out some words, then hold off, and then give some other output. That's why we've tried in our own classes to give students a grounding in how AIs are built and how they came to be: how did we get here, where might they be going, the math behind it, to understand that it's not a person and that there are limitations. So maybe it is a bit of both. It's not a person, and we do have to be careful about not giving it human characteristics, but we build a relationship, just like with Word. We don't think Word is a human, but we do build a relationship with how we can use it responsibly to augment our writing. So again, a lot of what we said before about previous technologies applies; not to say that AI isn't more transformative and more powerful, but we can rely on some of those strategies. Did that answer your question, Travis?

Travis Maynard (32:03):

I think so.

Paula Rosinski (32:04):

Did you ask what your concern is, or...

Travis Maynard (32:08):

Well, it seems like there's a tension there: how do you build a relationship while insisting that it's not a person? And I know you're not advocating for that, but it makes me think of Data in Star Trek: The Next Generation, right? Not a person, definitely a thing, but you want to humanize it.

Jessie L. Moore (32:27):

And it's interesting. We never would've thought of Clippy as humanized, but some of the newer gen AIs, as Paula and Travis are both alluding to, are much more humanized. Even my gen AI training coach has a name and talks to me like a person. So that's an interesting reminder that even as we think about gen AI as an actor in relationship-rich education, we also have to recognize the limitations of its identity, and who's contributing to that identity as well.

Tim Peeples (33:06):

I'll add in here that yes, I agree with all of what's been said. However, there is also an element of trying to interact with it as if it is human, because we are actually finding that the less machine-oriented your prompts are, the better the results. If you're just asking it to pump out information and you're prompting it like a machine, it will give you machine-like information back. If you engage with it more like a relationship, I'll say in this context more like a human relationship, though I wouldn't think of it that way, I think of it as a machine relationship, and use almost the kind of interpersonal communication cues that we generate between one another, it can generate better information back to you, too. So it's a fine line. You have to balance that fine line for sure.

Paula Rosinski (34:17):

So it's really a new, evolved way of interacting. I mean, we wouldn't think about building that kind of relationship with Word or an Adobe product, but this technology does invite you in, and you can get to more effective and rhetorical ways of writing by approaching it in a humanized sort of way, while knowing that it's not human. The way it's built invites us to interact with it in different ways than previous technologies.

Jessie L. Moore (34:49):

I saw something just a couple of days ago asking whether using please and thank you in prompts made a difference, and it seemed a little inconclusive. But then they asked, I think it might've been ChatGPT, whether it would respond differently to please and thank yous, and the response was, "Not necessarily, but I appreciate it." So for what it's worth, we can build in those markers as well.

Paula Rosinski (35:17):

I think just last night or the night before, Sam Altman was saying they're always tinkering with the algorithm, that ChatGPT-4o has become too simpering, and they know people don't like that, so they're going to be working on it and will get back to us on the results of how they adjusted it. So that's something else going on: there are people continuing to adjust the algorithm so that it responds differently to different sorts of prompts and the niceties we would use in a human conversation.

Jessie L. Moore (35:48):

And you have already kind of taken us into this last question in some of your responses, but what recommendations would you share with universities and their faculty and staff as they support students' development of strategies for using generative AI in their writing?

Tim Peeples (36:04):

I'll jump in, because it matches up with one of the last things I was talking about, about routine. The things people are most fearful of AI automating and taking over are routine, mundane work, and that includes writing work. So what I would suggest to higher education, which we've been suggesting for years, is: if you are assigning writing, make it a meaningful assignment. Try to identify real audiences, real contexts. Try to be a faculty member who actually looks forward to reading whatever it is you've assigned. The more routine, the more mundane, the more replicable the activity, the more inauthentic and automatable it seems to be, and you're basically just turning students, or whoever you're working with, into non-human beings in a sense. AI makes that very obvious to us in the way it can now work in the writing process.

Paula Rosinski (37:25):

To add onto that: now is not the time to limit education in writing and rhetoric. It's more important than ever. It's always been important, but just because we have technology that can create output doesn't mean we don't need training, education, and experience in writing and rhetoric. It's even more important, because the processes and strategies are changing, with really important implications for privacy, ethics, and responsibility, for being responsible participants in our professional and civic conversations. So the stakes are even higher.

Travis Maynard (38:04):

To bridge Paula and Tim's comments: studying rhetoric and rhetorical thinking has always been important, but it's now potentially even more important within higher ed as we think about a technology that can generate slop, texts that are uninspiring. Part of what we're suggesting as a way forward is developing and teaching prompt engineering, and doing so from a rhetorical perspective, because even though generative AI is not human, it still acts like one. It is a rhetorical situation in and of itself. So we should help develop students' rhetorical thinking for completing an assignment at the metacognitive level. But then, at the level of cognition, you're still thinking about generative AI as an audience of sorts: a non-human audience, but an audience nonetheless. Being able to prompt effectively to get the results you want, which you can then reshape and revise to fit whatever rhetorical situation an assignment calls for, is something we really need to double down, triple down on.

Tim Peeples (39:19):

I'll add another piece to this. In several presentations we've done on this, we've gotten questions from people who are fearful that AI will remove the struggle of learning for students. One of the things I would highlight, and that we've highlighted in many other contexts, is that we need to seriously ask ourselves: what is it that we want students to struggle with? And then make sure we create contexts where they are doing purposeful struggle. Learning is a struggle; you need to struggle through the learning process. We need to make sure we're creating educational contexts, and I mean that from the broadest level down to the very specific classroom level, where we are highlighting the struggles that are important, helping students work through them, and showing how they are valuable. At the same time, AI, like any technology, removes some kinds of struggle from our lives.

(40:35):

My dishwasher washes my dishes for me; I don't have to do them by hand. That removes a struggle. My washing machine washes my clothes; that removes a struggle. My car gets me from one place to another way faster than walking or riding a bike; that removes a struggle. What does that allow me to do? What other struggle does it afford me to engage in? What affordances does AI offer that allow us to struggle in new spaces, in new ways? And how do we create new contexts that let us really work in those new spaces it opens up for us? Those are some real educational challenges for us. AI makes us rethink what's uniquely human and what's not, and it puts us in a new space of thinking about struggle, learning, work, and engagement with one another.

Paula Rosinski (41:38):

I like that, Tim: that AI makes us rethink all those considerations, and maybe also makes us rethink what writing is, how we use it, how it's best taught, and its ethical, social, and cultural implications, just as previous technologies made us rethink them, but in a deeper, more expansive sort of way.

Jessie L. Moore (41:59):

I also appreciate the through lines I'm hearing with some of our previous interviews this season: a lot of you who are thinking really strategically and systematically about AI and education are highlighting that it comes back to us as educators. What is it that we want students to learn? What is the purpose of the activities we assign? There may be a role for gen AI in those activities, but then we also need to be transparent with students about what our goals are, what they can do with gen AI as they're working toward those goals, and where they might need to embrace the struggle a bit and embrace the learning opportunity.

Nolan Schultheis (42:39):

Thank you, all three of you, for coming in and having this conversation. I think AI is pretty interesting, and with a technology like this being so new, it's cool to just watch it grow.

Jessie L. Moore (42:53):

Let me echo that. Thank you to all three of you. We appreciate you taking time to visit with us and just appreciate the work you're doing as well. Thanks.

Tim Peeples (43:01):

Thanks for inviting us. Good to meet you, Nolan. Thank you all very much. Nice to meet you all.

Jessie L. Moore (43:08):

So Nolan, what stood out to you in our conversation today that you think students should think about?

Nolan Schultheis (43:14):

I really liked the idea of a mutual relationship. I was blanking on the word, trying to think of the term from nature: a symbiotic relationship. I like how it seems you can't fully rely on it, but you also have to work with it. I think the example given in our questions was augmentation versus automation. That's a very interesting point, because the way I interact with AI is more augmentation as well. I don't really use it for anything absolute. And I think that's something important for people to keep in the back of their heads when they're interacting with AI: as much as it seems like a magical tool, in a writing context it ultimately is still not as absolute as whatever you could come up with in your own mind.

Jessie L. Moore (44:17):

Absolutely, and I appreciate the reminder to faculty and staff that to really move toward that augmentation, we need to be giving meaningful assignments, and we need to be teaching students how to use generative AI in ways that do augment their own abilities and their own processes, so that they can engage ethically and effectively in gen AI use. Because it's here, and we're going to be living with it and working with it. It is, as Professor Peeples said, another actor in our relationship-rich education. So we do need to give students a chance to practice with it, we need to practice with it ourselves, and then we need to think about how we're engaging students with meaningful tasks so they can really see it as another tool in their toolkit. I also appreciated the reminder that we need to decide what we want students to work on, what our goals are for their projects, and where it might be appropriate for students to struggle a little as they're trying to learn something new, versus where we might use gen AI to take some of the struggle away if we have learning goals different from that immediate struggle point.

(45:32):

So again, it's that thinking about what's meaningful, what we're focusing on, what our goals are, and how gen AI might be used to augment the activity in that context. This was a fun conversation, so I appreciate our guests' time. Once again, I'm Jessie Moore.

Nolan Schultheis (45:54):

And I'm Nolan Schultheis. Thank you for joining us for Making College Worth It from Elon University's Center for Engaged Learning.

Jessie L. Moore (46:00):

To learn more about artificial intelligence and engaged learning, see our show notes and other resources at www.CenterForEngagedLearning.org. Subscribe to our show wherever you listen to podcasts for more strategies on making college worth it.