Making College "Worth It"

Navigating Generative AI in Higher Education

Episode Summary

In this episode, Nolan and Jessie explore how generative AI is influencing current writing pedagogies. We speak with Dr. Mandy Olejnik, Assistant Director of Writing Across the Curriculum at Miami University of Ohio. Our conversation focuses on the impact GenAI has had on the assignment creation process.

Episode Notes

See our extended episode notes at https://www.centerforengagedlearning.org/navigating-generative-ai-in-higher-education/

In this episode, Nolan and Jessie speak with Dr. Mandy Olejnik, the Assistant Director of Writing Across the Curriculum at Miami University of Ohio. Dr. Olejnik has been monitoring generative AI and its use for the instruction and practice of writing in higher education classrooms. Listen in to learn more about how AI is affecting writing in higher education! 

This episode is co-hosted by Jessie L. Moore, Director of Elon University’s Center for Engaged Learning, and Nolan Schultheis, a third-year student at Elon University, studying Psychology with an interest in law. Nolan Schultheis also edited the episode. Making College “Worth It” is produced by Elon University’s Center for Engaged Learning.

Episode art was created by Nolan Schultheis and Jennie Goforth.

Funky Percussions is by Denys Kyshchuk (@audiocoffeemusic) – https://www.audiocoffee.net/. Soft Beat is by ComaStudio.

 

Episode Transcription

Nolan Schultheis (00:07):

Welcome to Making College Worth It, the show that examines engaged learning activities that increase the value of college experiences.

Jessie L. Moore (00:14):

In each episode, we share research from Elon University's Center for Engaged Learning and our international network of scholars. We explore engaged learning activities that recent college graduates associate with their financial and time commitment to college being worthwhile.

Nolan Schultheis (00:29):

I'm Nolan Schultheis, a third year student at Elon University, studying psychology with an interest in law. I'm the Center for Engaged Learning's Podcast producer and a legal profession scholar.

Jessie L. Moore (00:38):

And I'm Jessie Moore, director of Elon's Center for Engaged Learning and a Professor of Professional Writing and Rhetoric.

Nolan Schultheis (00:44):

In this episode, we'll explore strategies for teaching with generative artificial intelligence, more commonly known as GenAI, to support student learning. We'll talk with Mandy Olejnik, the Assistant Director of Writing Across the Curriculum at Miami University's Howe Center for Writing Excellence in Ohio, where she supports disciplinary faculty and graduate instructors in their teaching of writing. She also leads all AI faculty development programming at the center, including the AI-Informed Writing Pedagogy certificate. Let's meet our guest.

Mandy Olejnik (01:21):

Hi everybody. My name is Mandy Olejnik. I use she/her pronouns. I'm Assistant Director of Writing Across the Curriculum at Miami University in Ohio, not Florida. I know there are two Miamis; everyone gets a little confused. I work at the Howe Center for Writing Excellence, where we are fortunate to have a fully fledged writing across the curriculum program and a writing center that works with students, whereas in the WAC program, we work a lot with faculty and instructors on teaching writing. I've been studying and thinking about AI since early 2023 when, like the rest of the higher education realm, we were suddenly dealt all these questions around AI as generative AI like ChatGPT suddenly became more widespread than it was before. In my context, working in an office where I work with faculty all the time, they were having a lot of questions.

(02:07):

They were saying, what's this new ChatGPT thing? How does this impact the writing that we do? And so I decided to learn more about it, and it started at first more exploratory. I spent a lot of time playing with these tools. What is this? How does this relate to other tools that we've seen come through, like the computer and spell check and things like that? But the more I thought about it, the more interested I became in this idea of how the existence of these tools, and our uptake of them, changes how we actually write and how we learn how to write. And those have always been the kinds of questions that I care about as a person who works within writing studies. Before we had AI, it was more of how do we teach writing in the classroom and what are the kinds of thinking that we have to encourage students to do? And now it might be how do we encourage students to still do that thinking on their own? What might these tools allow? What might they challenge? And how do we work with these tools to still, at the core of it, get to what we want students to be able to learn and do, and how do we create assignments and course contexts to help them do that? So that's the realm that I've been in and the questions that I've been asking a lot lately.

Jessie L. Moore (03:06):

And you've touched on this a little bit in your introduction, but I know you've led efforts at Miami of Ohio and in the Ohio college teaching consortia to support teaching with AI to promote student learning. And I'm wondering, what are some of the key principles or concepts that guide how you approach teaching with AI?

Mandy Olejnik (03:27):

That's a great question, and I think the fact that you mentioned principles is really important, because that's actually the core of everything that we do at the Howe Center. What I've noticed in all the responses to AI that we've seen in our field across higher ed is this big focus on reactions. Oh no, these tools are here. What do we do about them? We're going to have to change everything. How do we get students to not cheat? And instead, I like to approach it with what are our core principles about what we know about teaching, about learning, and what we want students to be able to do in our field. And so we've been trying to go for, I don't want to say more abstract, but I think a longer, bigger view of: before we start talking about what AI is and how it might impact your pedagogy, let's talk about what you want your students to do, and let's talk about then what these tools are.

(04:12):

Let's actually learn more about them. Let's play with them. Or if you don't want to play with them, I respect everybody's agency to opt out of these tools. Here's some examples. What does that make you think of? How does it implicate your assignments? And then we go through and think at the more granular level. How does this relate to your Bio 102 class or your Theater 328? And what we've been doing a lot through our work at the Howe is really trying to help people become AI informed. So whether you like it or whether you don't like it, you should at least be informed about it, because it's becoming an area that is really cutting across all of our disciplines, and it's really becoming elemental in terms of how we operate and how students are going to be operating. Just as we might say that writing is elemental, it's infused into everything.

(04:56):

I would argue that AI is starting to become that as well. And we're seeing pressures from a lot of different places: from the government, from our own state institutions, even our own institutions might be pushing us to go in a certain way. So it can be helpful to know what's actually going on, but also take stock of how you feel about it. I think there's so much affect wrapped up in AI too. There's the affect of how this is going to implode my courses, how this is going to completely eliminate my discipline. It's really interesting. We have a team right now in our faculty fellows program that's from computers and information technology. Their entire program is about teaching students how to code. And they're like, well, AI is here. You can plug in something and it can do it. That how-to, skill-based learning is not entirely there.

(05:40):

What is it that we're actually trying to teach, then? What are we actually trying to do? And so I think a lot of disciplines are going to be impacted, some in ways that are more visible than others, but I think the point is trying to help us think through what this means, how to get ahead of it, how to be more proactive and not just keep on reacting. What happens if you say, this is what I value, this is how I feel, here's how I'm going to be trying to address this, trying to make writing in my field, whatever my field is, more at the forefront of that human engaged learning and thinking?

Jessie L. Moore (06:09):

I really appreciate the agency that gives the faculty that you're working with. I was just in a conversation this morning with some folks who were noting that they wish there was more space for faculty to also opt out, but to have a little bit of guidance on the how and why they are opting out and how to communicate that to students. And at the same time, I hear threads in your response of what we've heard from a few prior guests about also thinking about what we really want students to learn and be able to know and do, and how AI might be a facilitator to that. One of our guests said something about thinking about where we want the messiness to be, where we want the messiness of learning to happen, and whether there are ways that AI might take away some of the things that are incidental to that so that we can focus on that learning moment. So really fun to hear you share your philosophy and the principles that are guiding your work.

Nolan Schultheis (07:11):

So I was really focused on what you had said about focusing on what the students are doing and then looking at the tools. From my perspective, obviously having grown up without having AI in the classroom nearly as much, I think it fully came around when I was maybe a junior or senior in high school, so I didn't have as much applied use with it in lower education as kids do now. I think it's really interesting, though, this point about the what, because the way I use AI is as a tool, right? Because I didn't have it, so I'm not going to use it to do all the work for me. I use it the way it's meant to be used. I'm thinking that with the way AI has meandered its way into education, the substance and what teachers are looking for is going to have to change no matter what, point blank, because the way you used to test people's knowledge in the past is not really as effective anymore, simply because this tool can circumvent so much of a thought process.

(08:18):

You look at writing specifically in your context. I mean, a plan and an outline are among the very foundations of writing, and they help you get the gears going even about what you want to write. But if you have a machine do that for you, all of that applied and background thinking that we don't really register in our heads is missing, and that is therefore changing, honestly, a vast majority of curriculum in the future, is what I would argue.

Mandy Olejnik (08:48):

You're absolutely right, Nolan, and I think what you identified there is what are you actually asking students to do? If you're asking people to do recall, write this fact, make something, AI can do that, and it's gotten better and better. It can now connect to the internet. At first we were like, oh, it can't actually search the internet, we're safe. And then it turns out actually it can. Oh, it can't cite sources. It can do that too, depending on the model that you have, the program, whether you have a paid version or not. And I think what we're seeing is this real deep dive where we really have to pay attention to what our assignments are asking students to do. Those kinds of things like memory recall, getting a right answer, if that's more easily outsourced to something like AI, where's the real fun and exciting stuff? To go back to Jessie's point, where's the messiness in learning?

(09:32):

Where's the focus on process, to say, when we're trying to create meaning, when we're trying to communicate with other people, sometimes we just have to wade around in it for a while? We have to draft out five different ideas, see which one we like, test it out on somebody, get somebody else's opinion, because writing is also a social thing that we build together. I see a lot of faculty really thinking through those questions of, well, what am I actually trying to get my students to do, if you're preparing them for a certain field or a certain job that they're going to have one day. I love to tell people, you can't take ChatGPT with you to a job interview, right? I can't say, hold on, Nolan and Jessie, let me just consult with ChatGPT, and then I'm going to read from it what it wants me to tell you.

(10:12):

I have to be able to have that live recall, and when we still have face-to-face work contexts, those types of skills are still really important. And it can be helpful to articulate that both for yourself as an instructor when you're trying to figure out what you're going to be assigning, but also to students, to help them know, here's why we're going to be learning this, here's why it's really important. And I see the conversation around AI really bringing those questions to bear in a way that maybe wasn't as urgent three years ago before AI was here, but now we really have to look at it and consider what we are doing and how it is going to be helping students and benefiting them in the future.

Nolan Schultheis (10:45):

In your Inside Higher Education piece, you note that writing is social and rhetorical, that writers benefit from talking about their writing and sharing drafts with other writers. How might generative AI be part of that social process and do you think generative AI primarily functions as a yes man that might be detrimental to the writing process or can it still give constructive feedback?

Mandy Olejnik (11:08):

So to think about this, if we want to understand artificial intelligence as a tool, I do think that it could have one place in a broader network of support for your writing. So if we think about it as a tool or a resource: typically in a writing classroom, we get feedback from a professor, we get feedback from peers, from each other. You could add ChatGPT or Google Gemini in there as a third layer. Maybe it is 11 o'clock at night, you're working on your paper, the writing center's closed, and I hope your faculty members are not answering emails at that time of night. You want to run an idea by someone. What if you asked Gemini, here's what I'm thinking for my paper, right here is this idea, I'm not sure if it works. Can you help me figure out the right tone to blend?

(11:50):

And then maybe the next day you go into the writing center and you talk about it, and then you get feedback from your professor, in a system like that where you have different places to get feedback. And also, I think it's about the way that you prompt these tools. So if you're not just asking it to write it for you, if you're not focused on the product but more on the process, and if you're thinking about writing in these really careful ways, what is my tone, what is my evidence, what is my sentence structure, how can I improve this, then I think it can produce some helpful feedback. But I don't think that should be a replacement for any of the human-centered practices that we've known for decades in our field are actually really helpful for learning and for writing. All that said, I think it should be an option.

(12:32):

Personally, my principles would tell me, I don't think we should require anybody to use these tools. I think there are a lot of important environmental and ethical issues that we have to keep in mind. You shouldn't feel like you're forced to be complicit in that system, so it should be an option. It should say, consult this if you want. If you do, here are some guidelines. If you don't want to, that's fine. Talk to each other. Talk to me. We've known for a long time how to write without the assistance of these AI tools. The question about whether AI is more of a yes man is really interesting when you break it down and think about how AI works. It is not a person. It is just predicting patterns of what should come next. It is trying to get a sense from your prompts of what you want to hear, and in that way I do think that it can be a yes man.

(13:16):

I've actually practiced with it, because part of what I try to do is know how these tools work, and I test various things. I've tested it and I've tried to have it help me do things that are incorrect, and it'll eventually get there. It'll say, that's an interesting approach. While it's not the traditional way to do X, Y, Z, sure, let's do this. And you can actually kind of trick it into doing what you want, which gets you thinking about the overall assessment of it. Is it accurate? Is it biased? There are all these nuances that we have to consider, but I do think so much of what you get from these tools depends on how you prompt them. And I think if you're clear and you say, I really want to get a sense of how other people have done this, what is typical in a field?

(13:58):

For example, does this match? Is this different? How do I do that? You can prompt it. But I do think, since it comes from a company that is trying to embed these tools as assistants to make your life easier, there is that sort of customer service feel. I think it's gotten better, but at first ChatGPT was so upfront. It was just like, how can I help you today? Want me to help you get that writing done? And it felt very customer servicey. I think it's toned down a little bit in the fifth version at this point, but it's still there, and we have to really tread carefully in how we use those.

Jessie L. Moore (14:30):

I appreciate you calling attention to the intent behind the programs, that they're not neutral programs, and so thinking about how we prompt them, the output that they provide, and then the critical thinking of also assessing what they give us. And so there's definitely a component there, both on the front end and the results end, of needing to have a sense of what we're looking for and what our goal is, so that we can prompt it effectively and then assess what it gives us as well.

Nolan Schultheis (15:04):

So you touched on this a little bit, but I liked how you mentioned in the Inside Higher Ed piece that in your first year writing courses, it's not so much about the material as it is the thinking process that's important. And do you think that a coalescence of both brain and computer thinking is the direction brainstorming is heading?

Mandy Olejnik (15:23):

That is an intriguing question, because when you think about what brainstorming is, I think it depends on how you're defining it. So are you thinking about brainstorming as just a means to the end of the final product, or are you actually valuing the process of what it means to stop, take in, reflect, think about who's my audience, what is my context, what argument am I trying to make, what approaches and strategies do I have? I think there's value in sitting in that messy place of actually figuring out what is in front of me. And I also think that's a skill that we face every day, all the time. Even as a homeowner, I'm getting work done on my house right now, and I have to think through what I want the end result to be, but also what I feel about the house.

(16:07):

What color do I actually want it? What kind of paint do I actually want? Seemingly mundane things. So I think there is value in me being able to connect with myself, and it's not just, well, these three colors look great, I'm going with it. It's still a broader system of being able to have agency in that. And for me personally, I want my students to be able to have that kind of agency and really connect with and think through, here's how you approach problems, how we're going to approach a problem. And sure, these tools might be a way to help with that, but I again, wouldn't want it to replace that. And I think there's a difference between I'm going to use this tool to help me think it is alongside me. I'm asking questions, it's helping me refine them, but I'm still experiencing that.

(16:47):

And then there's the other of, well, this helped me get my paper topic much quicker and now I can move on and do something else. I don't think that the latter is as productive. And so it's all about balance, and I think we're still figuring this out. This is kind of the exciting part about this work: we're now in a place where these tools are more readily available. So what does writing pedagogy look like when it's assisted with GenAI? How do we teach that intentionally, with the prompting, but also still retaining the core values that we have in thinking and writing as a means of learning, not just as a thing that you do to check off a box? I think those are the big questions that we're still thinking through as a field.

Nolan Schultheis (17:23):

So often you see the jokes online around AI, the, oh, just let me ask AI that, and it's like a super simple question. What you're saying is kind of leading me to believe that that might become a more common thing in the future. You had said that the human process is valuable. Do you think that AI is going to completely destroy hypothetical and rhetorical thinking because people will just turn to it to go through the process for them to get to the end result?

Mandy Olejnik (17:52):

I think if we're not careful, AI can definitely complicate that process. I don't think we're there yet. I don't think AI is going to take over right now, but I do think we need to be really intentional, not only with how and what we're teaching, but also with why this is still important. I mentioned before, we need to be able to utilize these skills in non-product-focused settings. Right now our product is going to be a podcast, but the content is the things that I'm saying. The process is us talking and listening to each other. I see Nolan's writing some notes, thinking about things to say. We need that still. And I don't think AI could completely do that. I know there are programs, like I'm sure NotebookLM can make a great podcast episode, but it's not going to be the same.

(18:36):

It's not going to be as responsive. It's not going to be as authentic. I think authenticity is another really interesting concept, because computer programs just don't have that. They're not human. We are humans. We can talk to other humans. We recognize what feels human, what feels genuine, and I just think, personally, we're always going to need that, always going to have that. I would hope that that doesn't lessen the quality of what we're doing in the future, but I could see, if assignments and activities aren't scaffolded appropriately, there might be some steps that can be skipped that might interfere with that. But I think we're always going to need it. I'm personally not worried that we're going to be out of jobs. I think writing is still going to be really core, and we still need humans to help guide us through these questions.

Jessie L. Moore (19:19):

And I want to interject very briefly that Nolan has referenced your Inside Higher Education article now twice. We will link to that in the show notes so that others can read it. It's just a nice thought-provoking piece that also responds to some of the concerns that others have raised and offers some of this, well, this will sound biased, but levelheaded thinking that you're offering now, that we still need human interaction and human agency and authenticity. So we will link to that for others who are interested. And you've touched on this a little bit, but I wondered if you could share what recommendations you would give universities and their faculty and staff as they think about supporting students' development of strategies for using generative AI in their writing.

Mandy Olejnik (20:12):

So one thing that immediately comes to mind is I think it's helpful for all institutions to try to get a sense of what's happening with AI across the institution. Like, take a temperature check. I've actually been involved with our provost's office here at Miami to conduct student and faculty surveys to try to get a sense of: are students using these tools? Which ones? How are they using them? What are their feelings about it? The same with faculty. And the sense that we're getting is that there's a lot of nuance. It can be a little polarizing. There are some people who love it, and some people who really hate it and are saying very passionate things about how it's like the destruction of mankind. So our students and our faculty are going to be at different places, and I think it helps to have a sense of where they are and also why they might be resistant.

(20:53):

I think there can be some misconceptions that people who opt out of AI only do so because they're Luddites or because they don't like technology, when I think people have very valid reasons for saying, I don't want to participate in this. And I also think people who embrace these tools aren't automatically trying to skip steps and cheat. I think students especially can really understand the nuance, think about it critically, and compare it. So I would encourage any institution to really get a handle on what's happening, whether that's talking to your faculty or working through any kind of AI task force group you have. I would also recommend that institutions really bring all different perspectives into this. I think it's really important to hear from our students what they want, not just try to guess and think what they might want, which is why I think a center like yours, which is really trying to bring students into this process, is doing that kind of great work and thinking.

(21:43):

I think we need to hear from the students directly: what does AI mean to you? What kinds of courses and lessons would you want, before you start mandating any kind of AI course? Because there are also logistical reasons why we have to be careful about what we do there. And I would also say to really lean in to our disciplinary faculty in thinking through how AI functions in their disciplines. AI is not a monolith. Just like writing, it is very discipline-specific in terms of what we do, how we do it, what it looks like. I think AI use is as well, even if you want to break it up into what humanities folks might think about it versus more technical fields. I think people are going to need different things. And I also think AI might look different at different levels. So in your early-level classes versus when you're getting into the majors, and what about graduate students?

(22:27):

That's a whole other population of learners who have different levels of expertise, different needs, different professional goals. I think it's important to really think through all of these different nuances and bring in our students, bring in our faculty, bring in our librarians, who are also doing really incredible work. At Miami, our librarians are on top of it. They've been researching AI, and they've been creating lesson plans that faculty can put into their classes. All of us together, I think, are needed to really fully understand the implications of this and to make more informed choices that are based on the perspectives of all those involved, as opposed to just the university perspective.

Nolan Schultheis (23:02):

What techniques would you recommend to students when using generative AI to support their writing? And should it be kept strictly as a foundational tool, as you had mentioned, or could it be used for other potential roles in the writing process?

Mandy Olejnik (23:15):

I would encourage students to think really carefully about why you might want to use AI, or also whether you even need to. Again, when you're thinking about it for schoolwork, not every assignment is as concerned about this more polished prose. I know a lot of folks have discussion posts, and they have reading reflections, and they have, give me your opinion, your idea about this. Is the goal of that really that you have the most perfectly thought out, well-arranged sentence, or is it, here is my thinking as a reader, this is my reaction, here's where I'm going? If that's the case, maybe you don't really need to use these tools. Why would you want to use these tools? I think try to check your gut impulse. If you're worried that you're not going to be doing it well enough, maybe sit with that a little bit and say, well, what does well enough look like?

(24:00):

What is the goal of this assignment? How have you been doing in the course so far? Because I worry a little bit, from what I've been hearing from students and seeing in the survey that we've given out at Miami, that there might be this insecurity that can come around with it, of feeling like, well, I know it's probably not good, but I still consult with these tools. I don't trust that my writing is good enough on its own. Or, it looks so much prettier when ChatGPT writes it, when maybe that's not even the purpose. And what does that even mean, having clear communication? You can write really great stuff even if it's not as polished as what a tool would make. And again, that might not even be the point. So I would just caution students to think through why you want to use these tools.

(24:38):

If you're using it because you're inquisitive and you're like, I wonder what this would say, then also be critical with it, right? It's not correct all the time. It doesn't know all the same things that you've been learning about in your classes. Take it with a grain of salt, right? Maybe compare it to something else. Maybe compare it to feedback you've gotten elsewhere. Again, go to a writing center, go to a peer and say, hey, can you take a look at this? How does this feel to you as a reader? It's good to really triangulate in that way all the different feedback that you're getting. And I also think it's just helpful to check in with your instructor on what their AI policy is. I know at Miami, different instructors have different policies, and sometimes it might be listed in the syllabus, and they might not even explicitly say what it is, but typically in a course, you are held to that policy. That's like your contract.

(25:23):

So be clear with what and how you're going to be using it. And don't be afraid to ask follow-up questions, say, Hey, what if I use it in this way versus that way? And that might even invite a good discussion with your class. How do we feel about these tools? How can or should we use it for this project? I think on both sides, for students and faculty, having a conversation thinking through the why of using this tool or not can be really helpful to help you both accomplish the goals of your assignment, but also just have a better sense for yourself how to use these tools, how to have a process, how this tool might or might not be part of that process.

Nolan Schultheis (25:55):

The mention of the discussion post is pretty funny to me, because I have had this exact experience in class before, where I see, just from being behind someone, them using AI for something that's a personal opinion, and that just boggles my mind. I don't understand the thought process behind wanting a machine to give your opinion for you, but I guess I'm aware that it happens, and I don't know exactly what needs to be done in the future to try and sway students away from that, but it does seem to happen. It's a weird thing that people are letting machines decide their opinions for them at this point.

Mandy Olejnik (26:37):

And I think I can understand it, to some extent. If you're worried about getting a good grade, if you don't believe in what you have to say, if you want to look a certain way, I see the temptation. But I also think, from the instructor viewpoint, it's my job to help students know it's okay for you to be quote-unquote wrong. You can just have an idea. And I also try to experiment. What if they're not typing something? What if it's just, open up Canvas, start recording yourself, give me two minutes, however messy you want it to be. Tell me what your thoughts were, what your ideas are, how I can help you as we move forward with this. And it feels lower stakes, because you're not writing a perfect response, but it's also more authentic, I think. And sometimes I'll also respond in the same way.

(27:18):

I'll respond and say, Hey, Nolan, it was really great hearing about your paper. One thing I'm wondering about: what if you do X, Y, Z? And sometimes just having that connection with each other can be really useful. And I think sometimes the idea of a written product can feel really intimidating. If we can try to change that conception, if we can make it feel less imposing and more inviting, I think that's the work that we're all trying to do. And again, AI is really bringing this to light in different ways. We've talked about this before AI, but now I think it's even more important to reference it and try to support faculty in thinking through that.

Jessie L. Moore (27:52):

One of the themes that I'm hearing in your responses, which I also appreciate, is being explicit about our goals for assignments and our goals for activities, so that, just as we're making sense of what we want students to be able to know and do, we're articulating to our students what those decisions have been and why, what we're trying to achieve, and what the goal is of the tasks and the learning experiences. And I also really appreciate your nod to feedback culture; thinking about AI within that context is really helpful as well. This has been a great conversation. Is there anything else that you would like to share about the work you're doing to help others understand and think about how to use AI in their writing, or in teaching and learning more broadly?

Mandy Olejnik (28:38):

One thing that I always try to reinforce with the faculty I'm working with is: cut yourself some slack. I just want to acknowledge upfront that for many of us, artificial intelligence was not what we learned about in grad school. It is not an area of expertise for us; there's a whole field that deals with that. So for so many people, this is all brand new. You're worrying about this on top of teaching your classes, on top of trying to figure out your research, all these other things. I think it's important for all of us to be able to say, I'm trying my best, I'm learning some things. And it also takes time. You might start with a really careful policy that you create, and you might workshop it with some students. Maybe the next semester you change it; maybe three semesters from now you're writing policies for specific assignments, after you've seen how it unfolds.

(29:23):

And for students too, as you're learning about these tools and trying to figure out how they work, don't put too much pressure on yourself to automatically get it right, to have a great, amazing paper. You're learning, we're all learning, and we're all trying to make meaning, and that takes time and nuance, and it takes mess. I think the messiness is actually the best way that we can see and learn. Again, in the age of AI, it is really easy to just focus on the product. So I would just say that we should all be very kind to ourselves and say that we're learning and we're trying. If you're listening to this podcast, clearly you are learning about AI and thinking about it. That is great. Pat yourself on the back; that's going to help you in the long run. Don't be worried if you don't have a perfect solution yet. I don't think any of us do. I think we're all just doing our best, trying to be honest and transparent, and that's all we can do.

Jessie L. Moore (30:07):

Thanks so much for those reminders. It's really putting the human first as well. So thank you. And thank you for your time with us today. We've really enjoyed the conversation. Nolan, is there anything else you wanted to ask before we close?

Nolan Schultheis (30:20):

Not so much ask, just agree with you. You had said something about how you don't think AI is going to steal jobs, specifically in terms of writing. And what I thought, based off of what you said, is that there's just soul to writing. There's soul to everything humans produce, and that's something a robot just can't actualize. So I thought that was just good to hear from someone else. As much as we can make the robots fancy, it's still a robot at the end of the day.

Mandy Olejnik (30:51):

Right, right. You got it. That human heart and soul, I think, is part of what we're all going for.

Jessie L. Moore (30:56):

Well, thank you for the conversation today. We really appreciate you taking time to visit with us, and we look forward to sharing this discussion with our listeners. So thank you.

Mandy Olejnik (31:06):

Thank you. Thank you so much for having me.

Jessie L. Moore (31:16):

Nolan, what stood out to you that you think students should think about?

Nolan Schultheis (31:21):

A lot of the common theme that I was seeing throughout what we were told was that AI should not be regarded as a one-stop-shop solution, in the sense that it should be used more as a tool and an assistant, as opposed to the final thing you're going to do. I know Dr. Olejnik said you can still use AI: if you're working on your paper late at night and you don't have someone to bounce ideas off of, that's still fine. But don't rely on it as the only source you're getting ideas from. Use it, get a general sense of what you want to have going into the next day, and then speak with an actual person, because you can't rely on this thing as a total tool; it's just not that developed yet. And there will always be more value in human-to-human interaction.

Jessie L. Moore (32:16):

And we heard that emphasis on authenticity, what you call the soul, the heart and soul that people bring to their work and activity, and that when we're thinking about feedback, AI can be part of that feedback culture. But as you're reiterating, we need the people involved too, whether those are writing center consultants or other peers who are supporting our writing process, or the faculty and staff who have given us projects to work on; we should still be connecting with those humans in the system. Now, I also appreciated the reminder that not everybody needs to use AI. There are very valid reasons for not wanting to, or for opting to take a different route. It is a tool, and there are other tools and resources that we can also draw on. And sometimes, if we want to prioritize learning, then AI might not be the right mechanism. Sometimes we just need to grapple with it ourselves and work through that learning process. So it's about having some agency to decide, but also resources to facilitate that decision of when to use AI, when not to, and why.

Nolan Schultheis (33:34):

Yeah, just to add on to that, one final thing I was thinking about was how important it is to look at the process and the ultimate goal of an assignment, especially with how prevalent AI is now. I know I had mentioned that I think AI is kind of rewriting the way assignments have to be made, because of the very way assignments were structured initially. You go off of prior knowledge, and you need to build and you need to think, and it's that thinking process that has been the way people have been tested their whole lives. But now we have a machine that does the thinking process for us, and we're just giving an answer to get the credit. Ultimately, that's not going to really teach either the educators or the students anything. So the process by which the information is found, and the end goal, need to be changed and really scrutinized, especially with the way AI is developing.

Jessie L. Moore (34:37):

Absolutely, and with that attention to what we're trying to learn, what learning we're trying to support through our teaching, and then adjusting appropriately. Once again, I'm Jessie Moore.

Nolan Schultheis (34:57):

And I'm Nolan Schultheis. Thank you for joining us for Making College Worth It from Elon University's Center for Engaged Learning.

Jessie L. Moore (35:03):

To learn more about artificial intelligence and engaged learning, see our show notes and other resources at www.CenterForEngagedLearning.org. Subscribe to our show wherever you listen to podcasts for more strategies on making college worth it.