Making College "Worth It"

AI and Engaged Learning in Higher Ed

Episode Summary

Amanda Sturgill shares insights from her ongoing exploration of generative artificial intelligence (GenAI) and engaged learning in higher education.

Episode Notes

See our full episode notes at https://www.centerforengagedlearning.org/ai-and-engaged-learning-in-higher-education/.

In this episode, Amanda Sturgill shares insights from her ongoing exploration of generative artificial intelligence (GenAI) and engaged learning in higher education. Dr. Sturgill is the 2024-2026 Center for Engaged Learning Scholar and an associate professor of journalism at Elon University. We discuss the potential benefits of integrating GenAI into higher education teaching and learning activities, as well as tips for students, faculty, and staff who are navigating this quickly evolving technology.

This episode is co-hosted by Jessie L. Moore, Director of Elon University’s Center for Engaged Learning, and Nolan Schultheis, a second-year student at Elon University, studying Psychology with an interest in law. Nolan Schultheis also edited the episode. Making College “Worth It” is produced by Elon University’s Center for Engaged Learning.

Episode Transcription

Nolan Schultheis (00:05):

Welcome to Making College Worth It, the show that examines engaged learning activities that increase the value of college experiences.

Jessie L. Moore (00:11):

In each episode, we share research from Elon University's Center for Engaged Learning and our international network of scholars. We explore engaged learning activities that recent college graduates associate with their financial and time commitment to college being worthwhile.

Nolan Schultheis (00:26):

I'm Nolan Schultheis, a second-year student at Elon University, studying psychology with an interest in law. I'm the Center for Engaged Learning's podcast producer and a legal profession scholar.

Jessie L. Moore (00:36):

And I'm Jessie Moore, director of Elon University's Center for Engaged Learning and a professor of Professional Writing and Rhetoric.

Nolan Schultheis (00:42):

In this episode, we'll explore the integration of artificial intelligence and engaged learning. We'll talk with Amanda Sturgill, associate professor of journalism at Elon University and the Center for Engaged Learning Scholar, focusing on the intersection of artificial intelligence and engaged learning in higher education. Let's meet our guest.

Amanda Sturgill (01:06):

So hi, my name's Amanda Sturgill and I'm an associate professor at Elon where I teach courses in journalism and media analytics and used to teach in our interactive media graduate program as well as the university core. My first research area and my major area in graduate school was the intersection of new communication technologies and different parts of society. So that would probably be where I got the most interested in this new technology and how it's affecting the educational space.

Jessie L. Moore (01:34):

Obviously AI in general is a really hot topic right now. What most intrigues you about AI in engaged learning activities?

Amanda Sturgill (01:44):

So I think AI has a number of intersections with engaged learning activities, but I also think that it is a very disruptive technology. Because of that, how it gets rolled into universities is a really interesting problem: it's encountering the glacial pace with which most universities make change, and the very collaborative decision making that happens at universities, where people have a whole lot of autonomy and we have a lot of wonderful different voices to consider. It's particularly interesting since AI has been such a buzzy topic in mainstream media coverage, with some people saying it's going to fundamentally change everything and other people saying it's the worst thing ever. So all of those intersections, I think, are very interesting before we even get to the practical applications.

Jessie L. Moore (02:32):

And we see that range of reactions even at the department levels where you've got folks who are early adopters and others who may never adopt it. So it's really interesting to see that variety.

Amanda Sturgill (02:47):

In my own department, which is about 10 faculty right now, we have everything from faculty who are like, I would never use that, I tell my students just don't do it (kind of quoting Nancy Reagan on that one), all the way up to a faculty member who proposed completely revising our curriculum around AI-supported journalism. So just a huge, huge variety of opinions.

Jessie L. Moore (03:10):

And before we go too much further, I should also note that you are the Center for Engaged Learning Scholar who is focused on AI and engaged learning, and you've been writing about it a lot for our blog. So we will link to your blog series in our episode notes so that folks can dig even deeper into the topic as they get interested, based on our conversation today.

Nolan Schultheis (03:35):

Do you think AI will lead to changes in how college students learn or their critical thinking abilities?

Amanda Sturgill (03:42):

I think AI is in some ways already doing that. I think that it can lead to changes in how college students learn. Most of the discussion around that has been on the negative side, which I think is interesting. The very first reaction from academia to AI was, oh no, academic integrity. Because you have this idea of a magic answer machine, and because the generative AI that was released with ChatGPT and then its successors is good at creating something that looks, on the surface, like the kind of work we might ask students to do as a way of assessing their learning or their skills, a lot of people were very afraid that students were just going to wholesale take that over, saying things like the essay is dead and those kinds of things. In my own experience, which I will grant you is limited, at a private institution where I do teach smaller classes than some other people, I would say I don't see a whole lot of magic-answer-machine kinds of submissions coming in.

(04:48):

But one of the things that I've had to think about is what really constitutes a meaningful assessment of learning now, right? Because at one point an essay was a good way to see somebody's ability to take in information, synthesize it, draw new conclusions from it, and then express it appropriately for a particular audience. And because AI is so good at doing that right now, the expressing-it-appropriately part may not be what we need to assess anymore. So you're starting to see in the literature more people who are talking specifically about the assessment process and the goals of assessment. I will also say, though, that because a lot of my work as the Center for Engaged Learning Scholar is around looking at what's in the literature, the relatively slow pace with which literature gets created is kind of an issue.

(05:39):

It's kind of a weird thing for me, because on the one hand I'm reading industry publications like the Chronicle of Higher Education, or even posts on Reddit. There's a professors subreddit that talks about AI a good bit, and the stuff that they are talking about now is way ahead of what is in the literature, which reflects where we were 18 months to two years ago, right at the very beginning of things. So you're seeing the early-adopter faculty perspective in the literature, and you're seeing the present-day perspective there.

Nolan Schultheis (06:11):

So I will say, as a student currently involved in the whole AI epidemic, that it has honestly led to a drop in my reading comprehension, of all things, interestingly. Students will get assigned reading passages, and now AI gives you the ability to just upload the file and have it summarize it for you. And I know, at least in my experience, that I will use that if I am just not in the mood to read, and I know a lot of other people do as well. So I think it's probably ultimately killing our attention spans, and reading comprehension is the area where I think it's having the biggest effect.

Amanda Sturgill (06:54):

Yeah, that's interesting. I think some of that actually comes back to the assessment question again. Because if the thing that you're assessing from a student completing a reading is, did they get the information from the reading, then what you're doing is highly appropriate: you're just getting the information in a more specific way. But if what you're assessing is your ability to deconstruct the argument in the reading, or assess the value of the sources in the reading, or those kinds of things, maybe you need to ask those kinds of questions.

Jessie L. Moore (07:24):

And that highlights, I think, that as faculty and staff working with students, we need to continue to teach how to read critically, but maybe change what we're focusing on. So if you are using AI to summarize, maybe we teach you to also skim headings and to read the abstract, the introduction, and the conclusion, and see if they match up with the AI summary, and then also give you guidance on when it might not be appropriate to rely on the summary and we might want you to read a little bit more closely. But I'm also struck, Nolan, by your use of the word epidemic, because I think that captures some of the tensions around AI in higher ed right now: it is something that is having a widespread impact, and we're still sorting out the positive implications. I think that there are many, but I hear in that word that there is still some hesitation to embrace AI. So that's an interesting word choice that I just wanted to call attention to briefly.

Amanda Sturgill (08:37):

I think we're also seeing the innovation of AI coming in at the same time that we're seeing a generational shift in students. And the reason I say this: I've always been kind of grateful in the last 10 years of my university teaching career that I've had my own children. So I have a daughter who is a senior in high school, and I was really surprised to learn that this year, her senior year in high school, was the first time that she's been given an entire book printed on paper as a part of her education, which is just wild to me. Because I remember in fourth grade that we were handed the textbook, and we did a unit on: here's a book, and here's how a book works, and here's what a glossary is, and here's what a table of contents is, and this is the callout box, which is interesting stuff but not the important stuff, or they would've put it in the text.

(09:31):

All of those are different kinds of skills which I think faculty assume students still have, and I don't think students still have them. Writing by hand, that was another conversation that I had with my daughter, about how, starting with her generation, the class of 2025 kind of thing, she was never taught how to physically do the act of writing in school. That was not part of the instruction. But again, back in the days when dinosaurs roamed the earth and I was in school, we had Ms. Quigley come in in fifth grade, and we all spent half a year, once a week, on handwriting instruction: here's how to shape cursive letters, and here's how to space between the words, and those kinds of things. She was never taught any of that. And then she went to high school in the age of AI, where a lot of faculty are worried about assessments, and the kind of assessment they know how to give is an essay.

(10:23):

So she's now required to write essays by hand in class, because that's the AI-proof way of doing it. And that's so difficult for her and for most of her peers. I actually had my own students in my freshman class last year do an in-class writing thing where I made 'em write on paper, which was actually really useful to me, because then I was able to tell later when they were using AI support in their writing; I could kind of see where they were at. And it was very surprising to me the extent to which they lack facility in just forming letters on a piece of paper, spelling, those kinds of things. A lot of that kind of stuff was not taught to the students that we have now, but we kind of assume that it was at some point. And so if you have a faculty member coming in on one end having decided that students should already know all of these things from high school, and students coming in not having that, I don't think it's at all surprising that they're choosing to use a helpful tool to try to meet the expectations.

Jessie L. Moore (11:16):

Absolutely. And by extension of that, we already know that faculty make assumptions about the digital technologies that students come in knowing. And yes, students have had exposure to many of them, but despite the nomenclature, they really aren't digital natives. They're comfortable with some aspects of the technology, but often don't know the advanced techniques or abilities of programs that we might assume they come in knowing, and we really need to give time to teaching those.

Amanda Sturgill (11:50):

Yeah, I would absolutely agree with that. I feel like, and this was kind of a pandemic thing, so relating this back to epidemics, we are now getting students from the Chromebook and iPad generation. Both of those technologies are great: inexpensive and easy for schools to access, and they were absolutely essential during the pandemic when everybody went to school online for a year. But they do make it difficult to know things like what a file is, or where a file is, or basic technology literacy kinds of questions.

Nolan Schultheis (12:16):

I was lucky enough as a student to have been taught pre-AI; it came to fruition, I'd say, towards the tail end of my senior year. And I know you had mentioned the handwriting aspect. I actually have a funny example about that. It was my senior English class, and we had an assignment like that, it must've been a quiz or something, but there was an aspect of it that had to be handwritten and had to be a page long. And I remember about half of the students in the class complaining that they didn't have time or they weren't prepared, and I didn't struggle with that at all. I just wrote. And I'm thankful that I was able to have that foundation instilled prior to AI, because now I feel like the way I interact with AI is far different compared to someone who doesn't necessarily know how to structure and write by hand in more of an academic way.

Jessie L. Moore (13:24):

And I think that is also getting us back to, what are our goals for assignments, and what are we trying to measure that students are learning? Certainly I teach writing classes, and so having that prior knowledge about writing structures and strategies is really helpful in that context. And we're certainly seeing students sometimes using AI tools to help them draft things, so in the program that I teach in, we are thinking very deliberately about teaching students how to use the tools ethically and effectively, because they're there, and we are hearing feedback from students who have been at internship sites that their industries are also expecting them to know how to use these AI tools. But I know in one of your blog posts, Amanda, that you saw some discrepancies in one of the articles that you read between what industry leaders were anticipating and what workers were actually using. Could you briefly explain that gap, and then maybe talk about the implications for higher education if we think about how that transfers to our setting?

Amanda Sturgill (14:47):

So I'm just thinking about my own industry here, the journalism industry. If you are a publisher in journalism, you are always worried about reducing costs, and the people that you employ to actually do the fact finding and write the stuff are usually your highest cost, especially if you're publishing online; it used to maybe be paper and electricity, but now it's mostly the people. And there is a lot of journalistic writing that honestly is algorithmic. I remember back when I was Nolan's age and was a college intern at the New York Times, working on the business section, talking to the guy who wrote the bonds report. Every day he would write a story about, here's what the bond market did yesterday. He was showing me, and he said, someday this is going to be done by a computer.

(15:37):

Honestly, I write one of three kinds of stories: the market went up, the market went down, or the market stayed the same, and then here are the five things that changed the most, right? And it is absolutely true that computers write those stories now. They write real estate stories, they write financial stories, they write some sports stories, those kinds of things. And from the perspective of a publisher, that's super beautiful, because it saves you a lot of the money that you're having to put in there. And we always have to remember that the media are a business, and so the publisher's goal is to make money off of the news product that they sell. I think a lot of managers see it that way: they see AI as a great tool for enhancing productivity and, eventually, for having to pay fewer people. On the flip side, though, this national study that I read interviewed managers and workers, and many managers felt like AI was being well integrated into their company and they were working it into their work processes and those kinds of things.

(16:31):

But if you ask the workers, they were like, yeah, not so much. Maybe I tried to get the AI to do this part of my job, and it just doesn't do a very good job with it because it doesn't know the same things that I know. It does something that looks like a good job, but it's not actually a good job, which is a kind of common AI outcome. And for us in higher education, I think it's going to be like what we see with our strategic communication students when we send them out to internships. It is super common for our strategic communication majors to go out to their internships and be asked to take over and run the social media for the companies that they go to, because, again, the companies are making an assumption that the students are very good at this stuff, and it's a new technology with a learning curve, and the people who are already there maybe are not as comfortable with learning how to do it.

(17:20):

Little side note: this is a grievous decision if you are a company, and you definitely should not take your newest person, who doesn't even know what working is, and have them be your public face. But they're assuming that the students come in with this ability and this knowledge. I think we're going to see that as well from jobs that are looking to hire new graduates: they're thinking that the new graduates are going to be the ones who are really facile with this technology and able to bring it into the company, and that the training, all of this stuff about how to use it, when to use it, why to use it, what it's good for, is going to happen while they are in school,

Jessie L. Moore (17:54):

Which really reiterates why we're having this conversation and why we need to be attentive to it.

Nolan Schultheis (18:00):

So what advice about AI would you give to students who are listening to the podcast?

Amanda Sturgill (18:05):

I would give a couple of pieces of advice. First of all, your faculty are going to be considering the idea of AI in really different ways, and so it is a good idea to have open conversations about that and not make assumptions, even about things that you might assume are perfectly fine, like using Grammarly as support, for example, which is widely, widely used by my students. Some faculty are going to be like, no, that's terrible, and that's keeping you from learning what you're supposed to; and others are going to be like, yes, please, don't turn in things where the subjects and verbs don't agree, right? It depends, so you need to ask. That would be the first thing I would say. I think the second thing I would say is that it's good to remember that AI is not different from most other tools.

(18:54):

Tools have things that they're good at and things that they're not good at. I'm not going to use a hammer in my kitchen, for example. So it depends on what my goal is. And this is asking for a lot of intellectual agency and maturity from students, which may not be reasonable, but if my goal is to learn how to construct an argument, for example, and my assessment on that is going to be the thing that I create at the end, then if I use a tool to construct my argument for me, I may miss out on learning that important thing. And so if you've got well-written syllabi in your classes, well-written assignments in your classes, you will probably have some kind of learning objectives that are part of those. And it is useful to think through that little bulleted list up at the beginning, the learning objectives, and see whether, if I apply this tool to this task, I still meet those objectives.

(19:52):

Because even if I could get my desired outcome, which is a particular letter grade or score on the assignment, if I don't meet those learning objectives, that may have a cost for me in the future. It may be the near future, when I'm asked to take what I've learned and apply it in a new context in the same class, or it may be the long term, when I'm asked to take the stuff that I've learned and apply it in life or on the job or something like that. You may end up inadvertently harming yourself if you focus only on the assessment that's provided by your faculty member, in part because I think faculty members are very early in the process of rethinking assessment with AI as an available tool.

Jessie L. Moore (20:33):

And the second suggestion that you shared there makes me think also that, from the faculty and staff side, carrying those learning objectives over to our assignment guidelines would be another easy step to help students make that assessment of whether AI is really going to support their learning in this moment or whether it might work against the learning objectives. It's not something that we always do, but it only takes a tiny bit more space in our assignment guidelines.

Amanda Sturgill (21:02):

I was going to say, I don't know that students always read those, but I do try to include them in most of my assignments. And it's good, I think, also to talk about them throughout the process of doing the assignment. So when you assign it, saying, here's the stuff you should be able to do or know or think when you finish this; and then maybe in the middle of the process saying, okay, you should have gotten this far, so how is this getting us toward our goals for this assignment or for this class?

Jessie L. Moore (21:33):

And also what you highlighted earlier in the conversation about the transfer goals, too: if they learn it now, where might they use it in the future? Why might they use it in the future? Those types of things as well. So you've already shared a couple of recommendations that we've highlighted for faculty and staff. Are there other things that you would share with universities and educators as they consider integrating AI into college curricula?

Amanda Sturgill (22:00):

When it comes to any kind of technology integration, I am a big fan of having what I think of as evangelists. We've usually got our standard diffusion curve, and we've got our early-adopter people, the people who are like, let's change our whole curriculum around this technology. Those people usually have taken the time and have had the resources to play with the new technology that comes out, and this is true for AI as well. Having somebody who is familiar with your discipline, or at least your broader discipline, even if it's just social sciences or foreign languages or something like that (it doesn't necessarily have to be another German professor if you're a German professor), somebody who kind of thinks in the same spaces and is familiar with the tools and can be a sort of informal consultant, I think is really valuable.

(22:54):

And I think universities ought to work in that direction, because while we have this idea of academic freedom, and people can decide the best way to teach their courses, I feel like we sometimes don't provide spaces for academic consideration, for having these big discussions about things. I see this even at my own university: we have a lot of focus on tools, so here's this tool and you can use it, and here's this other tool and you can use it, and those kinds of things, but we don't talk as much at the faculty level about how this might affect the way that we learn and the way we help our students to learn. Facilitating those spaces would be really helpful.

Jessie L. Moore (23:35):

That's a great suggestion. Nolan, do you have any other questions? Or Amanda, is there anything that you were hoping to add to the conversation today that you haven't yet had a chance to?

Amanda Sturgill (23:44):

One thing I would add: as I've been looking at the literature and following things in the popular press, the thing I am most concerned about is the development of expertise. I think that for a lot of applications, in order to use AI well, you need to already know how to do the thing you're asking it to do for you, so that you can assess the output and see if it is good or bad. We need to think more about how we develop that expertise, and I'm saying this both for academia and for industry and society at large. Because we're in this liminal space right now, where we've got this disruptive new technology and different people think it is different things, I'm concerned that the students who are graduating right now have not developed the ability to realize when they're an expert when it comes to evaluating these outputs from AI.

(24:40):

And we need to think really intentionally about what we're going to do about that, because employers are not hiring entry-level people the same way that they used to. If you look at job ads now, it was always kind of a joke, how is this an entry-level position when you want a year of experience, and now they're wanting three to five. And if you think about it, three to five years ago is before generative AI was commonly available. So they're wanting people who have expertise that was developed without the tools. Those people will eventually retire and die and those kinds of things, and if we skip a generation here, how is society going to work?

Jessie L. Moore (25:15):

And I think that also connects back to one of Nolan's earlier questions about the impact on critical thinking: we need to help students and employees develop their expertise, and then also help them develop their critical thinking skills so that they can analyze how outputs compare to what they know about a topic or a context or a situation. So lots of good nuggets to think about, and such a fun conversation. And again, as a reminder for our listeners, we will link to your blog posts in the episode notes so that folks can follow up there as well. But thank you for spending some time with us. We appreciate it.

Amanda Sturgill:

Awesome. It was great to be with you.

Jessie L. Moore:

So Nolan, what stood out to you in the conversation with Dr. Sturgill today?

Nolan Schultheis (26:14):

The idea that students using AI as a crutch, or relying on it too much as a tool, can be detrimental to a lot of things, specifically work experience and academic experience in general. There are certain things that are taught to us during our earlier years of education that are essential to functioning and making it through college, and AI has kind of circumvented the process of needing to teach those things. Students that rely exclusively on AI will most likely end up struggling in the future.

Jessie L. Moore (26:56):

I appreciated her reminder at the end that we need to develop expertise in our areas, and that was really threaded throughout our conversation. It ties to what you were just saying: we have to develop the expertise so that we can evaluate what AI outputs, whether it's accurate, whether it fits the situation, et cetera. And I also appreciated her reminder to faculty and staff not to make assumptions about what knowledge students bring into the classroom with them about AI. If we are assuming that students should be using it, we need to make that clear; and if we think that students should not be using it, because we want them to focus on learning the content in ways that AI use would perhaps jeopardize, we need to communicate that as well. So there's expertise that students, and employees in business contexts, need to develop; we need critical thinking so that we can evaluate whether the outputs are appropriate; and then we need to think about when it's relevant to use AI versus when it might not be. Once again, I'm Jessie Moore.

Nolan Schultheis (28:17):

And I'm Nolan Schultheis. Thank you for joining us for Making College Worth It from Elon University's Center for Engaged Learning.

Jessie L. Moore (28:24):

To learn more about artificial intelligence and engaged learning in higher education, see our show notes and other resources at www.CenterForEngagedLearning.org. Subscribe to our show wherever you listen to podcasts for more strategies on making college worth it.