Making College "Worth It"

Perspectives on Using Generative AI in College Assignments

Episode Summary

In this episode, Aaron Trocki and Alyssa Collins talk about the use of AI in college math assignments, looking at AI's potential to help learning and the ethical concerns it raises.

Episode Notes

See our extended episode notes at https://www.centerforengagedlearning.org/perspectives-on-using-generative-ai-in-college-assignments/

This episode explores what integrating AI into collegiate assignments can do for student learning. With guests Aaron Trocki, a math professor at Elon University, and Alyssa Collins, who is studying to be a secondary education teacher and worked through one of Trocki’s assignments, we explore the use of AI in a math assignment and its outcomes for student learning.

This episode is co-hosted by Jessie L. Moore, Director of Elon University’s Center for Engaged Learning, and Nolan Schultheis, a second-year student at Elon University studying psychology with an interest in law. Nolan Schultheis also edited the episode. Making College “Worth It” is produced by Elon University’s Center for Engaged Learning.

Episode art was created by Nolan Schultheis and Jennie Goforth. 

Funky Percussions is by Denys Kyshchuk (@audiocoffeemusic) – https://www.audiocoffee.net/. Soft Beat is by ComaStudio. 

Episode Transcription

Nolan Schultheis (00:08):

Welcome to Making College Worth It, the show that examines engaged learning activities that increase the value of college experiences.

Jessie Moore (00:15):

In each episode, we share research from the Elon University Center for Engaged Learning and our international network of scholars. We explore engaged learning activities that recent college graduates associate with their financial and time commitment to college being worthwhile.

Nolan Schultheis (00:30):

I'm Nolan Schultheis, a second-year student at Elon University, studying psychology with an interest in law. I'm the Center for Engaged Learning's podcast producer and a legal professions scholar.

Jessie Moore (00:41):

And I'm Jessie Moore, director of Elon's Center for Engaged Learning and a Professor of Professional Writing and Rhetoric.

Nolan Schultheis (00:46):

In this episode, we'll explore using AI to support student learning. We'll talk to Dr. Aaron Trocki, an associate professor of mathematics at Elon University and the 2023 to 2025 CEL Scholar, focusing on models of assessment and feedback outside of traditional grading assumptions and approaches. He's joined by Alyssa Collins, a second year student majoring in math for secondary education at Elon University where she's part of the Teaching Fellows Program.

Alyssa Collins (01:20):

My name is Alyssa Collins. I'm a second-year at Elon University, majoring in math for secondary education. I'm really interested in AI and learning more about it, and I've participated in some focus groups about it. It's a really interesting topic, especially when it comes to our education system.

Aaron Trocki (01:40):

I'm an associate professor of mathematics here at Elon University. I specialize in mathematics education. I'm the program coordinator for the secondary mathematics program, so I work with a lot of pre-service middle grades and high school math teachers. I see generative AI as something that's here to stay, and I feel that it's a professional responsibility to learn how to use it in productive ways for teaching and learning in mathematics, in higher education, and, for that matter, in secondary education in general.

Nolan Schultheis (02:15):

Do you think that students went into these AI assignments assuming they'd be easy because they thought they had the crutch of the AI?

Aaron Trocki (02:23):

Yeah. Alyssa Collins was one of the students in a Calculus 1 class that I taught in fall of 2023, and in that class I piloted what I called an artificial intelligence-supported assessment. It was basically a writing project in which the students used generative AI, ChatGPT, to explore and learn about a concept and its application in calculus. Then students gave me feedback in the form of a survey and also participated in a focus group about their experiences with generative AI in general and for learning mathematics. So with that being said, maybe, Alyssa, can you scroll back in your memory to fall of 2023 and recall what you did in that assignment? I can jump in and talk about some of the expectations, my thoughts on how I designed it, and what I wanted students to experience, especially with the generative AI use. But Alyssa, maybe give us your take on it and what you remember about that particular assignment.

Alyssa Collins (03:41):

Yeah, so it was different from the more typical projects that you do in a typical math class. It wasn't just a textbook; we were actually creating something, having a conversation with AI, which I think is really cool, because as a student you're able to talk to AI and ask questions without fear of being shut down or that feeling that you're not good enough at it. It really helped with creativity, because the assignment can take you anywhere with AI. I think one of them was that we had to create a lesson, as if you were teaching a student: you were writing an email to a student or a friend about the topic that we were learning in class. That really helped with understanding it more than most assignments, because you actually have to break it down; you have to know what you're communicating. And I feel like that helped me a whole lot in that class with understanding the concepts.

Aaron Trocki (04:45):

Yeah, thank you, Alyssa. And let me maybe add some more prompts that can give Alyssa something to expound upon. With that particular assignment, and I had to remind myself of this just now as well, we looked at how calculus can be used for optimization problems. I think one of the big takeaways was that students named a particular area of interest, something like marketing, soccer, or motion pictures, and then they used generative AI to see how that technology can produce examples of how calculus is applied in that particular area. One of my takeaways, a big aha or wow moment, was that generative AI produced twenty-some different examples, based on my analysis of student submissions, of how calculus is applied in different ways. I thought back on when I've taught this topic in previous semesters, and I usually only give maybe two to five different areas where this calculus application takes place. So generative AI took my count of two to five and upped it to twenty-some different applications. I think in that way we made the mathematics more relevant to individual student interest. Now, Alyssa, I don't remember exactly what you looked up for that, but if you recall, it would be amazing if you could tell us a little bit about the area of interest you picked for that assignment, or maybe just something else you remember about your work there beyond what you already gave.

Alyssa Collins (06:37):

I believe I did something along the lines of making a lesson plan, but it was really cool because I was interested in it, so it made me more excited to actually get it done. I know when it comes to math, it becomes really hard to stay motivated to do some assignments, especially if you have lost a little bit of confidence in it over the years. But this assignment really sparked that interest, because I was focusing on something that I actually enjoy, and I feel like that was a recurring theme among some of my peers in that class.

Jessie Moore (07:16):

I love hearing that it boosted your confidence and also reengaged your interest. That's something we don't typically think of as a potential benefit of AI. I will quickly say, for our listeners who are interested, that Aaron has been writing blog posts about this project for the Center for Engaged Learning website, and in our show notes we'll link to a couple of those blogs so folks can read more. But I love the specific examples that you've offered, and I think they'll offer a nice context for some of the additional questions that we have for you all today. One of which is: I want to go back to part of Nolan's question and hear from both of you. Aaron, how did you think students were going to respond to this? And Alyssa, did you think this was going to make things easier? Did you have hesitations? What was your initial reaction, if you can remember, to being asked to work with AI?

Aaron Trocki (08:19):

My thinking going in, I guess, was that I wasn't sure how students were going to react. I was unaware of the degree to which students used generative AI in their academics before piloting some of these assignments and assessments with them. And it also felt a little bit like asking students to do something extra, even though I tried to couch the assignment as a great learning moment for them. So I went in with some hesitancy, not exactly feeling confident in my predictions of how students would respond. Not to steal any of Alyssa's thunder, but I feel like students were re-engaged at that point in the semester, when they were doing that assignment and after, because they saw how the mathematics we were learning can relate to things that they're interested in. And I feel like generative AI was kind of the bridge that made that happen.

Alyssa Collins (09:16):

Yeah, I was about to say, as a student, there's often a stereotype, a negative stereotype, against using AI: that we're not supposed to use it, because we don't generally use it in our assignments in our everyday classes. So going into this, I feel there was a little bit of fear in some of the assignments; students didn't want to get caught plagiarizing. There was a little bit of hesitation at that point with AI, just because when you're conversing with AI, you don't want to get in trouble for plagiarizing anything or anybody else's work. But I feel that generally students were very open to the assignment, just because it's something new that we haven't really experienced, and, as Dr. Trocki was saying, it reengaged that area of interest and really pushed forward with that as the lead.

Jessie Moore (10:15):

That's awesome to hear. And I think that's actually a great lead into a question that Nolan has for you as well.

Nolan Schultheis (10:21):

Yeah, and before I ask that question, I just wanted to say something I found interesting based on what I heard from you, Alyssa: there's kind of no fear of a stupid question when you use the AI. And I think that's great for students in particular, because whenever you're trying to learn something new, of course you have a million questions, and you're sitting there sometimes in class thinking, I don't know, maybe someone else will ask this question, and then they never do. So you're left wondering, well, I probably should have asked that question. But now, with access to things like AI being so much more prevalent, I think it makes the barrier to asking questions and learning a little lower. Based on your experiences, do you think the addition of ChatGPT is okay if it means the students understand the material better, or do the benefits not outweigh the cons?

Alyssa Collins (11:12):

I'd say for students in particular, I feel like there has to be a balance. You can't always use AI to ask everything you want. But if you're using AI to understand, not just to get an answer, to understand a problem and have that communication with AI to grasp a concept, repeatedly asking those questions that you're more fearful to ask in class, I feel like that is an extremely good benefit. It's almost like having a tutor to yourself when you're asking these questions about hard concepts. AI is going to be what Google is now. So learning about it, understanding how it works, and seeing ways that it can be incorporated into the classroom without taking away from the learning process, I feel, is very important to do.

Aaron Trocki (12:16):

And if I can interject there and maybe extend a little on what Alyssa just contributed: Alyssa participated in the focus group we did at the end of that fall 2023 semester. Two of the big themes that came out of it, among many, were, first, that generative AI gives a space to interact nonjudgmentally to start learning about a concept, an application, or a topic. That's a succinct way to say what Nolan and Alyssa just spoke to. Another big theme was that all the students in the focus group came to agreement that generative AI should be used as a tool for learning, not a replacement for demonstrating learning. And I think that gets at this fine line, the balance, to use the language Alyssa mentioned, between using generative AI in productive, educative ways versus ways that slip into honor code violations. Navigating that line in particular disciplines, I think, is part of the challenge we see ahead of us.

Jessie Moore (13:33):

And Alyssa, you're a future teacher yourself. Is it high school math that you want to teach? Do you anticipate teaching your students how to use ChatGPT?

Alyssa Collins (13:46):

Yeah, I definitely think for my peers now who are going into the education field, it's very important to know as a teacher how AI works, and to help the students, our future students, learn AI as well. Because while right now it's not really that present for them, in the future it will be; it will be a reality for them. So it's important for us to learn now, so we can help them have a better future as well. And students also should learn more about AI because it's all around us, whether we realize it or not. I mean, it's in our phones.

Jessie Moore (14:31):

And I was just on the drive in this morning hearing an NPR story on how the uptick in AI usage is actually driving more energy use: AI data hubs are using as much energy as the entire country of Italy. So I think, in addition to learning how to use it effectively and ethically, we're going to need to grapple with when we use it, in terms of the impact on other aspects of our world that we might not even be thinking about yet. But I love to hear that you're thinking about it for your future profession as well.

Nolan Schultheis (15:15):

I actually recently found this YouTube video, and it was basically explaining that AI is kind of killing the internet, which I found interesting. What's happening is all these companies are trying to commission AI because there's just a little bit more profit in it. And because of this, AI is basically a product of what the internet is, so it's taking information that's misleading or wrong and regurgitating it. And I think that's just another interesting aspect of AI: it's so helpful and, at the same time, so detrimental. It actually contributes to what people know as the dead internet theory, which is accounts set up on multiple social media platforms exclusively to make money or gather engagement on posts, while other AIs comment on the posts and interact with them. So it's just an endless cycle of AI interacting with itself.

(16:21):

And I just find that really interesting and also kind of scary, because, coming from someone who I would say grew up in the perfect age of technology, starting out without it and then slowly being introduced to it as time went on, I always regarded the internet as pretty much fact. I learned over the years that there's some false information, but most of what you're looking for is verifiable. And now that doesn't seem to be the case. I mean, you use a Google search and it'll just pull from anywhere. It'll even pull Reddit answers that random people have made, and they're just wildly inaccurate as to the situation they're referring to.

Jessie Moore (17:04):

And we have AI that's also learning from that data, so that's another reason to be cautious.

Alyssa Collins (17:10):

Just to talk a little bit about the points you were mentioning with AI: I feel that we have to remember the people who are creating the AI are human. We don't have an AI yet that can think of the objective before the programmer gives it that objective. So there are some errors with it. We saw in our class, when we were doing some of the assignments and during the focus group, that it didn't get all the math problems correct. So there's still some error; it's still growing and stuff like that. I was talking with my mom about this. She was around when Google and the internet were first coming around. She said she has the same feeling about AI as she did about the internet: it has this good to it, but it also has this very bad path it could go down. So I think it's important that we start learning those boundaries and trying to set up those guardrails between the good and the bad, because the internet has definitely become a scary place, but it can also do good as well. There's kind of a blurry line between the two.

Nolan Schultheis (18:39):

So your input actually kind of started to answer my next question, which was: what advice would you give to students who are listening to the podcast?

Alyssa Collins (18:47):

I would say definitely, when you're asking AI a question or for a fact or a statistic, double-check it, just like you do with the internet, just like you do with scholarly resources. You want to double-check everything you're doing so that what you're learning is true. I feel like that is generally a good practice for students, because you don't want to be wrong. So you want to make sure that what you're looking at and what you're getting back from the AI is true, because programmers are human; they can be biased, and they can put in stats that aren't necessarily true. So it's important to double-check that.

Aaron Trocki (19:40):

And if I can just extend on what Alyssa contributed there: in spring semester, I gave out an assignment where students used ChatGPT, and part of the assessment had them complete a brand-new math problem. In my directions, I was explicit not to use ChatGPT to do the math problem, but I suspected that some students probably popped it into a side conversation with ChatGPT to see what happened, and it gave the wrong answer. That reminded me of one of the themes that came out of the focus group Alyssa participated in, which I summarized as: use generative AI output critically and always assess its accuracy. Going back to that spring assignment, I asked students how useful generative AI was for the mathematics, and the general consensus was that it was useful in helping them learn the mathematics, but they did the final checking themselves. When I asked them how useful generative AI was for the writing portion, the responses were much more positive, because I think the writing portion has a more subjective element, whereas the mathematics portion, especially the final answer, is obviously objective. So it's kind of a neat thing: the discipline-specific nature of the knowledge almost dictates how useful, or in what ways, generative AI is useful.

Alyssa Collins (21:16):

I was going to say, it can give you the steps to solve a math problem, but the calculation part gets a little lost in translation.

Nolan Schultheis (21:26):

Coming from someone who's played with the AI a little bit, especially with the ChatGPT-based AIs created by other community members, the writing is really where I feel like it gets scary, because people have now made bots to check their bots. It's going so much deeper than I thought it would, and that's the part where I hesitate. As someone who tries on their writing, I do try to genuinely get a point across whenever I write; it's not just for the sake of the assignment. I feel that using AI almost takes more work than just using my own brain and thinking out how to write, because there's too much work that goes into getting it to say what you have in your head perfectly, as opposed to just saying what you have in your head and trying to write it coherently.

Jessie Moore (22:22):

As the writing professor in the mix, I would say that I do appreciate your caution and how you use it. One of the things that we're finding in research in writing studies is that it can be really helpful for invention or brainstorming, and it can also be helpful for checking for errors that sometimes occur in your writing, for which you want an extra set of virtual eyes. The other space where it's been kind of fun to see what it comes up with is using it to help generate titles. They aren't always perfect, but it at least gives you more to play with and choose from as you're trying to figure out what titles might capture the work that you've personally done in a written text. But again, I appreciate the caution in how you're using it, and I think that's a theme throughout this conversation: an interest in using it, but using it selectively, using it ethically, and figuring out how to use it effectively to support learning rather than to supplant learning activities. I think we have maybe two more questions for you. One is, thinking about the other part of our listener audience: what are a few things that colleges, and specifically their faculty and staff, can do to help students navigate when and how to use AI?

Alyssa Collins (23:49):

I would definitely say learning more about it, because a lot of the time there's a disconnect between professors and students about what AI is. It's all around us, if you think about it. I mean, AI is even in Word: when you're typing a text and a suggestion comes up to finish the rest of the sentence, that's AI itself. So when professors say, oh, don't use AI, students are like, well, I'm using AI; it's in Word, the platform we're using to write these assignments. So I would definitely say making guidelines about what is acceptable and what is not acceptable in assignments, and learning more about what AI is and how it works, will better help the communication between students and faculty in the future.

Aaron Trocki (24:54):

Yeah, I think faculty need to be aware, or make themselves aware, of how AI is used outside of academia in the disciplines they teach. I think that's a really good starting place. For instance, I work with prospective teachers. Teachers more and more are using generative AI to produce the skeleton of a lesson plan and then molding it into the lesson they need for their particular students. That's, in broad strokes, an excellent practice for how to use generative AI productively in secondary mathematics, for example. But unless the people who prepare teachers in mathematics know about that use and talk to practicing teachers, they're not going to know how to prepare upcoming teachers for the demands of using that technology. So I think: learn how the technology of generative AI is being used in the field in which you teach, and then have an open conversation with your students about it, typically at the beginning of a semester, and maybe be open and willing to learn as you go. Certain assessments, certain assignments, are going to use generative AI in an excellent way to support student learning. Some might bomb out, but that's okay, right? If you can learn with and from your students just as much as your students learn with and from you, and make it more of a reciprocal process, I think faculty can move forward in the most efficient, meaningful way in preparing their students to use generative AI in the class and beyond the university walls.

Jessie Moore (26:42):

Thank you both. I really appreciate you taking time to share your own experiences using AI in the calculus class and how you're thinking about using it in other spaces that you're teaching and learning. Before we wrap up, is there anything else, Nolan, that you'd like to ask or anything else that Aaron or Alyssa you'd like to add to our conversation?

Nolan Schultheis (27:05):

Yeah, just one more quick question. I know I've expressed my concern about AI. Do you think the taboo I hold and potentially other people hold is going to fade out as maybe AI develops more or we learn how to use it, become more professional with our use of it? Or do you think there's always going to be kind of a barrier in people's heads as regarding AI as this potentially dangerous and evil thing?

Alyssa Collins (27:34):

I would say, definitely, similar to what the internet is now, there's always going to be that fear and caution in the background about AI and what bad can come from it, but also what good can come from it. I do think, as we head into the future, how we talk about it will shape what it becomes. So if we start learning about the ways to use it and the ways not to use it, that will help us toward a future where we're not as scared as we normally would be. And I think it's really important to start having conversations like this, because it's a big thing. It's bigger than the internet, but it's not there yet. I would definitely like to see more discussions about AI in the future.

Jessie Moore (28:29):

Thanks so much, Alyssa. We really appreciate you sharing your perspective. So what stood out to you in this conversation?

Nolan Schultheis (28:45):

I think really the idea that AI can almost function as a little mini tutor, in the sense that if you use it properly and ask it the right questions, there's no fear of stupid questions. So you can really get exploratory in what you want to ask it, and it will never judge you; it'll always give you an answer. And sometimes the AI can even take what you're asking it and suggest an answer or suggest a next question, which I found interesting.

Jessie Moore (29:18):

Related to that, I really appreciate the reminder that AI can support learning without replacing the mental activity of actually doing the learning. It's not taking over the activity of learning from the student, but it can help them process difficult information or test their understanding. And I think that's a really helpful take on how we're thinking about AI. Other things that caught your attention in the conversation?

Nolan Schultheis (29:50):

Yeah, really, the usefulness of AI depending on the topic is something I found interesting. You can't really use ChatGPT to solve the practical aspects of a math problem, like Dr. Trocki was saying, but in the writing portion of the assignment he gave, it was much easier to apply the AI. And I just think that's interesting: there are certain things that, no matter how much technology we throw at them, still need a human behind them to verify whether they're right or not.

Jessie Moore (30:28):

And as Alyssa reminded us, there are humans programming the AI, so there is still a human element at this point. But as it learns, it's learning on, as you highlighted, a range of information. So part of using AI effectively is figuring out whether it's giving you correct information or not, again reinforcing the idea that you still have to understand what you're asking it to do and what you're looking for, to be able to assess if it's giving you good information back. But we certainly heard some ways it could be helpful, and one that stood out to me is teachers using it to start their lesson plans and then going in and updating them for their specific teaching context. I think we'll see that in more and more professions, which also highlights why we need to address AI in higher education and think through how to teach students to use it ethically and effectively. Once again, I'm Jessie Moore.

Nolan Schultheis (31:40):

And I'm Nolan Schultheis. Thank you for joining us for Making College Worth It from Elon University's Center for Engaged Learning.

Jessie Moore (31:47):

To learn more about using AI and assessments of student learning, see our show notes at www.centerforengagedlearning.org. Subscribe to our show wherever you listen to podcasts for more strategies on making college worth it.