Making College "Worth It"

Incorporating Artificial Intelligence into Engineering

Episode Summary

In this episode, we explore the technical, ethical, and social complexities of using AI in engineering. We speak with Dr. Blake Hament, Assistant Professor of Engineering at Elon University, who shares an example of developing a voice-enabled robotic guide dog in close collaboration with members of the visually impaired community.

Episode Notes

See our full episode notes at https://www.centerforengagedlearning.org/incorporating-artificial-intelligence-into-engineering/

In this episode, we explore the technical, ethical, and social complexities of using AI in engineering. We speak with Dr. Blake Hament, Assistant Professor of Engineering at Elon University, who shares an example of developing a voice-enabled robotic guide dog in close collaboration with members of the visually impaired community. Our conversation also examines the long history of AI in engineering, illustrating that GenAI is an updated application of a longstanding technology.

This episode is co-hosted by Jessie L. Moore, Director of Elon University’s Center for Engaged Learning, and Nolan Schultheis, a third-year student at Elon University, studying Psychology with an interest in law. Nolan Schultheis also edited the episode. 

Episode art was created by Nolan Schultheis and Jennie Goforth. 

Funky Percussions is by Denys Kyshchuk (@audiocoffeemusic) – https://www.audiocoffee.net/. Soft Beat is by ComaStudio. 

Making College “Worth It” is produced by Elon University’s Center for Engaged Learning.

Episode Transcription

Nolan Schultheis (00:05):

Welcome to Making College Worth It, the show that examines engaged learning activities that increase the value of college experiences.

Jessie L. Moore (00:12):

In each episode, we share research from Elon University’s Center for Engaged Learning and our international network of scholars. We explore engaged learning activities that recent college graduates associate with their financial and time commitment to college being worthwhile.

Nolan Schultheis (00:27):

I'm Nolan Schultheis, a third year student at Elon University, studying psychology with an interest in law. I'm the Center for Engaged Learning's podcast producer and a legal profession scholar.

Jessie L. Moore (00:37):

And I'm Jessie Moore, director of Elon's Center for Engaged Learning and a professor of Professional Writing and Rhetoric.

Nolan Schultheis (00:43):

In this episode, we'll explore the potential use of artificial intelligence to create robotic service dogs. We'll talk with Blake Hament, an assistant professor of engineering at Elon University, and we'll learn how his use of AI in research informs his approach to AI in teaching and learning. Let's meet our guest.

Blake Hament (01:00):

Hi, my name's Blake Hament. I'm an assistant professor of engineering at Elon University. This is my third year at Elon as a faculty member, and I've had a bit of a twisting path through STEM. Initially I had some amazing teachers back in middle and high school that connected STEM to my existing interests. So I was really into skateboarding and into rock bands and making music in my garage with my friends. And I had a math teacher that connected music to math, and I had a physics teacher that connected skateboarding to physics and the laws of motion. And that was so impactful, and I think it was really cool the way it was introduced to me, that I could use my passion to fuel this interest in STEM and get more out of what I already cared about. And there was less of learning dry equations and tools that weren't connected to anything.

(01:56):

And so I really try to hold onto that philosophy as a STEM educator. I studied physics as an undergrad at Duke University, and I was lucky enough to be involved in undergraduate research. I was doing high energy particle physics research, and I got to spend some time on-site at CERN, the European Center for Nuclear Research. I really enjoyed it. I was a very small part of the huge team that verified the Higgs boson, which in popular science was being called the God particle back in the early 2010s. This is the particle associated with the field that gives other particles their mass, and it was really exciting. It was really fun, but I was often just in a dark, windowless basement working on a computer, and a lot of the physics I was working on, I really couldn't have conversations with many people about. And so it felt a little bit isolating.

(02:50):

And I took a break after undergrad and did Teach for America, a wonderful program. Maybe not for everybody, but I had a very positive experience, a challenging experience, but positive. They defer your student loans, they pay for your master's. And I started coaching robotics during that time and really enjoyed how I could apply math and physics to robotics. All these applied problems presented themselves that were very much something everyone could be excited about, something I could communicate about and be part of larger teams around. And so after that experience, I went back to grad school and did a PhD in robotics. At the time, and even today, there aren't many formal robotics programs, so usually you join a mechanical engineering or an electrical engineering or computer science department. I joined a mechanical engineering department. To this day, I very much love analyzing and understanding how things move, and that goes back to trying to model the physics of skateboarding tricks when I was 13 and 14.

(03:56):

And there's also a field called controls, or control engineering. Once you understand motion (or, as we'll talk about in a little bit with reinforcement learning and some of these new techniques, maybe you don't even have to understand it that deeply), it's about controlling a system to move or act or behave in a certain way, and doing mathematical proofs to show that your technique is going to do that in a safe and reliable way. I really fell in love with that, and that's been my career so far. I've had the chance to work with some big companies doing robotics and automation projects for them, and I love the theoretical, but I also love the real world and the applications, and hopefully helping people and improving society. And not just saying that as a marketing tagline, but really doing it.

Nolan Schultheis (04:41):

Please briefly share why you became interested in using artificial intelligence in your research.

Blake Hament (04:46):

Artificial intelligence is very much a loaded term today. If it's okay with y'all, I'll go right into how roboticists think about this term AI, because in many ways it feels like it's been hijacked from the way we've used it in the academic community. It's very much a marketing term these days, and a lot of times I think you could just replace AI with computers or technology and you're not really adding or gaining any meaning. There was a big AI bubble in the eighties, so my research mentor during my PhD was super averse to the term AI and really didn't want to touch it, felt that it muddied the water and was kind of a word for con men and snake oil salesmen. And so I was very much trained to, if anything, use just technical terms for what we were doing.

(05:36):

And I did find during my PhD that if people were using the word AI, it meant either they couldn't or they wouldn't describe the actual math or algorithms they were using, and it was a little bit sketchy. We're seeing this convergence where now we do have these products called LLMs on the market, and they've got everybody talking about AI and using this term, but I really try to talk more specifically about what the technology is and what that means. And robotics has had this holy grail, and it still does. Since Isaac Asimov, so much of what we do has been inspired by science fiction as well as nature; nature has the best engineering. There's this holy grail of general artificial intelligence, which would be a digital consciousness, or truly capturing what makes us intelligent. This may be an impossible goal. We don't even understand neuroscience well enough; we're nowhere near making responsible claims that we are doing general artificial intelligence.

(06:36):

We don't even understand what intelligence means for us, for the most part. So there was a big pivot, especially after the eighties and all of that overhype and overpromise, when investors and program officers didn't want to touch AI anymore because they'd been overpromised and underdelivered to. So there was this pivot from general artificial intelligence to specific artificial intelligences, and these have really come into their own in the last few decades. I've done a lot with computer vision, where we can take images from cameras, and even other specialized sensors like lidar and hyperspectral and multispectral cameras, and we can process the data coming through these pixels. We can do things like classify and identify different objects, or segment the image into whatever objects are present. We can map spaces and localize ourselves within those spaces. So that would be an example of a specific intelligence. I would like to use the example of a squirrel as well.

(07:37):

A squirrel is doing all of these really amazing things where, in a split second, it's scanning its environment and identifying branches that are within leaping distance, and then it's mapping and planning a path, and then it's expertly actuating its muscles to launch itself in this projectile motion, and it's accounting for the springiness of the branch, and then it's landing on the branch, and it's doing this over and over again very quickly. There are pieces of that that we're starting to be able to do with technology, but overall that is an incredible confluence of all these specific intelligences that are very impressive, very complex. And the new prevailing theory is that we'll never get to general artificial intelligence without all of these specific intelligences being developed much further. So when we talk about AI, I fall back a lot on this framework of general versus specific, and it's probably snake oil if someone's telling you that there's any type of general artificial intelligence, but there are these really cool specific intelligences being developed and improved.
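To make one of the specific intelligences Dr. Hament describes concrete, here is a minimal sketch of image classification using a pretrained model from PyTorch's torchvision library. The image filename and model choice are illustrative assumptions, not code from his lab.

```python
import torch
from torchvision import models
from PIL import Image

# Load a pretrained ResNet-18 image classifier and its matching
# preprocessing pipeline (resize, crop, normalize).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()
preprocess = weights.transforms()

# "intersection.jpg" is a hypothetical input image.
img = Image.open("intersection.jpg")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)

# Report the most probable class label and its probability.
probs = torch.softmax(logits, dim=1)
idx = int(probs.argmax(dim=1))
print(weights.meta["categories"][idx], float(probs[0, idx]))
```

This is the "classify and identify different objects" capability in its simplest form; segmentation and localization build on the same pattern with different model heads and outputs.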

(08:42):

LLMs are their own topic there, but some of those specific intelligences, like speech-to-text and text-to-speech, are going to come up when we talk about this AI module for a robotic service dog today. Those are important, but I would advise the public and lay people not to be taken in by this idea of a digital consciousness. LLMs are this very nice tool for finding highly probable relationships between words and ideas, but just because there's a high probability that things are connected doesn't necessarily mean that they are. And there might not be logic, there might not be reasoning. This might just be artifacts of the data it was trained on. "Stochastic parrot" is a fancy name that I like, which people use for it, but it's kind of a probability machine, predicting what's most probable based on the existing words or sentences that you're prompting with and that it's seen in its training dataset.
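As an illustration of the "probability machine" idea, here is a toy Python sketch: a bigram model that predicts the next word purely from co-occurrence counts in a tiny, made-up corpus. Real LLMs use neural networks over tokens at vastly larger scale, so this is a conceptual analogy only.

```python
import random
from collections import Counter, defaultdict

# Toy "probability machine": a bigram model that predicts the next word
# purely from co-occurrence counts. There is no logic or reasoning here,
# only frequencies observed in the (made-up) training text.
corpus = "the dog sees the truck and the dog waits and the dog crosses"
words = corpus.split()

counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1  # how often nxt follows prev

def next_word(prev: str) -> str:
    # Sample the next word in proportion to how often it followed prev.
    options = counts[prev]
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

word, output = "the", ["the"]
for _ in range(6):
    if not counts[word]:
        break  # dead end: the corpus never continues this word
    word = next_word(word)
    output.append(word)

# Highly probable continuations, not necessarily true or reasoned.
print(" ".join(output))
```

The output is always plausible relative to the training text, which is exactly the point: high probability is not the same thing as truth or reasoning.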

Jessie L. Moore (09:42):

I appreciate the way that, even in your early background connecting to physics and math, you look for those connections, and the way the example of the squirrel gives us a really concrete picture of what you're describing. I also appreciate the reminder that while it's tempting to think about AI as something new, it's not. We are perhaps seeing new iterations of it, but it has a longer history than we sometimes acknowledge in the national conversation about higher education right now.

Nolan Schultheis (10:22):

I really like the point you made about the differentiation between how roboticists and the general public view the term AI, because I'm starting to notice the same thing you just said about snake oil and marketing. They're throwing AI, those two letters specifically, at almost every single platform and app. I mean, there's AI plugged into my email app now. They're trying to throw it in everything because it's just such a foreign concept to people. But again, tying into that whole rebranding of the term: in video games, I used to call the computer, the CPU, the AI. I was like, oh, I'm fighting the AI. And now I feel like if I were to use that term, people would get confused. So it's funny how even the vocabulary around the word is changing as well.

Jessie L. Moore (11:17):

So you've written an article about an artificial intelligence voice module for a robotic service dog. Could you give our listeners a brief introduction to that particular AI project?

Blake Hament (11:29):

Absolutely. I had the opportunity to work with quadruped robots a few times before this; those would be four-legged robots, and the most immediate association to make is a dog. So it wasn't too far of a leap to come to this idea of a robotic service dog, and specifically a seeing-eye dog. And I had the fortune to meet Hillary Inger, who's a local member of the visually impaired community, and she's a wonderful artist and dancer. I was doing another project with robotics integrated with dance and musical theater, and that's how we met. And she introduced me to Vision Insight, which is a support group for members of the visually impaired community in Durham, founded and led by Ed Rizzuto. Really amazing people. And often as engineers, we can be very naive or egotistical: we say, oh, we can make this thing, and we go and we do make this thing, but it might not be what's best for the community or society.

(12:35):

And so it's been very important to me to stay in dialogue with this community and present ideas and prototypes in focus groups, and better yet, listen for their ideas and see if those can lead to prototypes. So that's the impetus for this project, a robotic seeing-eye dog. This could quickly spiral into a very big, complicated problem, so at this stage we're really focused on crossing intersections. That's the big goal, doing that safely. I've learned a lot about the existing tools and methods that members of the visually impaired community use to do this. And early on I was hearing feedback that there are very long wait lists for traditional service animals. Usually they're provided for free, with nonprofits paying the cost of training, but training can have a very low success rate. There are a lot of animals that attempt the training but don't make it through.

(13:31):

Travel can be very difficult. There might not be a place for the animal to relieve themselves, and there are certain areas that just don't allow animals regardless. And there's this really heavy responsibility of caring for a living thing that some people don't want to take on. A lot of the members of our focus group have had service animals that either passed away or retired, and that's a huge emotional drain as well. Something that was very emotional for me was learning what these animals do: when you approach a crosswalk with a service dog, the dog will try to prompt the user as to whether it's safe or not, but the user is the one who makes the final decision whether or not to cross. And at that point, if it's the wrong decision, the dog is trained to actually put its body in between the user and the vehicle and sacrifice itself. So there are a lot of reasons why a robotic service dog could be useful. And I love dogs, they're super special, but there are some strong motivations for a potentially useful robotic seeing-eye dog.

Jessie L. Moore (14:39):

I love that, and I appreciate both the human motivations for addressing a need that is widespread and also the attentiveness to maybe taking some dogs out of harm's way. That resonates with me as a fellow dog lover. But there's so much complexity in that project. As we think back to the example you gave of the squirrel, thinking about all the pathways and trajectories, it feels like it might be 10 times, a hundred times more complex when we start to think about a robotic dog that would respond in the ways a highly trained service dog might.

Blake Hament (15:25):

Absolutely. And the framework of specific intelligences is helpful in this case for breaking out all the specific intelligences that a traditional service dog is capable of. And we have this great opportunity in academia where we can work on problems with a lot of freedom, on whatever we're motivated by. We're not driven by a profit incentive or a super strict timeline at Elon. We're also not a huge R1 university where I'd have 10 graduate students and a lot of funding to do this. So it's important for me to chunk this problem into smaller pieces that I can work on for a handful of semesters with one or two students and make some good progress. In this case, the chunk we're talking about is just the voice module, and how cool would it be to have a full English conversation with your dog?

(16:22):

Dogs of course are doing so many things for us, and there are a lot of emotional, support, and social attributes; at this stage, those are specific intelligences I'm not working on. I think they could be very important and valuable, but really, just think about an intersection. Right now, the dog is either pulling into the intersection, or standing firm or sitting, trying to keep the person out of the intersection. But what if you could actually ask the dog, what do you see, or what's happening in the intersection? And it could give very specific, very quantitative data, like, oh, there's a truck coming at such and such miles per hour, and that gives you such and such time to cross the crosswalk, which is so wide. Or maybe all of that is reduced to a quick go or no-go, but you could have this conversation and get that detail. That's where we're headed and that's what is motivating this.
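For technically minded readers, the go or no-go idea reduces to simple kinematics: compare the walker's time to cross against the vehicle's time to arrive. Below is a minimal Python sketch of that arithmetic; the function, the parameter values, and the five-second safety margin are hypothetical illustrations, not the project's actual module.

```python
# Sketch of the go/no-go arithmetic a voice module could speak aloud.
# All names and numbers here are hypothetical, for illustration only.

def crossing_decision(crosswalk_width_m: float,
                      walking_speed_mps: float,
                      vehicle_distance_m: float,
                      vehicle_speed_mps: float,
                      safety_margin_s: float = 5.0) -> str:
    """Compare time to cross against the vehicle's time to arrive."""
    time_to_cross = crosswalk_width_m / walking_speed_mps
    if vehicle_speed_mps <= 0:
        return f"Go. No vehicle approaching; crossing takes about {time_to_cross:.0f} seconds."
    time_to_arrive = vehicle_distance_m / vehicle_speed_mps
    if time_to_arrive > time_to_cross + safety_margin_s:
        return (f"Go. Truck arrives in about {time_to_arrive:.0f} seconds; "
                f"crossing takes about {time_to_cross:.0f} seconds.")
    return f"No go. Truck arrives in about {time_to_arrive:.0f} seconds."

# Example: a 12 m crosswalk walked at 1.2 m/s, truck 150 m away at 13 m/s.
print(crossing_decision(12.0, 1.2, 150.0, 13.0))
```

In this example the crossing takes about 10 seconds while the truck arrives in about 12, inside the margin, so the module would answer "no go": exactly the kind of specific, quantitative answer described above.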

Nolan Schultheis (17:17):

I like the application of AI you're using, specifically in relation to trying to provide accessibility to people. And I like that you took a socio-cultural lens and looked at other factors that could keep people from getting a service animal. I think this concept is probably the way we should be applying AI to technology, but, like you mentioned, in academia there's no profit incentive, and I don't know if, in the greater scheme of companies and the world, people would be willing to forego their greed for the betterment of others. But that was just something I wanted to say. What you were mentioning earlier actually ties in perfectly to a question I had for you in relation to the dog's behavior. Do you think that the technology presently available will properly emulate a dog's behavior? And what strategies will you implement to make the dog's responses feel more natural and/or empathetic?

Blake Hament (18:27):

So the short answer is that it's a whole area of research I really haven't touched and haven't developed a lot of expertise in. I think the best way forward would be for someone with a more technical background, like myself, to partner with an expert on dog behavior. And reinforcement learning, a branch of what we would now call AI, is something I think would be very useful for this: identifying ways to capture a dog's behavior, whether that's through video (that tends to be the prevailing way now, and we can track down to the skeleton of the dog), and learning what the exact movements and behaviors are. That tends to have a huge effect on adoption of robots, because we all have this subconscious programming running, and there's this phenomenon called the uncanny valley, where we're actually pretty comfortable with a cartoonish robot or artificial being.

(19:23):

But as it gets closer to a realistic person or dog, there's something subconscious that alerts us that something is wrong, not quite right, not quite normal, and it gives us this very negative, uncanny feeling. So little things like having this robot twitch its ears and wag its head or its hindquarters in realistic ways would go a long way toward making people feel more comfortable. We've seen robots being demoed in public places get attacked; there are very strong and somewhat valid reasons to be skeptical of new technology and AI and robotics, and I can understand how that could be channeled. Imagining that somebody is depending on this device or robot for accessibility, we want everyone to feel comfortable in public with it. So the more we can recreate realistic physical features and motions, the more helpful I think it will be.

(20:19):

I wanted to mention, because I think a lot of what your question is leading to, when you talk about natural or empathetic, is that there is kind of a relationship going on too. And we have seen the beginnings of some really promising robotic work. PARO is a therapeutic robotic seal, a seal like the animal, and it's been very popular, especially with autistic children. Having this robot that is somewhat predictable and very safe and comfortable to interact with, and to practice different social interactions with, can be very helpful for some people in our population. I could also see arguments for why it might be detrimental to replace interaction with each other with more robots and AI and things like that. So it's definitely a very careful balance, a fine line, that we're all going to have to walk and explore here.
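Dr. Hament mentions reinforcement learning above as a tool for capturing behavior. For readers unfamiliar with the idea, here is a toy Python sketch: an agent learns, from reward signals alone, which action to prefer in each state. It uses a one-step, bandit-style value update rather than full Q-learning, and every state, action, and reward here is invented for illustration; none of this is the project's code.

```python
import random

# Toy reinforcement learning: learn from rewards which action to prefer
# in each state. States, actions, and rewards are all hypothetical.
states = ["light_red", "light_green"]
actions = ["wait", "cross"]
q = {(s, a): 0.0 for s in states for a in actions}  # value estimates

def reward(state: str, action: str) -> float:
    # Hypothetical rewards: crossing on green is good, on red is very bad.
    if action == "cross":
        return 1.0 if state == "light_green" else -10.0
    return -0.1  # small cost for waiting

alpha, epsilon = 0.5, 0.2  # learning rate, exploration rate
for _ in range(1000):
    s = random.choice(states)
    if random.random() < epsilon:
        a = random.choice(actions)                  # explore
    else:
        a = max(actions, key=lambda x: q[(s, x)])   # exploit
    q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])  # move toward reward

for s in states:
    print(s, "->", max(actions, key=lambda x: q[(s, x)]))
```

After training, the agent prefers to wait on red and cross on green, having never been told those rules explicitly; it inferred them from reward alone, which is the appeal of the approach for behaviors that are hard to specify by hand.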

Nolan Schultheis (21:17):

So I have a follow-up question actually. I know that animals give off pheromones and hormones and there's a chemical element to the relationship between human and animal. Do you think that there's going to be a way that you guys can navigate around that or do you think that that's just something that's going to have to be part of the difference between natural versus robotic?

Blake Hament (21:41):

That is such a cool and insightful idea, not something that had occurred to me before, but you're absolutely right. Between animals, and I'm sure probably even with plants, we're doing all types of very subtle chemical exchanges. Another subset of AI would be machine learning, and this is, again, not new, but it's starting to become super widespread. Back when I was doing high energy particle physics research, we were looking at a ton of data, and the big buzzword of the day, actually not quite yet, but soon to follow, was big data. A lot of these techniques, from big data to machine learning to AI, are decades old now, but they're tried and true, and we're seeing some cool work, especially with smell and taste, analyzing what molecular components contribute to certain stimuli or sensations and then recreating them. So, like, 3D printing smells. And I could imagine something like that for these pheromones and hormones, maybe artificially deployed on a robot.

Jessie L. Moore (22:54):

I also appreciate your nuanced discussion of the ethics and implications of these types of choices, because as we think about it, if we don't want to put a robotic dog at risk, maybe we don't want the pheromones, because we may not want other real dogs to feel threatened by a new dog in their environment. It's just an interesting dimension to think about.

Nolan Schultheis (23:23):

So you had touched on this a little bit, but what role do you think AI will have in technological advancement moving forward? And do you think AI will be involved in almost all areas of technology?

Blake Hament (23:35):

Yeah, I think we've hit on a lot of these points already, but to summarize: it's largely a marketing term at this point in time, and we're likely in an AI bubble. I've been reading a history of financial speculation, Devil Take the Hindmost, and there have been so many historical parallels: the South Sea Company, railroad companies, the internet. We have new technologies that do promise lots of good things for society and will certainly revolutionize it, but we tend to way overhype them. And then there's both an economic and psychological kind of depression that comes after we recalibrate; it's a longer-term journey with these new technologies. We've had machine learning, the early forms of natural language processing that gave us LLMs, and reinforcement learning around in their infancy for decades, and certainly the military and some specialized industries have deployed them and been using them for decades as well.

(24:44):

But we're now at the point where they're being presented as very wide commercial products, and with LLMs specifically, I think we will have a bit of a hangover soon. Computer vision, often called machine vision, is maybe the quieter revolution: it used to be very expensive and limited to very specialized applications, but now it's much more accessible. So, for better or worse, it's in most factories, it's on your phone, and it's in the cloud being applied to security camera footage of all Americans with Flock. So it's here to stay. And again, machine learning, big data, probability-based analysis of huge data sets, it's just a part of life. And yeah, it's going to be part of everything. I like to talk about it in terms of those specific technologies. As far as I know, we're nowhere near a digital consciousness or general intelligence. I've got a fun story that I like to tell about the first robot. This was in Europe, in, I believe, the 18th century, and it toured for about 50 years. It was called the Mechanical Turk, and it was this mechanical man that sat at a chessboard and would challenge people to play chess. There would be these big expositions where someone would sit down and play chess against the robot, and usually the robot would win.

(26:20):

And they did a really good job of presenting this in a very spectacular way and capturing the public imagination, and most people went to the exposition and came away thinking, wow, this is an amazing robot. It wasn't until about a hundred years afterwards, and even in many parts of the world it's still not common knowledge, that it came out that there was actually a man inside the contraption, with a magnet underneath the chessboard, moving the chess pieces. I see so many parallels today with a lot of AI. I've worked with robots that were claimed to have a digital consciousness, and once you actually see into the code, it's very scripted conversation and scripted motions, with a little bit of probabilistic variation added in. This is not general artificial intelligence, but people are for sure going to keep marketing it as such. And so I think there's a healthy amount of skepticism we should all hold in regard to that.

Jessie L. Moore (27:20):

And with the examples that you've shared so far, and some of the discussion you've already provided about the range of things we label AI and some of the cautions we need to have, I imagine this next question may require a little bit of nuance, but I'll just ask it straightforwardly: how does your research with AI inform how you approach the use of AI in your teaching, and what types of AI are you integrating into your teaching, if you are?

Blake Hament (27:54):

Yeah, it is exciting at the end of the day, even as I feel some responsibility to inject some skepticism into the conversation. It's exciting because the courses that exist now called AI or robotics really don't do our students justice anymore. We need three, four, five courses in place of each to really get specific. Instead of just one AI course, we really need a natural language processing course, a machine learning course, a reinforcement learning course, a computer vision course. So I'm starting to develop a reinforcement learning course at the moment, and I'm glad there's so much more interest in that. Students want and need to be trained on the specifics of how all this works. I've tried to mention some historical analogs; I really see this as another piece of technology, and it can be used for good, it can be used for bad. It's certainly going to change society.

(28:53):

But there are all these historical parallels of people worrying about the collapse of society or the moral failing of the new generation because of how a technology is going to change work and life for them, and it's probably never going to be that drastic. So I take some comfort from that, and I try to share that with my students. We have seen already how quickly writing is changing with LLMs, but it's really come for all of academia now. There was a while where it felt safe, like, okay, they can't do super complicated math problems, senior-level and graduate-level math problems, and it's not that great at programming. There are still a lot of failings, but it's getting better all the time. And we're basically at the point where students can take a picture with their phone, or screenshot a problem out of even a senior-level course, and get step-by-step instructions for exactly how to do it.

(29:47):

And in a way, that's good, right? It's making this education more accessible and democratic. So I try to, for the most part, embrace it. I think it's inevitable, but I also try to be realistic that there could be students who try to take shortcuts. And so I think it's all the more important to explain, or not even just explain, but to give students opportunities to do hands-on, applied projects, so that they're doing work similar to what they would do in the workforce. And I coach students that it's a tool plus expertise: this is a tool that can help you work a lot faster and more efficiently, but if you don't also have the expertise to know when it's wrong, or when there's a better way to do things than what's being proposed, then you're not going to be able to wield this tool very well.

(30:40):

Working in engineering, there's a lot of work to be done on these tools as well. So that's been an interesting journey. I'm someone who really likes to understand how technology works and to rant and rave and talk about it. But knowing my audience: most people use a car and a computer every day, and they don't actually understand deeply how those pieces of technology work. They know enough not to get in trouble. So I sometimes have to take a step back and think about who my audience is right now when I'm talking about an AI or robotic technology, and maybe they don't really care to know that deeply how it works. But engineering students, especially in some of these classes, should; that's what they're signing up for, and they may be involved in developing these tools further. So that is definitely informing my teaching.

Nolan Schultheis (31:31):

It's interesting for us, having interviewed other people about AI so far, to see the parallels you're unknowingly drawing with what they have said. For example, you were talking about how AI isn't anywhere near a digital consciousness yet. We had explored this topic in the past, though not in the frame of whether it is one, but rather how it acts as a conscience. And we kind of came to the conclusion that AI seems to be a lot of a yes-man currently, and that while it may seem like natural conversation, it's actually just pre-programmed responses drawn from such a large pool that it seems natural to you. And I also liked the example you gave about having to change your assignments around. We had just spoken with someone who was looking at AI in a writing context, and she essentially said that was the entirety of what was happening: she had to change the assignment structure, she had to change what she was looking for in the assignment, just because of the very nature of AI cutting corners and really eliminating a large process of thought.

Jessie L. Moore (32:43):

And our listeners might not think of a lot of parallels between an English professor and an engineering professor, other than that they both start with ENG. But a lot of the things you were saying speak my language as a professional writing instructor and as someone who studies engaged learning. One is your comment that it's just another new technology: we see the fears in history, and then we learn to work through it, and we recognize that there's still room for the human, and we start to see the limitations. And to Nolan's point, we also start to think about what it is that we really want learners to focus on. Maybe, as in the case of one of our previous guests, it's not doing things that technology can and maybe should do for the learner, but engaging students in the critical thinking to evaluate the outputs they're getting and the ways they are using or not using technology, and being prepared to talk about why. I also really appreciated, and this has been a thread throughout our conversation, your continued connections to broader context, which is a key practice for fostering engaged learning, and your point that giving students authentic projects to work on gives them the opportunity not just to learn the ideas but to practice them in context. That just warms my heart, so thank you for prioritizing that for your students.

Nolan Schultheis (34:19):

What would you like students to know about AI as they contemplate using it to support their learning?

Blake Hament (34:26):

So I think we've hit on a lot of these points, but just to reiterate them: AI is a great tool, but you still need to develop expertise in order to wield it well. I love using AI to code these days, but I've really built a very deep fundamental understanding of computer programming. It often will suggest a 90% correct computer program based on my prompt, and that's great. It saves me an hour or two of drafting all of that, but there will be some key mistakes, or things it tries that I actually want to do a little bit differently, or that I know a more efficient way to go about. So it's this wonderful tool, but I would be at a loss if I hadn't already developed the expertise to wield it. Engineers are building bridges, cars, planes. It's really important not to take shortcuts and to have this expertise.

(35:23):

And I'm sure there are parallels in every discipline. It's important to focus on authentic value and trust, and on building those in whatever you're doing. The consequences of your actions, positive or negative, will catch up to you one way or another. So: it's a tool; develop the expertise; be strategic about how you use it; don't take shortcuts that are not good to take. And the Mechanical Turk is a good example to remember as you're introduced to new technologies and new pitches; there are a lot of motivations and incentives out there that might not be in line with the truth.

Jessie L. Moore (36:12):

Well, thank you so much for joining us today. I'm internally geeking out a little bit at some of the intersections, and I'll share one really quickly. In professional writing, there has been a lot of thought and discussion about when writers should partner with scientists, and where in the process that should happen, to make sure that we're communicating ideas effectively: that the user experience insights professional writers might have access to get back to the engineers and other designers, and that the engineers can communicate some of the limitations, but also some of the opportunities, afforded by whatever they're working on. And I really appreciate that throughout our discussion you have repeatedly mentioned collaborations that center the interdisciplinary work that can happen and that can help inform more critical, ethical decisions as we work with exciting new technologies with longstanding foundations and think about integrating them into higher education. So thanks for letting me nerd out a little bit about the intersections of our disciplines, and thank you for your attention to partnership in your work.

Blake Hament (37:33):

Absolutely. It's so fun, and I love being on a liberal arts campus that encourages that, where I get to do a lot of it. I'm not in a basement with no windows, working on a computer by myself. It's good to be connected and in dialogue with everybody.

Jessie L. Moore (37:55):

So Nolan, what stood out to you in this conversation?

Nolan Schultheis (37:59):

There were a lot of interesting aspects, but I would say one of the more interesting ones was the warning that AI isn't as advanced as we think it is, and that it's nothing like the Terminator view we have of it, where it can take over the world and make us all dull humans. It's a very useful technology, but in its current position, it doesn't have the capacity to completely override what we do by ourselves.

Jessie L. Moore (38:33):

And within that context, I really appreciate his attention to partnering with people who bring additional knowledge and expertise to the context, whether it's understanding what kinds of existing tools folks with visual disabilities are using, and how a robotic dog might extend or attend to some of the challenges there, or thinking about bringing in someone who has expertise in dog behavior if you want to make it seem more dog-like. It's a recognition that many of these complex problems we're facing every day, what we might think of as wicked challenges or wicked tendencies, do require expertise from multiple disciplines, and thinking about how to partner rather than be in competition. That's not a phrasing that Dr. Hament explicitly used, but I really appreciate his intention to partner with others to more holistically address challenges in authentic ways.

Nolan Schultheis (39:49):

The partnership and kind of communal way he's attacking this problem is very respectable, and it's probably one that I would recommend to other people no matter the context. He said that there are certain sociocultural limitations to getting an actual service animal, and I think in a world where we're such a one-track, start-to-finish kind of society, it's better to go off the beaten path and think about other influencing factors. I like that he was able to see those and factor them into how he was planning out the robotic dog project.

Jessie L. Moore (40:33):

Once again, I'm Jessie Moore.

Nolan Schultheis (40:35):

And I'm Nolan Schultheis. Thank you for joining us for Making College Worth It from Elon University's Center for Engaged Learning.

Jessie L. Moore (40:42):

To learn more about generative AI and engaged learning, see our show notes and other resources at www.CenterForEngagedLearning.org. Subscribe to our show wherever you listen to podcasts for more strategies on making college worth it.