ChangED

Beyond Hallucination: Critical Thinking in an AI-Powered World

Andrew Kuhn & Patrice Semicek Season 2 Episode 41

What happens when artificial intelligence stops being a buzzword and becomes an actual classroom partner? The possibilities are both exhilarating and terrifying. In our conversation with educational technology expert Dr. Brian Housand, we explore the transformative potential of AI in education when approached thoughtfully and strategically.

Most educators are stuck in two extremes: either banning AI outright or allowing unchecked use without guidance. Housand offers a refreshing middle path by reframing AI as a "thought partner" rather than just another search tool. This shift in perspective opens up powerful opportunities for collaboration, especially for gifted students who might struggle with traditional peer interactions but thrive when bouncing ideas off an AI system.

The heart of successful AI integration lies in emphasizing process over product. When teachers evaluate only final submissions without engaging with students throughout their creative journey, they inadvertently encourage AI misuse. Housand suggests a counterintuitive approach: teachers should test their own assignments in AI tools first, then use those generic outputs as starting points for classroom discussions. 

Perhaps most valuable is Housand's CAPES framework for evaluating information authenticity: Credentials, Accuracy, Purpose, Emotion, and Support. This systematic approach helps students engage critically with all content—whether human or AI-generated—in an era where information moves at lightning speed and verification becomes increasingly challenging.

Ready to transform your relationship with educational technology? Listen now to discover practical strategies for harnessing AI's potential while teaching the critical thinking skills students need for a future where artificial intelligence is simply part of the landscape. As Housand reminds us, "This technology is not going to go away. It's going to continue to get stronger and more advanced"—our responsibility is to ensure we're prepared to use it wisely.

Want to learn more about ChangED? Check out our website at: learn.mciu.org/changed

Speaker 2:

Let's get weird.

Speaker 1:

That's my number one that's going to be my opener.

Speaker 2:

when we do the opener, let's get weird. I love it.

Speaker 1:

Welcome back to ChangED, the national... what am I going to?

Speaker 3:

Say it's a podcast? This is a podcast.

Speaker 1:

Yeah, no, no, no, I want to say the national lighthouse for educational podcasts. Wow. Wow, even our guest. I'm no longer your host, Andrew Kuhn.

Speaker 3:

Thank goodness it only took Brian Housand coming for us to lose Andrew Kuhn. Education consultant for the Montgomery County Intermediate Unit, and here with me is Patrice Semicek, still an educational consultant from the Montgomery County Intermediate Unit. And our guest is someone who is going to talk to us a little bit.

Speaker 1:

Dive into not only AI, but also.

Speaker 3:

We went to the best session. It was so good.

Speaker 1:

It was so good. It was really good. We were hearing about your perspective on AI, and you had a lot of great information but also examples. I really appreciated that you could model its use. So you weren't necessarily saying, here's the tool to use; it was more like, here's how you can use a tool. I'm wondering if you could share with us a little bit about your thought process, how you got to that spot, and why you chose modeling in your session when you were talking about AI.

Speaker 2:

Sure. Well, thank you for having me, and I'm so glad that you had a good, comfortable experience.

Speaker 2:

You know, it's interesting kind of having these conversations about AI, or really technology in general. When I first started out doing this type of work as an educational consultant, I started talking a lot about technology in a variety of different ways. It was, you know, the early 2000s, and everybody was trying to figure out what they could do with Google, and there were so many people within that space who were really kind of talking about the tools without actually showing how the tools work. There are also, I think, a lot of people who fall in love with one particular tool rather than really focusing on the thinking that goes into that tool. The tools are going to change almost every day at this point, but I think that as long as we have an understanding of how the thinking works with the tool, then we're going to be able to carry that a lot further along the way. So exactly the same type of thing happened when people first started picking up ChatGPT back, you know, in the distant year of 2023... '22, like early '23.

Speaker 3:

It feels forever ago Like 2022.

Speaker 1:

Way back. It feels like it's been a long time.

Speaker 2:

It's not been very long at all, and people wanted to talk about, like, the theory behind it as opposed to what can we do with it. And I think that the quicker we can just sort of jump into that deep end, try some things out, give educators some ideas of "here's how it can be useful for me," then there's going to be much more buy-in, at least from my perspective.

Speaker 3:

I really appreciated how, in your session, you started with the why. Like, why is this even important? Why is this even something we should be spending our time on? And then you listed three or four different ways we should be thinking, because you used AI to help answer that one question, which was fabulous. And then you got into, okay, well, now let me show you, which was the other thing that I think Andrew mentioned too: showing them that it's not this big, scary tool that could possibly take over their jobs or remove something from them. Instead, it's an addition to what they're doing.

Speaker 3:

And then you, like, demonstrated: okay, well, how can I use this in the classroom? I think that's a really powerful way of getting people to understand, to get over the barrier of AI being this new tool that's gonna potentially kind of really shake things up. The other thing, and we have a lot of really interesting conversations, Andrew and I, is kids are using it anyway, so we need to help them use their power for good instead of using it for whatever the nefarious reasons they're using it outside of school or even within school. Right, like so many nefarious reasons.

Speaker 3:

So, especially with gifted kids, if we're going to be honest: if you only used your power for good, it'd be amazing. So I think the way that you presented it was fabulous. So, because you've been presenting for a while now, right? Like, I've seen you at a few conferences. Have you, like, read anything or figured out... how did you figure out how to present things in that way?

Speaker 2:

You know, I always look for what is the good story. I think that everything that we do in life really relates to, you know, what is the narrative on that? And, you know, when structuring a lesson, when constructing a presentation, a workshop, whatever it is, I always look for what's the story thread, and you really kind of think of it from that story structure. My first degree is in English, so I have an understanding of, you know, how a play works.

Speaker 1:

So here's how you develop a novel.

Speaker 2:

So my first job... you know, because I was an English major and I graduated, and that qualified me to do so many things. So my first job post-college was as an assistant manager of an independent video store in the Atlanta metro area. That was like mid-'90s, so we were like a Blockbuster competitor.

Speaker 3:

Yes, yeah.

Speaker 2:

All of the stories that are coming to your mind, all true.

Speaker 1:

Be kind, rewind. Yeah.

Speaker 2:

Every single one of them. And so, yeah, I mean, I just watched a whole lot of film. You know, that was pre-streaming, and you really had the opportunity to make that movie story, like, your library, and really kind of think about: what are people going to remember? What are going to be those sticking points? And how do you create enough of that compelling story that they want to be a part of it? So, yeah, that's, I think, what goes into any good lesson. Anything that happens within the classroom needs to have a good story to go along with it.

Speaker 3:

So how did you use AI? Or did you use AI to help you craft the story?

Speaker 2:

Yeah, so in thinking through that, so like within this presentation, which was totally the first draft of that particular framework, I was thinking through it more using Kaplan's Thinking Like a Disciplinarian framework, and I wanted to provide a few different perspectives.

Speaker 2:

And the three that I kind of came up with were that we were thinking like a philosopher, we were questioning like a scientist, and we were creating like an artist. And so by really kind of latching or attaching to those three points, it provided a lot of leeway to really build things out for each of those points. Then I asked AI, for example: how does one begin to start thinking like a philosopher? What are five things that I could do in order to think like a philosopher? And, you know, give me some really good ideas from there. That was kind of the general outline. And then, using those points, I thought, oh well, how can we build out, or how can I give more specific examples of what that looks like? Using AI really as that thought partner to help take what it was that I was already thinking and annotate it in a way maybe that I wasn't thinking about.

Speaker 3:

I love that. I love that. Andrew and I talk a lot about AI being a co-pilot, not a pilot or a backseat driver, but a co-pilot, and so being able to use any tool to enhance is a fabulous idea. Thank you for explaining it like that.

Speaker 2:

Yeah, you're welcome. I mean, I think, you know, the real worry of most teachers is that their students are going to use AI as sort of that "oh, I'm going to Google the answer." Like, they're going to take my existing prompt or assignment, put it in there, and then they're going to copy, paste, and submit, you know, the answer. That's, for me, a real problem. The problem for me around that comes with this overemphasis on the final product versus the process that we go through. If we only check on our students when they submit that final product, and we aren't with them on that journey of going through the messiness and awkwardness of the creative process, then, yeah, there's the real potential that they could just, you know, ask AI and submit that.

Speaker 2:

As I've been talking more and more about AI, the thing that I've really tried to emphasize to educators at all levels is the importance of taking pretty much every one of your existing project assignments and just submitting it to all the AI tools that you can get access to, just to see what it is that they come up with, so that you know, like, hey, here's what is going to happen, here's what a product might look like if I just submitted it as is. I'm really kind of fond of the idea of using that for your students as a starting point, saying, like, hey, I went ahead and submitted the assignment to AI for you. This is what it said. Tell me why this project pretty much sucks at this point, right? How can we use that as the starting point, and how can you improve on it to make it better? How can you personalize that so that it's representative of the things that you're interested in?

Speaker 3:

I think too.

Speaker 2:

Or where is it that you want to go next?

Speaker 3:

Yeah, to your point. I think if we don't talk about AI in our current classrooms, or we sweep it under the rug, that's when they're more likely to submit something and say, here, it's done. And then you see all those words that you showed us that are showing up. So we, and this is TMI, I guess, wrote a proposal, and we stuck it into Gemini to say, make it better, like, make it fluffier, and there were at least three of the words that you shared in there, and we were like, nope, can't have that word in there. That's very obvious.

Speaker 3:

AI. It was actually really good to test it out, and it kind of validated exactly what you said in that study. But I think if teachers ignore it, they're going to get those cookie-cutter "I put it into AI and figured it out" kind of situations. But if they do exactly what you said, or if they say, like, okay, we know this is what's going to happen, I need you to write something, put it in, make it better, and then you revise it, use it as a revision tool, I think we're going to see a lot of really cool things. And I like how using what you suggested allows kids to figure out how to prompt AI better, too, to give them the answers that they're really looking for.

Speaker 2:

Yeah, I mean, you know, for me AI has that potential to be that thought partner that you can bounce those ideas back and forth with. And many of our gifted kids do not necessarily like to collaborate with others, because they've had some really bad experiences with that, but I think with AI they have this potential to, you know, have some really interesting conversations. Granted, they should also be having human-to-human conversations and not just human-to-AI conversations, but I think it affords them some new opportunities to think about things in ways maybe that they haven't been willing to before.

Speaker 1:

Brian, one thing that you were talking about that I really appreciated and wanted to just take a minute to talk about was that you know, we all rely on previous experiences to help us understand something that's new. What can we attach it to, what can we look at? But I think one of the shortcomings of us doing that is that we're Google-izing AI and we're saying, oh, let me go there and get the answer and come back. You know what does it look like to authenticate that answer? Right? How do we consider all of these things that are part of it? And what I love about what you're talking about is how is this now a dialogue? And one of the things that I saw that was so powerful just to see it come to life in your session was the speed with which you can get that information back.

Speaker 1:

And now it's all encoded, so the information comes to you so fast you can make these decisions. But the part that humanizes it is based off our previous experiences. We could have got there, right, but it would have taken so much longer. And you're like, oh yes, and that makes me think of the next thing, so it actually pushes us to the next level. We can stay at a different level of thinking and processing versus, you know, kind of ebbing and flowing. We can get to that spot, like you said, with this thought partner.

Speaker 1:

When I interviewed for the job that I'm at, I actually said I don't do anything in isolation, I'm always talking to other people. And now you can develop this thought partner that actually, eventually, understands kind of your perspective, or what are you asking by what you're not asking, right? We're only at the cusp of where this can go and the potential of it. So again, the thing that I took away was just how quickly it could happen. You even gave us five minutes in our session to talk about different ideas we could do with something, and then you're like, well, let's just punch it in here.

Speaker 1:

Boom. We had the ideas we came up with in the room, plus, you know, easily 15 more that came instantaneously. But then we could take that even further, where, instead of our entire session with you being about this one part, you beautifully and masterfully demonstrated for us that power and how we can work with AI, not just looking for an answer. We knew where we wanted to go; we didn't necessarily have it all mapped out of how we were going to get there, but we knew where we wanted to go. And I loved how you were saying, you know, the journey can be messy, and so we also kind of demonstrated that: like, oh, okay, let's lean over and do this example, or let's look more into this. But that instantaneousness, not just fast to be fast, but helping push us along and take us further along the journey.

Speaker 2:

Thank you for just giving me all the feels, and, you know, thank you for the positive feedback. I'm just going to put you, like, on repeat.

Speaker 1:

Anytime, anytime. Call us for a hit.

Speaker 2:

So, yeah, I think that, you know, until people see those possibilities, they just don't even know. Like, they don't know what they don't know.

Speaker 2:

And then once you open up that door, they can start saying, oh well, if it could do that, I wonder if it could do this also. You know, in kind of thinking through this portion of the conversation, I'm sort of reminded of that quote from Arthur C. Clarke, who says something to the effect of: any sufficiently advanced technology is indistinguishable from magic. And indeed, the first few hundred times that I saw ChatGPT or Gemini or any of the AI large language models produce some, you know, relevant content, it felt like magic, just because it happens so quickly. No matter what it is that you ask AI to come up with, it's going to produce something that's going to fit exactly what it is that you're asking for. That feels just really new and fresh, and, I mean, you know, we're almost two years into this and it feels incredibly exciting still.

Speaker 3:

Very good point.

Speaker 1:

Well, we've touched on this idea of how do we authenticate the information that's coming through, and we even talked about, you know, Google-izing AI and so forth, venturing into it in a different way. One of the things that struck me was that you talked about becoming a super critical consumer of information, and, if I'm not mistaken, which I never am, Brian, you used the analogy of CAPES to kind of help individuals work through that. Would you mind talking about that a little bit and sharing with us? I'd love for our listeners to have something they could at least look to or start to grab on to as to what it looks like to figure out if this is fake news or if this is something genuine.

Speaker 2:

In the session we were really kind of talking about the importance of being a critical consumer. This isn't necessarily a new conversation; AI just presents perhaps some new challenges for us in authenticating the information that we're accessing. Back in probably 2018 or so, I wrote a book called Fighting Fake News, teaching kids the importance of critical thinking in a digital age. It was really built on a lot of work that I did when I was in grad school at the University of Connecticut back in the mid-2000s, during the aughts. So in addition to working within gifted education with Joe Renzulli, Sally Reis, Del Siegle, and all the others that are part of the Neag Center, now the Renzulli Center, I also went and played with the ed tech playground folks over there. So right around that time, Don Leu, that's L-E-U, was a professor at UConn, really focusing on new literacies, because what they were really kind of finding out in those early 2000s is that as students were navigating search engines, they were also having to learn a set of new literacies in order to be better internet searchers. One of the studies that his team, the New Literacies Research Team, was probably famous for was the whole Pacific Northwest tree octopus website: students went and found the tree octopus website, but then they thought that the information they found there was a hundred percent accurate. That was like back in 2005, 2006. It's now 2024, almost 20 years later, and we're having exactly the same conversations, because just because AI can very quickly produce this information doesn't necessarily mean that it is correct. So to really kind of battle that, in that Fighting Fake News book I created a framework that I called CAPES, for what we want to do when we're looking at that information. And this works mostly with more kind of news-media-outlet type of information versus what AI is producing.
So CAPES is an acronym, where each letter stands for a different kind of way of thinking. The C is for credentials. Are we going to be able to find out who is saying this information? What makes them an expert on that particular topic? The A stands for accuracy. Can we look and determine how true or not true that is? Can we compare that with another source? Can we triangulate that data? P stands for the purpose. Is the purpose really meant to persuade? Is it meant to inform? Is it meant to entertain, or is it trying to sell us something? So kind of using that PIES framework: persuade, inform, entertain, or sell. That purpose is really important.

Speaker 2:

Again, when we're looking for that information, particularly coming from online news organizations, the E is emotion. As I looked at other critical thinking frameworks and really thought about information that we find online, the emotion portion was really left out. How does that information, factoid, website, story, YouTube clip, whatever it is, how does it make us feel? Does it make us feel angry? Why are we angry about that? Does it make us feel sad? How are they playing with our emotions, and are we checking those emotions before we're, you know, clicking retweet or re-X or whatever it is that we say now? The last piece is support, which really kind of goes back to that triangulation of data. Who else is saying this, and are we verifying that this information is really correct as a part of that?

Speaker 2:

So CAPES, I think, works somewhat with AI, but, you know, honestly I think we probably need a whole new framework, because there is this real concern that AI is going to hallucinate or just make up information. Probably one of the first times that I talked about AI, I was doing a workshop in Texas, and somebody had posed that question of, like, can we trust it? And, almost without thinking, my gut reaction was, like, do you trust me? Do you trust me? And she's like, well, yeah. I'm like, why? I could totally be making all of this up right now. I could be speculating left and right about what is the right or wrong answer. She's like, but I trust you. You're, like, a speaker, you have a microphone, you're standing in front of a group of people. And I'm like, but you should still question what it is that I'm saying. Yeah, yeah, yeah.

Speaker 2:

You don't just need to accept it. Yeah, like, you've got to fact-check it. And that fact-checking, I think, can be a lot of work. People hallucinate a lot too. It's part of human nature: we make things up when we don't know.

Speaker 3:

Yeah, especially at the rate at which we're inundated with information. Like, there's no escaping information of all kinds around us. Yeah, we just have to be careful with that.

Speaker 2:

Yeah. Not that we need to, you know, question absolutely everything, but we just need to critically consume and think about how far we're willing to trust it. And it's going to vary; in every situation it's going to be a little bit different.

Speaker 1:

We're so much more quick to trust something that we can see versus something that we can't see, but you can't always judge a book by its cover, as you were saying. I don't know that Brian Housand is necessarily saying all things that are truthful right now.

Speaker 2:

I'm going to have to backtrack this podcast. That's right. I could be completely lying to you right now. But it's fascinating, because I think we just have to be careful, and you have to believe in someone, something, otherwise you would never get anything accomplished.

Speaker 2:

And I think that it's really our prior experiences, our knowledge base, and, really, carefully examining what our biases are and how our biases are influencing the way that we're thinking and who it is that we believe or don't believe. It's a complicated minefield that we have to navigate, and, you know, to teach our learners, our kids, how to navigate that minefield is a Herculean task, to say the very least.

Speaker 3:

Yes, yes, I would definitely agree.

Speaker 1:

Brian, one of our longstanding traditions on this world-renowned podcast. Very popular in North Carolina, by the way.

Speaker 3:

It will be now.

Speaker 1:

Yes, is that we like to give our guests the second to last final thought. Do you have things that you'd like to share, that maybe you didn't get a chance to say, or that you'd like to circle back to and really emphasize?

Speaker 3:

Andrew likes to give the second to last because he needs to have the last word.

Speaker 1:

This is going to be challenging. This time, I'm not going to lie.

Speaker 2:

I would say, for my second-to-last thought as it relates to AI: be curious. Try things out with AI. Don't be so quick to judge when you haven't asked it questions yourself. You have to be willing to jump into that deep end to see what it's capable of. If we automatically discount it or try and shut it down or ban it before we even have an understanding of what it can and can't do, then we've already lost. This technology is not going to go away. It's going to continue to get stronger and more advanced, and it has a tremendous amount of potential. I think that it can be a really beneficial tool for us. It's not going to solve all our problems. It's also not going to end the world as we know it. Yeah, at least today.

Speaker 1:

Tune back into the podcast to learn more about that. Well, obviously you've seen my notes, because you said everything I was going to say, Brian, so now I've got to come up with new material on the fly. We really appreciate you coming on the show and sharing your time, your wealth of knowledge, and even your experience. There were two things that really... well, one thing that really... there were a lot of things that really stuck out to me. Two, then one, then, wow, now a lot.

Speaker 1:

Wow, one of the things that really stuck out to me when you were talking about your own presentation, how you prepare it, was you said what are people going to remember?

Speaker 1:

I truly believe this is one of those moments in history where people will look back and they will remember how we handled it, what we were doing, our apprehension about it, but also the great opportunities that were in front of us. They'll even look back and say, well, we had no idea this was coming; this opened up a whole new can of worms, or made a whole new world of possibilities. And what I appreciate about that is, you know, people will also remember our own individual approaches to it, and as educators we have this additional responsibility: you know, we can elicit fear, or we can elicit excitement or caution.

Speaker 1:

You know, there's a lot that we do not just by what we say but by what we actually do. So I actually felt very encouraged by that, that people are going to remember our actions. They're going to remember how we talk about things and what we do with it. And for all of our listeners, the one thing that I really liked in your conversation about CAPES is that CAPES is at least a spot where we can start and enter into this conversation and start to empower ourselves and our students. The thing that I want to remind all of our listeners is to follow the airlines and first put on your own CAPES before you help students put on their CAPES. Wow.

Speaker 2:

Yeah, I've been working on that for 20 minutes.

Speaker 3:

That was really well played. Wow, that was a lot.