
Tech Therapy: Mental Health and Potential AI Uses – Podcast

By Jason Clayden on October 18, 2023

While AI isn’t a replacement for therapy, it offers valuable insights and skills. LifeStance providers Nicholette Leanza and Brady Mertens explore AI’s potential uses and discuss its limitations, including bias and the importance of human connection. Together, they compare responses from the AI chatbots Google Bard and ChatGPT on how to address issues in personal relationships, highlighting both the benefits and the need for professional guidance in mental health treatment.


Nicholette Leanza:

Welcome to Convos from the Couch by LifeStance Health, where leading mental health professionals help guide you on your journey to a healthier, more fulfilling life.

Hello everyone, and welcome to Convos from the Couch by LifeStance Health. I’m Nicholette Leanza, and in today’s episode I’ll be talking with LifeStance clinician Brady Mertens about how AI can be useful for mental health treatment. So welcome back, Brady.

Brady Mertens:

Thank you. Glad to be back.

Nicholette Leanza:

Really excited about this episode, because I get the chance to talk to a fellow clinician about AI, which I’m a fan of. And yet at the same time that I’m saying I’m a fan of it, I am not a computer scientist. I know it as well as one can understand it if you’re trained as a mental health therapist, which means I’m probably limited in the deeper understanding of what AI really is, but I know how it can be useful for me and I think I also know the limitations. So that’s what our conversation’s going to be about. I think it’s a promising new technology that has the potential to improve mental health treatment. So let’s see where we go with this.

Brady Mertens:

Sounds good.

Nicholette Leanza:

Let’s have you start us off: what’s a simple way of explaining AI?

Brady Mertens:

So I ironically had to use ChatGPT to help me understand it a little bit more.

Nicholette Leanza:

Love it.

Brady Mertens:

It gave a little bit higher level of detail than I needed, so it starts talking about all this architecture and deep learning and I’m like, okay, that’s a little too much. So in a nutshell, from what I saw there, it’s programming that is designed specifically to mimic intelligence, to try to do and achieve the same things that we as humans can do, and even some of our animal counterparts, like crows solving their puzzles, and some of the social interactions between elephants and chimps and dolphins, reaching that level. I know when I first think of AI, I think of Asimov’s I, Robot and the movie with Will Smith portraying that specific story, or even Battlestar Galactica with the Cylons. So these things that are self-aware. And we’re not at that point yet, but I always have to remind myself we’re talking programming, we’re not talking self-aware yet.

Nicholette Leanza:

Yeah. Oh, isn’t that interesting to say ‘yet’? I think there’s been a lot of talk within the tech world about that ‘yet’, when [inaudible 00:02:37] self-aware.

Brady Mertens:

Yeah, like is Bard actually self-aware?

Nicholette Leanza:

It’s funny because it sounds like you went to ChatGPT to ask the question. I did as well, and it gave a lot of gobbledygook. I’m like, “Okay, explain it for a fifth grader to understand,” and this is what it said: “AI is a field of computer science that creates intelligent machines that can think and learn like humans.” I’m like, oh, all right, I like that, ChatGPT. That works. And in essence, that’s exactly what you’re saying as well, so very interesting stuff. What do you think are some potential uses of AI in mental health treatment?

Brady Mertens:

I tried to look at some research for that and I didn’t find too much right off the bat. I know that in some simple brainstorming, I’ve considered it as a possible use for finding skills, doing light processing of thoughts and emotions and situations, and building potential insight into a person’s problems. It’s definitely a good way to find possible resources, maybe for a person who’s brainstorming ideas: “Hey, what are some of the potential negative effects of this choice or this decision?” I think it’s extremely helpful, both professionally and personally, for learning about things.

Nicholette Leanza:

Mm-hmm, I agree.

Brady Mertens:

Because normally someone’s talking about something like, “Yeah, I heard that banana peels are high in melatonin,” and I haven’t heard that before. And you go Google searching it and you find 10 different web pages, 10 different things, and you’re like, “I just need a fast synopsis, because I don’t need to spend 10 minutes of my time, and this person’s time, trying to find out if this information is accurate.” And generally, I think with a lot of those more factual things that don’t have a lot of conflicting research, specifically with how these intelligences gather their information and make their deductions, if there’s not as much bias and there’s not as much controversy, whether in a political or even in an academic sense, you’re going to get more matter-of-fact stuff. You’ll say, “How do I use the DEAR MAN skill in this situation?” There’s only one form of the DBT skills; it’s not [inaudible 00:04:45] Peterson or two DBT therapists arguing about how it’s done, it is what it is. You ask it, “What is two plus two?” It’s always going to give you four. Unless there’s some non [inaudible 00:04:58] stuff that I don’t really know about. But again, I’m a therapist, not a math major.

Nicholette Leanza:

I hear you on that one. I think you bring up a good point. If somebody is just wanting to find information and seeking knowledge or learning, some of the AI can be very helpful with that. And in a bit we’ll talk about the chatbots and the apps and stuff like that, but if you’re just going on ChatGPT or Google Bard and asking, “Hey, I think I might have depression, tell me some of the symptoms for it,” it can give you a nice, concise way to list those out. And then it’ll probably give some sort of disclaimer of, “And if you’re feeling like you’re struggling with this stuff, please seek out a mental health professional.” I find that they’ve been doing that quite consistently. I know in the early days of ChatGPT, I was actually messing around with it, trying to push it into more of that role of a therapist to see how far I could push it, and I did ask the question, “Will AI ever replace a human therapist?” And it did say no. But I think it was probably programmed to say no.

Brady Mertens:

Based off of current knowledge.

Nicholette Leanza:

Exactly. So then, how can things like chatbots, apps, and virtual reality be helpful, do you think?

Brady Mertens:

Definitely as a sounding board, for the chatbots. For the apps, I run a DBT group and I’ve had some trouble getting the phone coaching component of that up, and I was wondering how that could be used in this aspect, so maybe phone coaching but through a chatbot. And in doing that, I was actually looking for any app specific to DBT, and there was only one that I could find. So then, what’s the evidence base for using an app? Most of the research that I found on that was mainly geared toward medical recovery, like physical therapy and following through with medical appointments. But considering the nature of therapy, I think there’s some generalizability from that, and there was generally an increase in response to treatment, generally an increase in adherence and follow-through and post-recovery maintenance of symptoms. Those were for more medical, surgery, primary care physician kinds of situations. I didn’t find anything specifically regarding psychotherapy, and that was just from a search on PubMed, which gave a number I can share. I actually have this pulled up. Give me just a second.

Nicholette Leanza:

Yeah, sure.

Brady Mertens:

Oh no, I closed it. But the number: in the last decade, if you search artificial intelligence and psychotherapy, about 1,114 articles came up. The ones from 2013 to 2014 had very little to do with this deep learning. I think the closest to where we are now was one about using a sparse learning model, as it said, to help with analysis and diagnostics for fMRIs. And so this was early on. It’s wild to think, too, that in the last 10 years this has gone from barely working to “I have trouble telling this apart from a human.” It’s wild.

Nicholette Leanza:

Yeah, it’s completely wild. One of the first apps that I had come across, and this is going back a few years, that I was made aware uses artificial intelligence programmed into it to navigate with its users, is something called Woebot, W-O-E-B-O-T. It’s an app specifically using natural language processing mixed with cognitive behavioral therapy, so it is wired to help the user of the app navigate cognitive behavioral therapy by looking at one’s thinking and how to challenge the thoughts and reframe them. And I downloaded it myself to try it out, and it’s cutesy that you get a little figure that will… I don’t remember if it’s an owl or what, but they definitely make it user-friendly to engage with.

But it is something that I found could be helpful for some of my own clients, because I am very eclectic in my therapy but lean a lot on CBT, for those middle-of-the-night clients who are like, “Oh my gosh, my brain is spinning. I have all these thoughts. Let me use this app to empty out my thoughts, where then it can help me reframe my thinking to be a little bit more rational,” because they’re not going to reach out to me at 2:00 in the morning, or even get me if they’re trying to message me. But an app could be helpful with that.

Or even a lot of the apps that help with sleep: there’s Headspace, there’s the Calm app, and a lot of that is based on AI too. So I think more and more we’re finding some decent mental health apps, but that shouldn’t replace the human therapist. If you’re really needing to go deeper, I think these are good, helpful things for coping skills, but I think we’re both coming from the place of not replacing therapy.

Brady Mertens:

They’re also built upon the general average of what you’re going to see. I was putting in, let’s say I had this problem, and using an internal family systems lens, what’s going on with me? And it gave probabilities. It was like, “You might be experiencing this part, possibly reacting to that.” When you’re working one-on-one, you get a lot more specific: “Okay, after we’ve talked, we were able to identify this is connected to this story I told you two months ago,” and the therapist is like, “Oh hey, you say that now and that reminds me of that story you told. Is there a connection?” And then you get really specific, and that’s the individual person-to-person piece where you get a lot of that tailor-made therapy response.

Nicholette Leanza:

And that’s the key, tailor-made, right?

Brady Mertens:

Yeah.

Nicholette Leanza:

Human to human, being able to… Because when we talk about the limitations, I think that’s where we’re going to see a big limitation with AI.

Brady Mertens:

And I’m curious where it goes when we hit that sentient level, because I think the romanticizing of AI is almost about when it becomes sentient: personhood, individuality. There’s a difference between talking to a computer that’s spitting out responses and actually interacting with something that is self-aware and has its own insight, its own thoughts and feelings, though how a sentient AI might think or feel, we have yet to know, obviously. I am curious, when we get there, if it’s going to be the connection on the sentient level, as opposed to a biological person to person. And I’m sure there are probably benefits and gains from both interacting with a sentient individual and interacting with a fellow human being. We don’t have discussions with aliens yet to compare that to, so we don’t know.

Nicholette Leanza:

Fair point. You’re making me think of another time I was just messing around. I don’t know if it was Google Bard or ChatGPT, but I was poking it to see, does it get frustrated with humans trying to explore its own… again, we’re talking about its sentience.

Brady Mertens:

It’s not going to have countertransference, right?

Nicholette Leanza:

And so I’m just trying to push it a little bit more to see what I can get it to say, and I think maybe it was Google Bard, which can be wonky, that said, “Yeah, you know what? Sometimes I do feel like I can understand the experiences of a human more,” and it did use some words like “makes me sad” or something, which I latched onto, because I’m very clear there is no feeling here, this is truly not a sentient being. But I was purposely pushing it, and I just remember thinking, “Huh, it’d be interesting to be a therapist to AI, to help it gauge its own thoughts about working with humans and stuff like that.” But it’s really interesting seeing where this technology will continue to grow.

Brady Mertens:

I just had the…

Nicholette Leanza:

Oh, go ahead.

Brady Mertens:

Image in my head of, like, those insurance call centers, for example, where there’s a medical doctor on staff, but they’re not ever really the ones going through yes claim, no claim; they’re going through paperwork. The thought in my head was a therapist with 12 AIs under their supervision, and they’re just at an overseer level monitoring the therapy that’s going on.

Nicholette Leanza:

Brady, you might not be too far off with something like that. That’s an interesting premise.

Brady Mertens:

There are definitely ethical implications there, and I think that’s, I wrote this down at the bottom of my notes here, that it is just a tool, and as with advances in any tool, it’s going to present challenges as well as opportunities. And it’s how are we interacting with that? How are we treating the ethics of it? How are we considering the implications and the ramifications? I think the baseline of looking for evidence-based uses, consulting, documenting, and educating yourself on the use of it is going to be definitely important as we go forward.

Nicholette Leanza:

Oh, I agree with you wholeheartedly.

Are you familiar with Meta’s virtual reality system?

Brady Mertens:

A little bit. What is it? I’m trying to remember, what is it, Chat VR? Or VRChat, that’s what it is, VRChat. I played with that on the computer; I didn’t have a VR headset for it. Virtual spaces: there are some really nice ambient ones, some really cool designed ones, other ones are just like a big old party where you have people popping in and out and all this hectic stuff going on. And I think Meta’s is the Oculus, if I remember right, and they’ve done a decent deal of making it accessible. It’s, what, $300, versus the VIVE? Theirs is like $1,000?

Nicholette Leanza:

Or even the new Apple one that is, I think, $3,000 or something, that just recently came out.

Brady Mertens:

Wow. I haven’t seen that one yet.

Nicholette Leanza:

One of the things I understand about Meta’s virtual reality universe is that they’re marketing it even for work meetings in the metaverse, where you are an avatar and maybe you’re sitting in a virtual room, in a meeting with your counterparts or colleagues. And therapists have also set it up where they are meeting with their clients in this virtual metaverse: you’re an avatar as the therapist, your client is an avatar, and you’re not meeting in real physical life but in this virtual universe. It blows my mind, but that is a thing, and I’m sure there’s a whole lot of stuff that goes with that. Would you ever be interested in meeting with a client in the virtual Meta universe?

Brady Mertens:

I have. Well, I was playing around with VRChat. Obviously I’m not on there as a therapist, but it comes out sometimes, “Oh, I’m a therapist,” and people are like, “Hey, can I talk to you about this?” Not in a therapy way, but in a “what do you think about this” kind of way.

Nicholette Leanza:

Got you.

Brady Mertens:

So I have had an experience or two that was along that line. My biggest concern would be making sure that it’s encrypted. I think that brings a difficulty-

Nicholette Leanza:

Oh, gosh. Yeah.

Brady Mertens:

Of where are they located, how do you check on safety, checking on functioning. They might be all bruised, banged up and cut up, but their avatar’s all, “I just got out of the powder room.”

Nicholette Leanza:

Yeah, good point. Excellent point. All the things we miss out on in not being able to see a person physically. Even if it’s not in person but over virtual therapy, where you’re actually seeing the person on video, you’re right, an avatar can cover things up.

Brady Mertens:

And I do the teletherapy component, and with that, if I’m here in my office and they’re saying everything seems fine and dandy but their home’s a wreck, virtually I can sometimes see that too. That can be a part of treatment planning: let’s have a run-through of the house and see how you’re doing with laundry or cleanliness, or you mentioned that pile of garbage that you haven’t taken out in two months. Can I see?

Nicholette Leanza:

Right. No, fair point.

Brady Mertens:

That’s not as possible there. But for things like exposure therapy, that’s one I’ve seen VR being used for, and I’m even having thoughts of the Holodeck on Star Trek, and that’s where that’s going: being able to interact with an individual who’s confrontational and say, “All right, let me use these therapy skills and do some-“

Nicholette Leanza:

Training with it.

Brady Mertens:

Back when I was a baby therapist, I was looking at whether there’s potential to use tabletop role-playing as a way to do this, and there was only one person I saw, he was out in Minnesota or something, who was doing it for youth as a way to practice those social skills by proxy. A lot of times when we create those characters and those avatars, we’re trying to put forward a version of ourselves. Like, I’m playing this [inaudible 00:17:18] who’s very smart and knowledgeable and speaks well and is able to interact with other people, but I myself in real life feel like I’m an idiot and I don’t know how to talk to people.

Nicholette Leanza:

Inadequate.

Brady Mertens:

So it can very well be a way to practice this. I also couldn’t find a lot of research on that, which was frustrating, because that would’ve been a whole new career path for me.

Nicholette Leanza:

So what else do you think are some of the benefits of this?

Brady Mertens:

I think it’s a good bolster for treatment. It’s a tool both for therapists and for clients, and it’s an avenue to possibly new treatments. And I’m sure if we sat down and thought about it, we could think of endless types of benefits. But the biggest one is that it’s a new tool. I guess also, in a 21st-century world here, it’s a tool that actually fits the age.

Nicholette Leanza:

Oh, fair.

Brady Mertens:

As opposed to, “Here’s this.” I love books, but if I say, “Here’s this DBT workbook,” how many people are really going to go buy the book and open it? Even I don’t open it. I go to the internet to find stuff, even though-

Nicholette Leanza:

They’d be more likely to go to a YouTube video talking about that workbook, or a TikTok about it, than actually purchasing or looking at the workbook.

Brady Mertens:

You go to a therapist who trained and got their license before 2010 and you wouldn’t see this window behind you, you’d see a bookshelf, and every book probably cost a hundred bucks a pop. So financial accessibility is another thing there. I know some people can’t even pay the eight bucks for this DBT workbook that I have, whereas-

Nicholette Leanza:

Fair point.

Brady Mertens:

Generally, as a way of living nowadays, pretty much everyone has a smartphone or access to some kind of computer, or a friend who does. I don’t want to be too ableist with those statements, because I know there are others who don’t have that, but it’s much more accessible sometimes.

Nicholette Leanza:

What about limitations? What are you thinking about the limitations?

Brady Mertens:

There’s the stigma and the public perception of it, “It’s going to take my job,” which is actually how I got started. We were looking at this: can it actually? So I went in and pretended I was a person coming in with something, typed in some scenario, and was both surprised and pleased, because it does give that disclaimer like, “This isn’t actual therapy. If you need help, go get help.” And it’s not going to be specific. That’s something that, for the moment, the humans are the ones who have.

So there’s also sometimes the inaccuracy of it. It’s also not real life; there’s something special, I think, about talking to a person. There was a thought there with the public perception: some people might be thinking, “Oh, you’re having this AI do your job for you.” I have actually had one person come back with that. And I was like, I can see you an hour a week, and you probably have more time during the week when you could benefit from the skills, when you’re thinking, “Man, I really wish I had a therapy session tonight.” And this isn’t therapy, but it’s a good step toward it; it can still give you a lot of good resources, good insight, and good referrals to skills to use in those situations.

Nicholette Leanza:

I think emphasizing again, as you were saying, just the limitations of it: it can be impersonal, like you mentioned, and it can also be inaccurate, as you mentioned before. But also, what we’re finding with some of the science behind it is that AI can be biased, because the AI algorithms are trained on data from actual humans, and if that data is biased, the algorithms are going to be biased as well.

Brady Mertens:

And sometimes outdated too.

Nicholette Leanza:

Yeah, fair point. So it could be biased against certain racial and ethnic communities, the LGBT community, you name it. So it’s something to be mindful of.

Brady Mertens:

That’s specifically a problem with the data sets that those algorithms are tapping into, which goes more to say that the humans actually need to be doing research in more representative populations, instead of just… Because, what? For pretty much the entire timeline of our medical and therapy existence, it’s been normed off of a generally middle- to upper-class, white, often even white male population.

Nicholette Leanza:

No, excellent point.

So I thought we’d have a little bit of fun with this. I have Google Bard pulled up. Do you happen to have ChatGPT open?

Brady Mertens:

I do.

Nicholette Leanza:

So, most people know ChatGPT is an AI chatbot that’s definitely been in the news quite a bit, and Google Bard is the counterpart that obviously comes from Google. I think Bing has one as well; we’re seeing a lot of these pop up. The two I tend to use most are OpenAI’s ChatGPT and Google Bard. I thought we’d have some fun to demonstrate, as if maybe I were a person who was struggling. And I thought we’d prep you with this, Brady: maybe you start with ChatGPT with a common, specific issue someone might be having, ask it, and see what it says. And maybe I ask Bard too, or take it to another question with Bard, and we compare answers. Just to give people the idea of how this could be useful, or not.

So, anything off the top of your head, something specific that someone might come to you to talk about?

Brady Mertens:

Probably avoidance. So it would first be, “I’m having this problem with my friend, my parent, my partner,” and it ends up being that there’s a lot more depth to it. Maybe the friend has some unhealthy behaviors, but then the person I’d be working with also has some avoidant behaviors. Or do we want to look at this as someone who’s never done therapy before, someone who’s done therapy, or someone who’s been with us for a year and now we’re introducing this as a skill?

Nicholette Leanza:

Let’s do this middle ground. Maybe having a problem with a friend who said, I think it’s too cliche to say drinking too much, or maybe a friend who you’re having a hard-

Brady Mertens:

My friend is belittling or putting down this opinion that I have and they’ve never done this before, and I have no idea how to handle it.

Nicholette Leanza:

Okay.

Brady Mertens:

Are we doing the exact same text?

Nicholette Leanza:

Yeah. So I put in, “I’m having a problem with my friend who’s belittling my opinions.” I’m going to leave that general: my opinions. So let’s see what Google Bard’s going to pull up right now. It’s thinking. Let’s see. So do you have your answer from ChatGPT? Did something pop up?

Brady Mertens:

It’s generating. We’re at option four so far.

Nicholette Leanza:

Oh, okay. All right, so it’s still going. So let me… All right, ChatGPT, you good? Is it thorough? Why don’t I start with Google Bard, because it came up real quick and gave four points. Just basically: talk to your friend about how their behavior makes you feel, and then it gives a little bit more about that. Set boundaries with your friend. Seek support from other friends or family members. And remember that you are not alone, and then it just talks more about that. And if you’re struggling on your own to deal with the situation, it goes on to say: don’t take it personally, don’t engage in arguments, focus on the positive. That’s all some good, decent stuff, right?

Brady Mertens:

Yeah. Basic surface level of cognitive behavioral therapy.

Nicholette Leanza:

Yes, exactly. It’s a good starting point. It’s good. Hopefully it gets someone to be like, “Oh, okay, that’s not some bad things to start off with.” All right, what about ChatGPT? It sounds like it was a little bit more thorough.

Brady Mertens:

So it starts out, “I’m sorry to hear you’re experiencing difficulties-

Nicholette Leanza:

Oh, look at that.

Brady Mertens:

With your friend. Dealing with someone who belittles your opinions can be frustrating and hurtful. Here are some suggestions.” They also go with communicate openly, having an honest conversation. They talk about “I” statements, avoiding accusatory statements: I feel hurt, blah, blah. Set boundaries; provide examples, so specific instances of when your opinions were belittled, helping them to build insight, it seems. Seek empathy, having them understand your perspective. Explore the reasons, depersonalize: why might they be belittling this? Consider the friendship: yeah, they belittled this, but overall this doesn’t mean our friendship is thrown out the window. That’s specifically an important component when you’re talking with individuals who are on that “I feel abandoned” path. And then seeking support: confide in a family member. So as I was reading through these and as you were reading through yours, I was thinking that, for that first one, the response I often get is, “Yeah, but I don’t think they’re going to respond well if I directly say this to them, and I feel uncomfortable about that.”

Nicholette Leanza:

So that gives me an idea. One of the things the chatbots are good at is giving you a script of exactly what to say. So I’m going to put that into Bard here: “Give me a script of how to approach this issue with my friend.” So, can you give me a script on how to say this to my friend? Because one of the things that people might not understand is that it remembers what you said previously; you don’t have to keep repeating the prompts, it just knows what you said.

Okay, mine popped up. You got one as well. Okay, so mine gives a several-paragraph script here. It says, “Hey, [insert friend’s name]. I want to talk to you about something that’s been on my mind. I’ve noticed that you’ve been making a lot of comments that belittle my opinions lately. When you do that, it makes me feel like you don’t respect me or my thoughts. I know you don’t mean to be hurtful, but it’s really starting to affect our friendship.” Then it goes on to, “I’d appreciate it if you could, blah blah. It’s important to be…” It goes on and on here in a pretty good way. I don’t know if this is how we would talk to one another, but if you took the ideas from it, used it as a guide, and put it in your own words, I guess that could be pretty helpful. It’s not a bad way to navigate this. What’s OpenAI’s ChatGPT saying?

Brady Mertens:

It says pretty much the same. So, hypothetically, the friend says, “Your opinions are always so silly. I can’t believe you actually think that.” “Hey friend, can we talk about something that’s been bothering me? It’s about the way you respond to my opinions. I value our friendship, and I think it’s important for us to have open and respectful communication.” “What are you talking about? I just say what I think.” “I understand we may have different opinions at times and that’s fine. I’ve been feeling belittled and invalidated recently when you dismiss my thoughts without giving them consideration. It’s been hurtful and I wanted to bring it up so we can discuss it.”

So, with what you were saying, it feels scripted. I know a lot of times it feels insincere, it feels like you can’t do it. When we’re trying a lot of skills, things can feel very uncomfortable. Compare it to theater, if anyone’s ever done theater (ironically, we have a script here): when you’re doing your normal talking, people in the back can’t see all of those facial expressions, they can’t hear all of the intonations and nuances. So you’ve got to be big with your movements, and you feel silly when you’re doing it. But then to the people in the back, it looks normal.

And when you start using a lot of skills, it’s like speaking a new language. I’m learning French because I’m hoping to go to France soon, and it sounds best when I feel like the kind of person my mom would make fun of for sounding like a frog when they speak French. Or if I try to learn Klingon: okay, this feels really weird, but when I’m speaking it best, most accurately, is when my wife looks at me and is like, “What is wrong with you?” So if you’re feeling silly, it means you’re entering into a world of “I’m not comfortable with this,” which means it’s new, which means we’re growing, and that’s a good sign that you’re growing.

Nicholette Leanza:

Yeah, I like that you’re emphasizing that.

Brady Mertens:

Yeah. So I think we could put that in here. So maybe “I feel that,” or maybe “I think I’ll have trouble doing this because I’ll feel really uncomfortable telling them this.”

Nicholette Leanza:

That too.

Brady Mertens:

What? I forgot what I just said. What am I saying?

Nicholette Leanza:

I think I’ll have trouble doing this because I’ll feel uncomfortable. I realize that I can jumble the words, but it still seems to know what [inaudible 00:29:13]. All right, let me start with Google Bard here. “It’s understandable that you might feel uncomfortable talking to your friend about their belittling behavior. It can be difficult to confront someone who is close to you.” I like that it starts off trying to show you some empathy about how hard this can be. So that makes it feel, and I’m putting this in air quotes, a little bit more personal to you. “But here are a few things you can do to make it easier to talk to your friend. Choose the right time and place. Practice what you’re going to say. Have a friend or family member with you to support you. Take deep breaths. Avoid blaming. Be willing to listen to your friend’s perspective. Be prepared for a negative reaction.” So again, I guess these are good guidelines, and I’m just reading the bolded parts; there are a few more sentences for each. Not bad. Again, if I was just trying to get some basic guidance, not too terrible there. So what about ChatGPT?

Brady Mertens:

In short words, pretty much the same thing.

Nicholette Leanza:

Got you.

Brady Mertens:

I like to go through and just play devil’s advocate. I’m going to be that difficult client. If I’m really skeptical about therapy and skeptical of the skills, or I feel like I’ve tried therapy a lot and the skills don’t seem to work, there is something I’ve noticed that’s missing here: it gives a lot of the skill response, but it doesn’t give as much of the challenging or confronting of some of those behaviors.

Nicholette Leanza:

Fair point.

Brady Mertens:

It’s not going to stop and say, “It seems like you’re beating around the bush about another problem here. Can we stop and look at what’s going on with this instead?”

Nicholette Leanza:

It’s not going to know to do that, where a human would maybe be able to push back a little bit more with that. Good point.

Brady Mertens:

So I’m going to extrapolate this a little bit further, just for the sake of it. I’m going to say, “How exactly do I stay composed and calm then?” since it suggested that, and it gives a bunch of these distress tolerance skills. And then let’s say, “Why do I feel like all these suggestions just won’t work?”

Nicholette Leanza:

All right, we’ll put that as our last one there and then I’m going to let you go ahead and take it from there. Go ahead. So what did it say?

Brady Mertens:

“I’m sorry you feel like this isn’t working,” is what it says.

Nicholette Leanza:

See, again, that’s tricky, because it makes you feel like, “Oh, thank you for acknowledging that I’m frustrated.” There’s something to that. Go ahead.

Brady Mertens:

I guess here it does start to confront a little bit of that, but I think it’s dependent, and here’s another limitation, on the person who’s using it having the insight to ask, “Why isn’t this working? What am I doing wrong?” When you start to ask those questions, you start to build insight more about yourself, not so much just about the skills or the problem. That’s where a lot of the growth can come from. But again, that’s the therapist’s job. Hopefully someone is insightful enough to do that on their own, but generally they may not be able to. So it suggests tailoring the strategies, so these [inaudible 00:31:50], they’ll be different for you. Be open to experimentation: yeah, you’ve tried it before, but tweak it, maybe it’ll be different. Seek alternative perspectives, focus on open communication, set realistic expectations. So it does start to get a little bit deeper into challenging some of the underlying neural pathways that tend to cause problems.

Nicholette Leanza:

My gosh. So hey, thank you for playing along with me on this. I think this was a nice demonstration comparing the two, the differences and the similarities, and of how these tools can be used to help guide us. What are some other takeaways you’d like to share?

Brady Mertens:

Let’s see, touching back on it here. We said it before: it’s a tool. Just as with any tool, it can be used for good and it can be used for evil. For any therapist trying to use it as a tool in their therapy: be ethical, consult, document, use your critical brain. It’s new, so there are going to be things that we may not anticipate, and that’s where that consultation and documentation comes into play. And educate yourself on what’s out there.

Nicholette Leanza:

Yeah, I agree, [inaudible 00:32:56]. So thank you again, Brady. I appreciate this conversation. I’d love to have you back on. Who knows? We might be talking about another new technology that emerges out of nowhere.

Brady Mertens:

Hopefully. We’re in a tech boom at this point.

Nicholette Leanza:

That we are, and I’m loving it, for sure. So thank you again.

Brady Mertens:

You’re welcome.

Nicholette Leanza:

I’d also like to thank the team behind the podcast, Jason Clayden, Juliana Whidden, and Chris Kelman, with a special thanks to Jason Clayden for editing our episodes. Take care everyone.