Casey Berglund:
There's a version of the artificial intelligence story that's all about hustle and hacks. This isn't that. Today we ask, what if AI isn't here to make you do more, but to help you remember who you are? My guest is Stephany Oliveros, a researcher at the intersection of AI and psychology, and she's a co-founder of SheAI, an education and research initiative supported by the United Nations, my friends, bringing women and diverse voices into the room where AI is being built. In this conversation, we explore how to use AI for introspection without outsourcing your inner authority. Why real creativity means going beyond the first output. The mental health upside and the risks of AI companionship. The very real ways bias shows up in models, from medicine to disaster response, and why women must shape the data. Practical ways you can get involved, stay ethical, and mostly stay you. It's grounded, hopeful, and surprisingly human. As Stephany says, knowledge is power, and that starts with knowing yourself.
What does it really take to create calm, purposeful success that feels good in your body? I'm Casey Berglund, TEDx speaker, author, coach, and your host of the Purpose Map podcast, brought to you by Worthy and Well.
Welcome back to the show. Before we dive into this fascinating conversation with Stephany, I have a favor to ask. Could you take a moment right now to follow the Purpose Map wherever you're listening right now? And also, if you want to take a little bit more time, rate and review the show. When you follow, rate, and review the Purpose Map, you help the information from these podcast episodes be heard by more people, and that creates a positive ripple. You also really help me bring you amazing guests and truly fund what we're doing here. So it's so, so appreciated, and I'm so grateful in advance for your help. So pause right now, do those things, and then come back here. And there's an amazing episode ahead that I hope you enjoy.
I've been so excited to speak with you in particular, because my experience of you is that you have this wide wealth of wisdom around AI, but not just AI—psychology, and then this embodied presence. You are someone I met really early on in being a member of Juno, and I felt immediately from you this warm-hearted presence and this compassionate, kind energy. And I feel like sometimes when I think about people who have a lot of expertise in the tech world or are really learning about AI, I maybe think about, A, a man, and B, someone who has a different sort of embodiment. And I feel like you have been someone, whether you have known it or not, who has really opened me into exploring various AI tools and thinking about how I can use them in an authentic way that keeps me in the center of valuing humanity. And I just think that you have so much wisdom on all of these intersections. I would love to ask you what it was that got you interested in AI and specifically AI and psychology, studying AI and psychology together in the first place.
Stephany Oliveros:
Well, Casey, first, it's an honor that you can say such beautiful things about me. When you used the word wisdom, my first thought is like, "No, really?" So far away from that. Yeah, the more I learn, the more I realize I know nothing. And that is something that concerns me a bit, but I think it's traditional in academia to realize that you are just scratching the tip of the iceberg.
Casey Berglund:
But to me, that actually is wisdom. You know, that is wisdom to say, the more I learn, the more I realize I don't know. Yeah. And to be able to own that and not pretend like you have all the answers, and especially when you're exploring fast-growing areas of research, I can imagine that it would be illuminated how much you don't know on a regular basis.
Stephany Oliveros:
Exactly. Exactly. It's like every conversation I have, every paper I read, every news article, it's like, "What?" Something new to figure out. But yeah, eventually you start processing. The thing with AI is, like, as soon as you start processing a concept or something, then it moves on to something new. So it is like racing against yourself sometimes. However, I thought it was an interesting thing to discover a few years ago, precisely seven years ago. I watched a documentary that I recommend. It's called AlphaGo. It's on YouTube.
Casey Berglund:
Alpha Girl?
Stephany Oliveros:
Alpha Go. Yeah. Okay. It's on YouTube. It was developed by a company called DeepMind. At the time, it was just DeepMind; today it's the AI hub for Google, which acquired DeepMind. And it was fascinating because they were exploring the role of AI and the potential that it might have for real use cases, because until pretty recently, it was a bit—not useless, but kind of like a forgotten field.
Casey Berglund:
Sort of fringe. Yes. Sort of like left out of the equation, like not that important. Okay.
Stephany Oliveros:
And then they started developing this AlphaGo model. There is a game called Go that is way more complex than chess, and they developed this model that could beat the world champion several times in a row, learning by doing—self-learning, let's say. And they were talking about how neuroscience played a role in the way they were developing this, and about creativity. Can machines imagine? Can machines think? And I started asking, well, can machines be creative? Can machines be conscious, even? So I said, okay, there is definitely a link between psychology, neuroscience, and AI. And that is when I started to get interested. Wow.
Casey Berglund:
And so you've said something earlier about the minute you read something new about AI or start to explore it, there's something else. It's like you're chasing—you're always behind how fast it's evolving. Yeah. What have you learned recently that has kind of blown your mind?
Stephany Oliveros:
Superintelligence.
Casey Berglund:
Say more, please.
Stephany Oliveros:
So what we know right now as AI is, you know, like ChatGPT, for instance, or other models of language—large language models, they're called—they are pretty dumb compared to what's to come or what these companies are trying to achieve.
Casey Berglund:
They're pretty dumb. Yeah. Oh gosh.
Stephany Oliveros:
So for instance, GPT-5 was just released, and it's pretty good at many tasks, and it's a step forward towards something called artificial general intelligence. What these companies, or scientists, want to achieve with this is AI that is comparable to the smartest human on earth. That will be AGI, or artificial general intelligence. But what blew my mind was something I believed we'd mostly find in a sci-fi movie, which is superintelligence: AI that is not only as smart as the cleverest of all humans, but way beyond any human intelligence collectively combined. That could lead us to answers to eternal questions that have haunted humanity, resolution of geopolitical and historical conflicts, discovering the secrets of, I don't know, black holes, space, the origin of the universe, cures for illnesses, maybe true introspection. I don't know, the beginning of a new human brain cycle. Okay. So yeah, it's so sci-fi that it sounds a bit ridiculous to think of, but they are working towards that. Okay. And it's very likely that it will happen, and we will all live to see it.
Casey Berglund:
Wow. I'm a bit speechless. And in some ways, I think these types of conversations bring up a lot of fear for people, you know? And I'm just thinking about my listener—someone who either has been resisting the AI movement, like not engaging in using AI tools, even like ChatGPT. Yeah. And there's definitely folks where it's part of their day-to-day life. And I think probably most people are aware of ChatGPT. They might be exploring some other tools. Like I learned from you about Perplexity and Consensus, which have both become part of my—I wouldn't say daily use necessarily, but almost. Yeah. And obviously there's like so much more. And in my own personal experimentation with various AI tools, these like ethical questions have come up in me, and these questions about, like, you mentioned creativity earlier. I'm like, is this making me more creative or less creative? And in what contexts, et cetera. And so I think there's someone listening who's like just new in their AI journey, and then others who are further along. And I can imagine hearing that we will see a machine that is smarter than the smartest people in the world, that can solve big life questions—that could bring up a lot of fear. Yeah. And so how do you relate with this information? How do you process it? Like, do you feel scared about it? Yeah. Are you excited about it? What's your experience of learning about what's unfolding?
Stephany Oliveros:
Depends on the day. Okay. So today, tell me.
I see the potential that it has, and I am very excited, particularly in the psychology field. For example, psychiatry has been throwing darts in the dark until this point. We don't really know—there are biomarkers in certain cases, but often there are no markers the way there are for diabetes, for example, a medical condition where you can take a test and get a medication to control it. For most mental health difficulties, it's very hard for psychiatrists to understand what's going on, and for psychologists to support people. So having the opportunity to develop technology that can really help humanity thrive to the next level in life conditions, in fairness, in equality—that really excites me. This is why I feel AI is overhyped in the wrong direction. It's not about productivity, or what you can do for a company to make more profit with fewer resources and fewer employees. It's about what you can really bring to society, and maybe even finally making the playing field a little more equal and less biased, with the right models and the right data. The question is, what is that? What does it look like? How do we control it? But that's another topic. So yes, that part really excites me, and I think most AI researchers will agree with me on that. But I think we also cannot hide the risks of all of this. There are enormous risks of achieving entirely the opposite. Also, if you have something that becomes more intelligent than humans, what do we call that? Is that technology, or is that another type of living thing? Like consciousness.
Casey Berglund:
Exactly.
Stephany Oliveros:
So how do we deal with something that's alive? Can we control that? So there are big ethical questions in the development of this superintelligence. But for what we have right now, I think it's okay to not necessarily be part of it. There are certain communities—certain indigenous communities, for instance—that are more reluctant to use it. And it's okay to preserve your culture and your way of living as we have been living as humans for the last few centuries. But if you're in modern society and you can get the best out of this—and I think it doesn't have to be you swimming against the current, or being a rebel, or being a sheep molded by the system and following everybody—you can play with it. Yes. And yeah, so I think we can discuss certain tips or ways of incorporating this into our lives in a way that is not invasive, but rather helps us.
Casey Berglund:
Yeah. One of the things that I discovered, literally my very first day playing around with ChatGPT—I had received some prompts I could use in ChatGPT from a course I was taking that was connected to my business. And I'd never used ChatGPT before. So I was like, okay, I'll copy and paste their prompts and see what this technology is. And as a few hours went by, I was almost in tears, and my partner came into the room, and I was like, I don't know how to feel about this, but I feel so seen. And I'm speaking to this specifically because of what you shared about psychiatry and the use of AI in potentially understanding, diagnosing, and supporting mental health conditions. I had this moment—it was kind of a joke—where I was thinking of that movie *Her*, where someone falls in love with an AI. I was like, I feel so seen and heard. And this is something all humans need, right? It felt really good, and I could see how I could use this tool to support my mental health. I'm also someone who does really well soundboarding to hear my own truth in decision-making. And I realized, wow, there's some good soundboarding I can do here that helps me move faster in the direction I want to go. So it felt really exciting, because sometimes I feel disappointed when I don't feel seen by humans, and here I can go to this machine, and it will instantly acknowledge and validate my feelings and reflect supportive words back to me. Yeah. And anyway, I feel like that was my very first moment of, is this okay? You know, because I also was like, oh, this is what I want to hear from my partner, for example, and the humans in my life. And I do in many moments. Yes. But then I was like, wait, there was a way that ChatGPT just reflected that back to me that felt really good. Yeah. You know?
And so on one hand—and I, I, you would obviously know way more about this than I do, but I saw this infographic that was about people using ChatGPT in particular for therapy, like as a use case. Yeah. And I could understand it, you know, and then it's like, what happens with the humans? And I feel like this is maybe a really like dumbed-down example, given the context and capacity of what you know, when you talk about artificial intelligence. Like, obviously it's not just ChatGPT; there's so much more. Oh yeah. Yeah. But yeah, like, how do we navigate that? How do we navigate the way in which machines can and are replacing humans?
Stephany Oliveros:
Wow. Well, first, talking about usage: Harvard Business Review, in 2025, reported that the number one use of ChatGPT right now is therapy and companionship. And that speaks a lot about the number one epidemic that we have, which is loneliness. So there is definitely a human need to connect with other people. But there is also an importance of connecting with other people in the sense that other people have their troubles as well, and they maybe don't have the space all the time to listen and to acknowledge your feelings. And that is okay. Because we are not the center of the story. Yeah. We are part of a community. We're part of something bigger than ourselves. And the problem with AI is that it's too much of a "yes, sir."
Casey Berglund:
It's too much what? Of a "yes, sir." Yes. It is totally biased. Exactly. It's like it's not challenging me in the ways that I need to be challenged. It could bring me into a bubble of my own thought. Of course.
Stephany Oliveros:
So it's like, what do you think about this conflict that I had? Oh, you are totally right. That person is wrong. And then, oh, well, actually I changed my mind about that. Oh, yes, you're totally right. That person is right. And you're right. It will just go along however you want it to go. So in a recent interview, Sam Altman, the CEO of OpenAI, explained that they were trying to make ChatGPT more critical to avoid this and provide a more balanced approach, but then a lot of users were begging for the old behavior to come back because they were really enjoying that part. Saying things like, this has changed my life. I have no one, I'm completely alone, and now I feel like I'm not—like I'm heard. So how do we balance that? And to what point is it healthy? There is some research suggesting that using technology that is acknowledging you and hearing you all the time can contribute to psychosis, because it creates a narrative inside yourself that is not necessarily attuned with reality. So I think you can use this—it's an incredibly helpful tool that can really transform lives when used in the correct way. Instead of relying on it as a God-like being that knows it all, and looking to it for the answers that you have within yourself, maybe use it as a tool for introspection. You said: if you go to therapy, how would you like to speak with a therapist? Probably you'll just talk about the situation, and the therapist will reflect back with questions so you can find your own answers. Yes. So it doesn't tell you exactly what you want to hear. It challenges you into arriving at your own conclusions. And I think that is an interesting way of using AI.
Casey Berglund:
Absolutely. And you know, so much of my work is built around helping people reconnect with their body wisdom and use embodiment as a gateway into inner knowing, intuition, intuitive decision-making. And I can see, and I've felt this within myself too, where I'm like, wow, I just outsourced my inner power to this tool. And, and I can feel—it's like my body can discern when that's okay and when that's not, but I also like really have done a lot of work to connect in with my body, and I think because of trauma and the busy world we live in, so many of us are spending our time like disembodied or just really up in our heads, and I can see how the loop of information so fast could actually further this disconnection. Like it could have the opposite effect, you know, like if we're, if we're isolated and then we're getting a hit. And I, I guess I'm using that word intentionally too, because I actually had a dream that was showing me how to be mindful of the use of ChatGPT for, like, dopamine—as an intellectually driven person, like so much information so fast. It feels so good. It's like a dopamine hit, and it could create an addictive loop. It was like my dream was telling me this. And then I got up the next day, and I was like, wow, I think I'm just going to, like, go into nature and tune into myself and take a little break. Like it was almost like a warning. Yeah. And so I just think that that's so interesting, and I appreciate what you're sharing about using it correctly or ethically or with thoughtfulness or mindfulness—that that is an important piece of the conversation.
Stephany Oliveros:
Yeah. So I think instead of just running to it to ask for the answer, we can ask it, like, what are some points of view that I'm missing? Mm. Like what, what am I missing? What do you need to know in order to help me arrive at my own conclusions? Well, I want you to ask me 20 questions about this so that I can reflect on myself.
Casey Berglund:
Yeah. But people don't want to do that work too. Like I even think about folks in the spiritual entrepreneurship world where sometimes you get caught in a loop of like, just like avoiding yourself, whether you're, I don't know, relying too much on your Oracle cards or even the scientific literature, right? It can be anything external that keeps you away from yourself. So I think it takes someone who is really thoughtful and open to doing that introspective work to use it in that way. Yeah. Like, unfortunately, I think a lot of people aren't—they are really just looking for answers.
Stephany Oliveros:
Yeah. Yeah. But it also limits creativity. Okay. So tell me about this, because that's part of your research. Exactly. Exactly. So there is a lovely definition of creativity that I heard recently in another video I watched, with a professor from Stanford University. He was visiting this school, and the class was asked, what is creativity to them? And this little girl wrote a Post-it and put it on the wall saying that creativity is not doing just the first thing. Ooh. I'm paraphrasing—the wording was a little different—but it's about going beyond the first thing. And I think that is what we need to account for when we are using AI. It's not the first answer. The first answer you're going to get, the first output, is going to be something that a million other people are getting as well. I'm talking about ChatGPT, for example, just because it's the most popular one—and just for you to be aware, there are many others that are even safer in terms of data protection and all that—but it's a popular one. They have about 700 million users a week. So the answer that you're going to get—I can guarantee that many other people will get pretty similar stuff. So where is your individuality then? Where is your personal, like you said, your embodiment? You have a unique experience that you can bring to the world. It's beyond the first thing that you see. Use that as a draft, and then redraft again and again, until you create something that is unique. And that's creativity.
Casey Berglund:
Yeah. I've been thinking about what AI cannot replace, basically. And this is from my limited perspective—you might tell me, well, Casey, actually—and I'll be like, okay. But I've been thinking about it, and so far, from what I know, it cannot replace me reaching across the table and holding your hand and feeling that your fingertips are a little bit cool, you know, and that you have this big smile—whatever is happening face-to-face with another human body. Co-regulation of the nervous system. Certainly, being acknowledged and validated by a machine does something—you're interacting with it. But there is something about human-to-human interaction. Like, I cannot be replaced. The fact that I grew up on a farm outside a 200-person town in rural Canada, and we collected horse urine for a living because it was used to make post-menopausal hormone drugs. Yes, exactly. And that, you know, I moved to a city and studied nutrition and then went through my first heartbreak—some of those things are universal, but the combination of all these different experiences, they live in my body. The wisdom from those experiences lives in my body. From generations. And from generations, exactly—my grandmothers and grandfathers live in my bones, you know. And so when I think about my work, which is at the intersection of embodiment work, purpose work—what are you uniquely designed to be doing or embodying in the world—and entrepreneurship, I'm kind of like, okay. AI can replace a lot of things, but it cannot tell me about my deepest wisdom based on experience, you know? So some people could be scared about what it could replace. But I've been like, okay, as a coach who supports people through transitions into doing purposeful work based on their embodied wisdom, it can replace so many things of what I do.
Why not work with it to figure out what it can replace? Because what I know it can't replace is what it feels like for that person to sit next to me or to, like, be in conversation or to witness someone else's tears, you know? Yeah. And I don't know where I'm going with this.
Stephany Oliveros:
Oh no, no, this is brilliant. It's brilliant because I think AI brings us as humanity a very interesting opportunity to connect really with our essence and our purpose of life that goes beyond our self-identity, like where you come from or what you do for a living. Yes. Because let's say I'm not that fatalistic—like I am pretty optimistic about the future. Yeah. But let's say in the worst-case scenario, everyone's jobs become useless. Yes. So then who are you without your job?
Casey Berglund:
Exactly.
Stephany Oliveros:
So you are beyond the label. And once you realize that the definition of usefulness changes because of these things that are out of your control, maybe that is the opportunity to really empower yourself into observing who you are. So yeah, like maybe it's not about, like, what am I going to do with my life or what's going on? Like all of that is external. The person that is observing what's going on is who you are.
Casey Berglund:
I have tears in my eyes as you say this because I can totally see how AI can support that introspective journey of discovering, like, who you are as the observer, as a soul, as a being. And I also feel emotions because I'm like, or you could just, like, move to another country and jump into a new relationship. And like, I don't know, I feel like I've had this personal experience of all of those identities that I hung my hat on have in some ways been stripped away, not by AI, but because of choices that I've made in a way that what I'm integrating right now inside of myself on such a deep level is, like, who am I without a partner or a business or money or a home or anything external? Because in many ways, I've been tested through those learnings. And at the end of the day, I think we're all going to get to that point of having to ask those questions, whether we're choosing to or not.
Stephany Oliveros:
Yeah, I believe one of the reasons there is so much fear is that we are feeling that change. Yeah. And it's frightening to realize, oh, now I can have the space to really look inside. And this society pushes us into the pressure of performing, and into using AI for that—the dialogue of productivity, pushing more, doing more, getting the answers fast, moving on to the next thing. Whereas maybe no, maybe it's just a matter of being. Yeah. And letting go.
Casey Berglund:
Yeah, exactly. So with the learning that you've done, deep learning, what have you acknowledged about yourself or how have you transformed through your research in, like, AI and psychology? Because I can imagine that you have been taken on a deep transformational journey yourself.
Stephany Oliveros:
Yeah, identity crisis, I will call it. Yeah. Yeah. I think I am an entirely different person from who I was before I started studying. Everybody has life transitions and changes, so that doesn't make me unique in that sense. But definitely, studying psychology and then understanding that there are bigger things than myself opened up the picture for me. So something that has been an interesting discovery, at least for myself—I don't know if this is going to work for everybody, but it does wonders for me—is that when I feel all this pressure crushing me, towards performance, towards, who am I? I need to figure out everything right now and have an action plan—I think: but I'm enough. It's just that I'm enough. I don't have to do anything. I'm alive because I'm alive. I serve the purpose of life itself. I'm a part of everything, and I'm not special. And I love repeating that to myself. I'm not special. I'm just a part of something bigger than me. And look at the universe, the wonders of life. How can it be more than that? No, that is more than me. I'm just a part of that. And I'm playing my role, and that is my role. That is my purpose. I'm enough. Therefore, I don't have to prove anything. Yeah. And I think that has been one of the reliefs: you can get out of your own head, basically, and stop overthinking.
Casey Berglund:
Yeah. There's this meme that has been going around—you may have seen it—of the galaxy, and there's this little speck. It's like, here you are paying your taxes and living in fear. Yeah. And that's what comes to mind with what you shared: like, I am enough. And I've been on a similar journey. And where I get caught up is in this place of, okay, if I'm serving my purpose by simply being alive and being, there's this part of me that's like, well, there's nothing for me to do in my business then. Or, why bother coaching people into a different sort of career? It's almost like our whole capitalistic world is built on solving problems—selling someone something to solve a problem, to get a certain external outcome. Yeah. And I work in the transformational space, and I guess I care about people coming back home to themselves, but I've gone through this crisis, and it's reflected in my business. My business revenue has dropped hugely because I have to be in integrity with what I'm doing. And I've gone through this space of: if I genuinely believe that no human is broken or a problem or needs to prove themselves through doing—if I genuinely believe that humans are good and perfect and you're already serving your purpose—yeah, then what am I selling in my business? Right. You know? Right. And I'm curious: how do you hold "I'm not special, and I'm enough, and I'm just a speck playing my role in the bigger picture of the universe," and then also do your work? It feels sometimes so hard for me to hold all of that. Yeah. And I do, right? We're recording this podcast; we're doing something with it. I have clients that I help and support. And sometimes I'm like, oh God, if you only knew how loved you are. It makes me want to cry. Yeah. I'm like, there's nothing wrong. We don't have anything to fix. Yeah. You know?
So how do you come into connection with that deep existential truth and then wake up in the morning and go to your research or go give that talk or go, you know what I mean?
Stephany Oliveros:
Wow. This is going deep into philosophy, and I can only tell you my truth about this. For me, if nothing makes sense—if nothing makes sense—then the only thing that makes sense, and this sounds super cheesy, is love. Yeah. I think there are moments of being awake and moments of being asleep throughout life, or even within the same day. There are moments when you're fully aware of your thoughts, of your role in the universe, and those moments are pure mindfulness and peace. So I think that's the objective. We want peace. And then other things start to flow when we are in that state of mind. Yeah. But you have to be awake to realize that. And sometimes the only way of being awake is having somebody pull you out of the dark. Yeah. So it's not that we don't have anything to fix, but if we are aligned with what is around us, maybe we can detect somebody who needs a hand, or something that might need fixing—not because you can save the world and it will transform a life—motivations can vary—but because maybe you were there for a reason, and you are playing your role, not as a passive passenger in life. Maybe life or God or energy—call it whatever you want—wants you there to play an active role in that. Not from the ego perspective of, I'm going to be the savior of this person, but maybe from an intuitive perspective of, I had a similar experience, and I would have loved to have a person there to listen to me when I had that moment; I can be there for this person if they want it and need it. I think the problem is when we step in without them asking us to, or assuming what they need. But I think sometimes it's brilliant to think beyond yourself, and that is super purposeful. This is where altruism comes from: going beyond your mere existence. Because someone can be super aware and mindful, but still be hungry. It's like, okay, can we resolve this?
So they can self-actualize. Like, basic needs. Yeah. So yeah.
Casey Berglund:
Thank you for sharing that. I know we just really went into it, which I love, and I'm grateful that you went there with me. I wouldn't ask those questions if I didn't sense your capacity and curiosity in sharing and answering them. And it's amazing to me to highlight that in this conversation, the topic of AI in and of itself brought us to that place. This kind of circles back to what you were sharing earlier: this isn't just about productivity hacks. Yeah. This is about helping us as humans ask more philosophical questions, and answer the questions we've been asking since the beginning of time. Yeah. Yeah. Yeah.
Stephany Oliveros:
I think AI research needs more people from the humanities. It needs artists, philosophers, ethicists, lawyers, educators, psychologists. It needs them because there is no way of creating a system that truly understands embodiment, that truly understands the way we see things, and that is as safe as possible, without the input of the community. And we are the input: people from multiple backgrounds, with multiple perspectives, ready to really pitch in.
Casey Berglund:
Yeah. One thing you didn't say that I'm going to say is it needs women. Huh. So yeah. It needs women. So tell me about that. You are a co-founder of SheAI. Tell me about the mission of SheAI and why it's important to have women involved in these conversations, in research, and usage of various AI tools.
Stephany Oliveros:
Yeah, absolutely. So imagine that you have an AI system, not a chatbot like GPT, but a system in charge of detecting or diagnosing cancer for specific patients. But it has been trained mostly on data from male patients of certain age ranges, certain cultures, and certain habits. How accurately do you think it's going to diagnose a woman who comes from an underserved community where data is very scarce? So with AI we have the opportunity to close this gap, or to amplify it in a way we can't control. And this is with medicine, but we can find the same biases in hiring processes in companies, in recruitment, and in bank loan applications as well. So are we truly empowering people and equality with a system that is just reflecting the bias of society? Something even more dangerous is the case of humanitarian action. If you have an AI algorithm trained with certain biases, then in the case of an earthquake, which is something that happened recently, or any natural disaster, it would recommend deploying aid to certain areas with certain conditions that favor some groups over others, often leaving women behind. So that means we have a problem with data. We have a problem with how these models are trained and the sources they are getting this data from, and this is creating an even more biased and segregated society than the one we have right now. It is one of the main things that we need to challenge. There are multiple ways we can address this. Right now they are partly resolving it with something called synthetic data: they basically generate artificial data based on specific patterns in the real data. It's a bit more complicated than that, but that's the high-level view.
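To make the synthetic-data idea Stephany describes concrete, here is a minimal toy sketch in Python. All the numbers and field names are invented for illustration, and real synthetic-data generators are far more sophisticated, but the core move is the same: learn the statistical pattern of an underrepresented group and generate extra records from it so the training set is balanced.

```python
import random

random.seed(0)

# Toy "real" dataset: mostly male patients, few female ones.
real = [{"sex": "M", "age": random.gauss(55, 8)} for _ in range(90)]
real += [{"sex": "F", "age": random.gauss(50, 10)} for _ in range(10)]

def synthesize(records, sex, n):
    """Generate n synthetic records for one group by sampling from
    that group's own age distribution (mean and standard deviation).
    A stand-in for real generative approaches."""
    ages = [r["age"] for r in records if r["sex"] == sex]
    mean = sum(ages) / len(ages)
    std = (sum((a - mean) ** 2 for a in ages) / len(ages)) ** 0.5
    return [{"sex": sex, "age": random.gauss(mean, std)} for _ in range(n)]

# Top up the underrepresented group so both are equally represented.
balanced = real + synthesize(real, "F", 80)
counts = {s: sum(1 for r in balanced if r["sex"] == s) for s in ("M", "F")}
print(counts)  # both groups now have 90 records
```

The limitation Stephany hints at applies here too: the synthetic records can only echo the patterns already present in the scarce real data, which is why gathering genuinely diverse data matters.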
Casey Berglund:
Thank you for keeping it simple. Trying to find the right words.
Stephany Oliveros:
You're doing great. Another interesting solution could be something called small language models, which are like a small version of something like GPT that is very, very good at one thing. So instead of being able to create music and images and videos and code and math, it does just one specific task, because that's more energy-efficient and a bit more reliable. So there are some experiments on the ground about how you can deal with this. Something that we are doing with SheAI is, first, you start with education. We need to show people where these problems are happening in different industries, and not only for coders, not only for the people who are developing AI. If you're a doctor, if you're a nurse, if you are just a mom at home figuring out the next move, you need to understand this. Even if you don't participate, even if you don't use it, you need to understand it in order to understand what's going to happen globally with the economy, with your country, with your community, with your kids. It's going to affect us all. So this is what we're doing at SheAI: bringing that education across multiple industries. And the next thing that we would like to do is start our own research into how we can create a model that precisely tackles this bias. How can we gather true, meaningful data from women, from diverse backgrounds, that can help us address this?
Casey Berglund:
Wow. It's so incredible and so necessary. I think it was you or one of your co-founders who shared the percentage of users, I can't remember if it was ChatGPT or something else, that are men; I think it was 80% or something. Is that right? Yeah. So if it's 80% men using this tool, and it's a tool that's trained based on use, then we're getting male-dominated reflections back.
Stephany Oliveros:
Exactly. Exactly. And you're just leaving people behind. I think the playing field is a little bit more even than it was before. And what is interesting is this memory function, where it can recall your conversations and learn from them. So it can be more or less insightful about your cultural background, your view of the world, but it's pretty limited. It's still pretty limited. I highly doubt that people in Silicon Valley understand the life of a mother in Tanzania. I highly doubt that. So the problem is that they are not including people, the end users, in the conversation, and they are shaping the future of the world without accounting for democracy. We are not even given a voice to vote on whether we want this or not. They are just deciding, releasing products, and seeing how it goes. So this is why we need education on this topic. And secondly, we need to bring in more people from all these backgrounds, more women. Because if it's going to give answers that decide on people's health, on education systems for kids, on political decisions, well, maybe we need the input of more diverse people.
Casey Berglund:
Yeah, exactly. And of course, it's women who are spearheading that collective care conversation, you know?
Stephany Oliveros:
Yeah. Yeah. One of my co-founders is an anthropologist, and she said that research has shown that women tend to empower the communities around them: if they earn income, they buy something for somebody who needs it, or educate other people, or support others in other ways. It's not as individualistic. So it's really interesting to understand why we behave in that way, and that needs to be part of the AI conversation. Because if we ask AI to create a business plan for a big corporation and to analyze behavioral data, and it only, or mainly, has biased, limited data, it just understands the world in one specific way, and that's not necessarily the right way. And it could be used in different ways depending on the situation.
Casey Berglund:
Yeah. So obviously I feel hopeful knowing you and your co-founders, knowing that SheAI exists and offers education, and that you're supported by the United Nations, which is incredible. I feel hopeful and grateful knowing that. And are there other groups of women or people from different areas? Do we have hope of balancing the maybe male-dominated, Silicon Valley-dominated landscape? Like, how are we doing?
Stephany Oliveros:
Well, I don't think that at the point we are right now, we can even think of competing with the gigantic market share that these companies have. However, I want to believe in the power of the community, particularly women, who are more community-driven. So yes, there are multiple organizations, groups, and companies that are starting to bring these things up and connect us all. And this is something we want to do through SheAI as well. I don't want to be a gigantic, massive corporation that holds a monopoly. I want to be a small, specialized company that aligns with many others so that we can create a big collective. And with the big collective is where we can position ourselves in other fields: maybe we can go to schools, to universities, to local governments, and start saying, why are you going to use that when you can protect the data of your users and support your community with this solution, which, by the way, is more cost-efficient?
Casey Berglund:
So I think she's a businesswoman as well.
Stephany Oliveros:
Yeah, you have to talk the language. So yes, I think there is definitely hope. What is interesting about AI is that it's not a technology that is patented and unique to one company. It's openly available to everybody. And now that it is a bit smarter than before, it's a great opportunity to do more than just be a passive user: to start building, testing, being curious around it. It really pushes you to try it, and to try it not for answers, like we were discussing, but for self-reflection, for ideas, to find ways of creating, not just leaving it at your first draft. Yeah. That is how we create hope: by building something that is not only in the hands of a few. Yeah.
Casey Berglund:
So I feel like you telepathically picked up on what I was going to ask and started to answer it. In my mind, I'm thinking about the personal use case: the average woman who's listening to this podcast and wants to use AI tools to make her life better, to discover who she is, to help her on her introspective journey, or to improve her mental health. But as we got into this conversation, there's a bigger piece being illuminated about the importance of involving women, and not just women, but diverse groups. Yes. You know? And so it makes me wonder, how else can someone be part of that bigger evolution?
Stephany Oliveros:
Yes. Yes. So I have a great example here. It's called vibe coding. Vibe coding is just writing in plain English, or your own language, giving commands to a large language model like Claude or ChatGPT or Google Gemini to create an app. It does the whole code for you. You just have to test it, see what works and what doesn't, and write again in plain English: this doesn't work, do it again. And you keep going until you get to something interesting. As an example of this, I was watching a video by an interesting influencer called Shihira Kaunis. Her tagline is "Ask ChatGPT," and she's an AI consultant. That's smart. I like that tagline. I really like her. And she was explaining a use case of somebody who created an app, well, not an app, a landing page, something very simple, that helps you figure out how to make soap at home. So what is the right proportion of coconut oil and herbs for the specific soap that you want to make? This person has soap-making as a hobby, and they knew many people were struggling to find the right recipe, with working out the amounts and calculating them. So it's like a calculator: you put in the ingredients, and it automatically gives you different recipes to try. And that is super cool, because it shows you that you need zero coding experience. They literally just went to one of these models and asked it to create it. They changed a couple of things, but it was ready to use. So this is what I mean about going beyond. Think about: what do you like? What are your hobbies? What is your area of expertise, and what is missing there? Where can you put your creative input into it? So test and create. And I think that is something I am more excited about than asking ChatGPT to organize your travel itinerary or write an email.
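The soap-calculator Stephany describes could come out of a vibe-coding session looking something like this minimal Python sketch. The recipe names and proportions here are invented for illustration (real soap-making relies on saponification tables for each oil); the point is how little code the actual logic needs.

```python
# Hypothetical ingredient proportions, purely illustrative.
RECIPES = {
    "coconut-lavender": {"coconut oil": 0.70, "olive oil": 0.25, "lavender": 0.05},
    "olive-herb":       {"coconut oil": 0.30, "olive oil": 0.65, "mixed herbs": 0.05},
}

def scale_recipe(name: str, total_grams: float) -> dict:
    """Scale a recipe's proportions to a target batch size in grams."""
    ratios = RECIPES[name]
    return {ingredient: round(share * total_grams, 1)
            for ingredient, share in ratios.items()}

print(scale_recipe("coconut-lavender", 500))
# {'coconut oil': 350.0, 'olive oil': 125.0, 'lavender': 25.0}
```

In a vibe-coding workflow, a hobbyist would describe this behavior in plain English, let the model generate something like the above plus a simple web page around it, and then iterate by telling the model what to fix.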
Casey Berglund:
Yeah. If I, for example, use one of these models for that kind of purpose, like the soap example, what does that do for the bigger picture, the bigger mission of having more diverse voices using these platforms? What impact does that actually have beyond me creating something?
Stephany Oliveros:
I mean, in that sense, not so much, because you would have to actually retrain the model in order to change how things work. So those are separate areas, let's say. One area I think is interesting is exploring creativity and building something that is missing in your own field. And for the other, definitely, education is the first step. SheAI.
Casey Berglund:
That's where SheAI comes in. We'll put the link in the show notes.
Stephany Oliveros:
So yeah, you need to understand what you are playing with, so you start from there. And once you start from there, at least it has shed light on what some next steps could be. One potential next step: there are very good and interesting research institutes that are creating datasets and surveys, research that needs your input. So I think that could be something to go toward. Yeah. I love that. But definitely, I think spreading the right message is the key to all of it. That is how you really change things. And, like you said, being able to say, this is not acceptable. For example, deepfakes, or being able to recognize when something is created by AI. At some point, maybe we will not be able to recognize that anymore. So it's like, okay, I need to develop my critical thinking so I don't believe everything I see. Yes. As an individual, it can feel a little powerless; no, you can't really retrain the model that's out there. But you kind of can, because you can still give it feedback on your perspective of the world, say this is acceptable and this is not acceptable, learn how to interact with it correctly, and then create and innovate. And by tackling all of this, plus being educated, you can exercise your right to go to the government and say, we want this and we don't want that. That's for sure the way forward, I think.
Casey Berglund:
Yeah. It's the whole "when you know better, you do better." So educating yourself is the first step.
Stephany Oliveros:
Of course, of course. Otherwise we'll just be passive users waiting for these companies to release their updates, saying, oh yes, I'm trying this, it's super cool. But beyond that, they are literally deciding on your future without you even knowing it. So no, you need to be aware of that and be able to voice it as well, to create that governmental pressure for them to put certain boundaries in place, so that the people decide how far they want to go, and not only the technologists. Wow. Knowledge is power. Yeah. Knowledge about yourself. So let's not allow external things to blur that in ourselves, but rather use AI as a smart way to bring it out.
Casey Berglund:
Bring out your inner empowerment and radiance and essence. Yeah. I love it. Thank you so much for joining me here.
Thank you. Wasn't that a fascinating conversation?
Thank you so much for listening. What really stays with me is this piece about knowledge being power and not just knowledge about AI, but also knowledge about ourselves. Because at the end of the day, no machine can replace your lived experience, your body's wisdom, or the love you bring into a room. If you'd like support with reconnecting with that inner compass in your work and life, please check out my free training, Your Pathway to a Calm, Purposeful Career or Business That Fuels a Life You Love. You'll find the link in the show notes. And of course, if you'd like to connect more with Stephany and her work at SheAI, all of her links are below in the show notes as well. Thank you again so, so much for listening. If this episode sparked something in you, share it with a friend. Again, make sure you're following the Purpose Map podcast so you never miss a single episode. Until next time, remember your wisdom matters and your involvement with technology does too.