Human Leadership in the Age of AI: Oliver Smith on Ethics and Transition

Mar 31, 2025 | Podcast

Today’s leaders are navigating a rapidly evolving landscape, one where artificial intelligence promises both unprecedented opportunities and complex challenges. To effectively lead in this new era, a strong understanding of AI ethics and the ability to manage significant transitions are paramount.

On this episode of Life + Leadership, we had the distinct pleasure of speaking with Oliver Smith, the Founder and Director of Daedalus Futures, a Responsible AI consultancy. Oliver brings a wealth of experience from his time as part of the founding executive team at Koa Health, where he was responsible for strategy and ethics, and his prior roles in UK government and healthcare innovation. His journey offers invaluable insights into the intersection of human leadership and technological advancement.

Oliver shared his perspectives on the current mindset of executives towards AI, the critical ethical considerations that must be addressed, and the very human experience of transitioning into a new field. His thoughtful approach underscores the necessity of grounding technological adoption in fundamental human values.

Understanding Executive Perspectives on AI

Oliver highlighted a shift in how leaders are approaching AI. While there was initial “corporate FOMO” driving a desire to implement large language models broadly, he observes a move towards a more sober and value-focused perspective. He advises leaders to focus on how AI can augment their business’s core value proposition rather than simply replacing existing functions. For businesses where value lies in high-quality, deeply considered advice, Oliver suggests that the speed of an LLM might not outweigh concerns about accuracy and potential “hallucinations”.

He emphasizes the importance of a “triumvirate” approach: focusing on desired outcomes, company values, and continuous engagement with clients and employees to guide responsible AI adoption.

Navigating the Ethical Landscape of AI

When discussing the ethical integration of AI, Oliver urges CEOs to reflect on the trade-offs and difficult conversations they’ve already had concerning their products. He posits that these moments of wrestling with different value judgments are inherently ethical conversations. His key message is that ethics isn’t a separate exercise but is already embedded in the decisions leaders make. The task then becomes to bring these ethical considerations to the forefront and contextualize them within the impact of AI on customer outcomes and company values. He stresses that a deep philosophical background isn’t a prerequisite for engaging with AI ethics; it’s about being conscious and explicit about the value judgments being made.

Debunking the Myth of the “AI PhD”

Oliver shared his skepticism regarding the concept of “genius AI” or “PhD level AI”. While acknowledging AI’s strength in pattern recognition and data analysis, he questions whether it can truly replicate the creativity, lateral thinking, and intuition that characterize human expertise, particularly at the doctoral level. He reminds leaders of the inherent value of their own knowledge and experience, noting that he often finds AI less insightful in areas where he possesses deep expertise. He cautions against the idea that AI will simply replace human professionals, emphasizing the unique contributions that humans bring.

The Human Side of Transition

Oliver candidly discussed his own career transition from a long tenure in healthcare to establishing an AI ethics consultancy. He described the intellectual stimulation of learning about technology but highlighted the emotional challenges of building a new business, including moments of doubt and feeling like an imposter. He shared valuable insights into navigating such transitions:

  • The importance of a supportive network: His wife’s unwavering support was crucial.
  • Staying rooted in personal goals: Pursuing a Master's in AI Ethics and Society provided both academic grounding and a sense of personal fulfillment.
  • Leveraging existing strengths: Rediscovering and applying his policy background to the AI space brought renewed purpose and confidence.

His experience underscores the idea that transitions, especially significant career shifts, are not purely intellectual endeavors but deeply emotional ones that require support and a connection to one’s core strengths and aspirations.

Practical Advice for Leaders Engaging with AI Personally

Oliver offered practical advice for senior leaders seeking to better understand and engage with AI:

  • Use it directly: Experiment with AI in your day-to-day work, especially for tasks involving thinking and analysis.
  • Treat it as a thought partner: Explore its potential to offer different perspectives.
  • Establish a baseline: Before using AI, jot down your own initial thoughts on a query. This helps you ask better questions and avoids anchoring bias.
  • Be mindful of limitations: Recognize that AI may excel at data synthesis but may lack nuanced understanding of human and emotional elements.
  • Challenge its responses: Don’t accept the first answer as definitive; push back and ask for alternative perspectives to uncover potential limitations or biases.
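For readers who want to make this routine concrete, the "establish a baseline, query, then challenge" loop above can be sketched in a few lines of Python. This is only an illustrative sketch: `ask` is a hypothetical stand-in for whichever chat-model API you actually use, and the prompts are examples, not recommendations.

```python
def ask(prompt: str) -> str:
    # Hypothetical stand-in: replace this with a real call to your model of choice.
    return f"[model response to: {prompt!r}]"

def baseline_then_challenge(question: str, my_baseline: str) -> dict:
    """Record your own view first, then query the model and push back once."""
    first = ask(question)
    # Don't accept the first answer as definitive: ask for an alternative view
    # and an explicit limitation, per the advice above.
    challenge = (
        "Give an alternative perspective to your previous answer, "
        "and name one limitation or bias it might contain.\n\n"
        f"Previous answer: {first}"
    )
    second = ask(challenge)
    return {"my_baseline": my_baseline, "first_answer": first, "pushback": second}

result = baseline_then_challenge(
    "Should we build this feature in-house or buy it?",
    my_baseline="Lean toward buying; integration is our weak spot.",
)
```

Writing the baseline down before calling the model is the point: it gives you something of your own to compare the answers against, rather than anchoring on whatever the model says first.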

The Enduring Importance of Human Leadership

In closing, Oliver emphasized that leaders must embrace transitions, both technological and organizational, with their whole selves – intellectually and emotionally. He stressed the need for leaders to have the right support systems in place to navigate these emotional aspects effectively. Ultimately, he sees the “real joy of leadership” in guiding teams through challenging transitions, fostering growth and resilience along the way.

By grounding the adoption of powerful technologies like AI in a strong ethical framework and by leading with empathy through inevitable transitions, we can ensure that progress serves humanity and fosters thriving organizations. Oliver Smith’s insights offer a valuable roadmap for navigating this complex and exciting future.

People in This Episode

Oliver Smith 

TRANSCRIPT

Tegan Trovato
Ollie Smith, welcome to the podcast.

Oliver Smith
Pleasure to be here.

Tegan Trovato
We are so lucky to have you here today. For listeners to know, we share some really lovely connections in common. And one of the things I’ve loved about getting to know Ollie is that, well, I’ll speak to you, Ollie.

One of the things I’ve loved about getting to know you is that you’re just really authentic and personable about the experience of leadership. And you’re doing some really cutting edge work right now, which we’re going to dig into. But I appreciate that you’re able to acknowledge the intellectual experience and the emotional experience of everything that you are working on and also what’s going on in the broader world related to your work.

So we are going to talk a lot today about the theme of transitions. And we’re going to go all over the place in terms of all the places transitions can happen. And listeners need to know at the top that part of this conversation is about transitioning to the ethical use of AI.

So we’re going to talk human and we’re going to talk tech transitions. Sound good?

Oliver Smith
Yeah, sounds great.

Tegan Trovato
Well, to kick us off, why don’t you tell listeners in your own words, because we’re sharing your bio, but in your own words, tell them what you do for work and what you’re really spending your intellectual capacity and your thinking time on right now.

Oliver Smith
So what I do for work is I help organizations create transformational and responsible strategies for AI so that they’re not just complying with how to use AI responsibly. They’re actually using it in a way that actually transforms what their organization does. And as you said, I’m thinking a lot about transitions at the moment, and that’s transitions in the workplace and how organizations can best use AI, but also thinking about as AI just seems to be accelerating even more, like every week, there’s a new, even better, even cheaper AI available.

What does that mean? And I’m particularly thinking about human governance of AI at the moment. I’m working for a client that’s very much focused on putting people at the center of how societies make the best use of AI.

And that’s really exciting. It’s also really challenging, which is very intellectually stimulating at the same time.

Tegan Trovato
Yeah. I mean, as an owner of an AI ethics firm, what do you see keeping CEOs and execs up at night related to AI and the integration of AI?

Oliver Smith
I see that at two levels. I see one level is quite personal and how they use AI themselves and how they’re able to kind of make the best use of it. But I also find they think of it, of course, at an organizational level as well, and kind of what it means to them. And I think that the personal one somehow is hardest.

And this is something that I also feel as well. I have this sense, quite often when I'm using, whether it's Claude or ChatGPT or whichever: am I doing it wrong? Is there some extra thing that I should be doing that should get a better result?

Because quite often I’ll use it and think, but this isn’t that great. This isn’t the kind of the amazing kind of groundbreaking results that I expect. Now, sometimes it is good and sometimes it can do things that I could not do myself, for instance.

So sometimes I would use it for creating images for presentations and wow, is it really good at doing that. I've noticed how it's improved over the past two years. When you ask it to do an image of a person, it no longer has six fingers, which is helpful.

I’ve seen that since, but there are other things where I ask it to do. That’s kind of an okay summary, but do I really trust it? And I find that a lot of execs I talk to kind of have similar worries.

And the client I work with, I actually think he’s very good at integrating it into his workflow and knowing when to use it and knowing when not to use it. And I think that it seems to be that everyone’s struggling with that at the moment on a personal level.

Tegan Trovato
How about at the enterprise level?

Oliver Smith
The enterprise level. So the enterprise level, it feels like there’s a sort of corporate FOMO and everyone’s got this sense of, we just need to integrate, we need to use AI, we need to do something. That was certainly the story last year.

I think it actually has slightly calmed down this year. I think people are being much more sober about it, but there was a time when everyone was just wanting to kind of use some sort of large language model for almost whatever. And they weren’t really thinking about the consequences of that.

And the kind of conversations that I was having, and I'm still having to a certain extent, are: focus on what creates value in your business. Focus on what value your business creates for your customers and your clients, and how therefore can the AI help with that? And often that isn't replacing what you do, it's an augmentation question. But knowing where you really do create value really helps.

I think you understand, is this going to be helpful for me or not? So if your value comes in that really, really super high quality advice, it doesn’t need to be the fastest advice, but it does need to be absolutely right. Then maybe an LLM is not going to be the thing that you want to be using because it’s going to be good at speed, but you’re always going to be a bit worried about has it hallucinated in some way, shape or form.

So I’m always advising companies, focus on the values you create, focus on the values you have as a company as well, and also continue to engage and talk to your clients, your employees and other stakeholders you have. And that triumvirate of the outcomes you want, the values you have and the people that you talk to, that is going to see you true to be able to use the AI in a way that’s, yes, it’s going to be transformational for your business, but it’s also going to be responsible, it’s going to be ethical, you’re going to use it well because you’ve been really thoughtful about it.

Tegan Trovato
To peel back the layer on the ethics, if you would, what are maybe a couple of things you would ask CEOs to be thinking about when it comes to the ethical integration and transition to AI?

Oliver Smith
I will ask them about the trade-offs or the difficult conversations that they’ve had, particularly when they’ve been thinking through their products. And those trade-offs might be how quickly can we ship this versus kind of doing an extra bit of work to kind of improve the accuracy of the algorithm, for example, or how have you thought about the question of supporting privacy versus kind of having a very low friction onboarding process. So I ask them, think about the conversations you’ve had in your team where you said, should we do this or this?

Where are there tricky conversations? Those are your ethical conversations. You may not have realized it at the time, but those are the conversations where you are wrestling with different value judgments.

And so one of the really important points I always try and get across to leaders is there isn’t a moment where you haven’t been ethical. It’s not like you can just start, and now I’m going to be ethical. Today is our ethics day.

That doesn’t work like that. You’ve always been making value judgments and you’ve always been balancing, well, we’re going to have to decide that this is more important than this. And these are the reasons for it.

And those are your ethical judgments. And so you already have an ethical stance. And what I try and support them to do is say, well, let’s bring those out.

Let’s be explicit about it. And in being explicit about it, we can contextualize that in how is AI affecting those decisions? How is AI therefore then affecting those decisions and how they create better outcomes for your customers?

And is it going to break your values? Is it something we need to further engage your clients, your employees about to see actually are these new ethical balances, new ethical judgments that we need to make? Are they against what we may have made in the past?

So I try and really integrate this into what you've already been doing. This isn't magic. You don't need a PhD in philosophy to be doing AI ethics or to implement AI responsibly.

Tegan Trovato
What I appreciate there is just raising our consciousness around what is an ethical consideration. And you made the comment that often we’re engaged in those and not really conscious of them. So yeah, I could see where that’s the beginning of the work oftentimes with clients, if I had to guess.

Yeah, beautiful. Absolutely. And you mentioned needing a PhD in AI.

So that’s a phrase you’ve used with me offline. So tell us where that comes into play. The phrase, AI PhD.

Oliver Smith
As I said before, there's lots of improvements all the time in AI. And as people are talking about artificial general intelligence, and as we move to AI that's as capable as humans on a very large range of tasks and actions, I've started to hear people talk more and more about genius AI or PhD level AI. And people say, imagine if you had a team of 10,000 PhDs, and what would you do with that?

And I think there’s a number of challenges to that. I mean, I think for one, I’m not sure that we should be saying that the highest achievement that a human can possibly have is to have a PhD, because I think there’s lots of other forms of human achievement, whether it’s writing a fantastic song, or amazing dancers, physicality, or amazing sports people that are just able to do things that I certainly I couldn’t never even imagine doing. And there’s lots of different forms of human achievement.

So one is that I’m not sure that’s the right way of looking at it. But the second is, kind of going back to kind of my experience, and am I doing this wrong, is that it’s quite far from getting to the level of creativity and adding something new to human knowledge that I associate with a PhD, with a doctorate. I think AIs are really, really good at finding patterns and seeing and saying, okay, well, there’s loads of data and no human can possibly analyse all this data.

And so we’re going to come up with some insight from that. But they haven’t necessarily kind of added anything conceptually new. They haven’t thought laterally and said, oh, you know what, actually, this concept, maybe that relates to this concept.

And so maybe that leads to something totally different. And that’s, I think, where the creativity of humans comes in as well. And it’s also where I think that going back to the kind of conversations I have with leaders about their own personal worries, what I’ll say to them is, as well as thinking about your business and the value that your business creates, remember the value that you give as an individual, and what you have, because you know a lot, and you have a lot of experience of using that knowledge.

And that’s really valuable. And I find that when I’m most frustrated or disappointed in using an AI, it’s actually where I know most. So if I’m asking it about healthcare, which I know a lot about, or even kind of AI ethics, which you also know a lot about, I find that’s actually a really disappointing answer.

Or because I know the right way to ask the question, like, I can go further and still get a reasonably kind of an average answer. But I don’t want an average, mediocre answer. I want a really good answer.

And so what I would say is, well, you need to back yourself because you know a lot and you will still be able to succeed. Yes, you should use the AI, but don't think it's going to replace you. Because if you think that you're going to be replaced by something that can come up with the sort of lowest common denominator answer, and that's being a bit unfair to AI, I should say.

But there is a lot that you add that the AI can't add. And I think when people talk about PhD level AI, they miss out on, A, the other ways of achieving that humans have, but also the fact that what really comes from knowing a lot is being able to ask the right questions and somehow having this intuition, you know, when I can't quite put my finger on it, but I just know that that's not right. Or I know that that somehow links to that.

I couldn’t tell you why, but I’ve got this spidey sense after having worked on this for 20 years, that that’s going to happen and the AI can’t do that. Right.

Tegan Trovato
Beautiful. So interesting, though, this is not where you spent your life. So let’s transition into your transition into AI.

So when we connected, you were telling me about at a high level, like the emotional experience of your career transition from, yeah, I’m going to call it your previous work life into your current life. So tell listeners what that transition was from what into this.

Oliver Smith
I had worked for about 15 years or so in healthcare, and I'd done that in UK government. I worked with the Department of Health for many years running different policy areas, like tobacco policy. I worked for a large foundation, so I was spending their money on health innovation.

I’d also worked for a health tech company, digital mental health tech company, and I was their strategy director and head of ethics. And it was really in that company where I’ve really got into technology and especially AI, because the company was called Co-Health, trying to use AI to help understand people’s mental health and help also help to guide them through what they were going through to get to battle mental health as well. And that was both wellbeing, but also mental illness as well.

And so I really, really got into kind of AI and that's also where I got into AI ethics, because very early on at Koa Health, we recognised that why would anyone trust this company with their very personal and private mental health data, right? With their feelings, very personally, why would they trust us at all? And also why would they trust the recommendation from just some random app on a phone in this instance to do this?

Why would they think that’s going to improve their mental health? So we decided that we needed to take an ethical approach because that would build trust in what we were doing. And so that was how I got into the mix of kind of AI and ethics together and really, really loved it.

I have to say I kind of found that it was something that I found really intellectually stimulating. I thought it was important. It was really integral to the company and the culture of Koa Health and really part of how Koa Health was able to win business as well.

I left Koa Health at the end of 2022 and I thought, I really like this and it seems like AI is going to continue.

Tegan Trovato
And… You were so right about that. That I got right.

Oliver Smith
I said, why don’t I try and make something of this kind of AI ethics? Because people were talking about ethics or responsible AI more and more. And I thought, well, this is something I think I can contribute to.

I may not have, I suppose the academic creds here, but I feel that having spent six years, having been a head of ethics in a tech firm, I think I had some real world experience that I thought could be valuable. And so I decided to try and build a business out of that. It’s only really after two years that I can say, you know what it’s like when you run a business, right?

You can never say with, yeah, this is going to be fine forever and ever, ever. No, that’s not how life is. But I am feeling, I think for the first time, much more comfortable.

It’s taken two years to get there.

Tegan Trovato
Yeah. So as an outsider listening to your story, you moved from something that was arguably very human centric and philosophical and emotional focus on people’s health with an integration of technology. And now you are looking at the technology, maybe not as the primary, actually, you’ve described it more as running in parallel to what’s the human experience and how do we leverage the tech?

A fair statement?

Oliver Smith
Yeah, yeah, yeah, yeah. I always start with the human.

Tegan Trovato
Yeah, good. Thank you. That’s part of the ethical considerations, I’m sure.

But when you first went into a full focus on AI, I have to imagine that was the learning curve, since you were coming from a very clearly human centered station for many years. So what was that like for you?

Oliver Smith
That bit actually was pretty intellectually stimulating, because I recognize that I am not going to become a data scientist. I don't want to become a data scientist. I also don't think I'd be a very good one.

But I did want to learn enough about how AI works, and be able to ask the right questions and be able to understand what people are talking about and be able to engage with different parts of organizations. Because one of the aspects that I really enjoyed at Koa Health, and I still enjoy now when I work with organizations, is working with different teams and helping to translate across different teams. Because the R&D team will talk a slightly different professional language, quite often a very different professional language to the sales team, which is different, again, from the product team.

And I really like that. So learning the technical side was intellectually stimulating, I found. But that probably wasn’t the most difficult bit.

And I thought that would be more difficult. So I think you’re right to ask about that. But what I actually found most difficult was what I thought would be a kind of market opportunity always seemed to be just a little bit further down the track.

It’s just down there. And it wasn’t really emerging. And that, I found, people would talk a lot, but they wouldn’t necessarily then do anything about it.

And I think that was the thing that I kind of struggled with most.

Tegan Trovato
You told me when it came to making this change, you said, you know, intellectually, a transition will be difficult. But once you’re in it, you are surprised. So what would you say more if you were to tell us more about that?

Oliver Smith
That was a real wake-up, I think. Though that makes it sound like kind of one day I woke up and I was like, well, this is really hard. It's just kind of this growing sensation, this growing worry.

And I went from having not just a stable position, but also a position where I knew what I was doing. And I knew what the likely problems were and how to solve them to something that I was still learning about. And although I had a lot of experience, I didn’t feel like I was where I needed to be.

And I was worried that people might find out. And when the money's not coming in, although I had a runway, after a while, you start to get a bit, what have I done? This is insane.

I’m trying to do something that maybe I won’t actually be any good at at all, because who knows? I’m kind of struggling to kind of get the clients in. And I think more than anything, it was just this sense of I’m trying to build a new version of me, a new kind of work version of me.

And it’s really hard. So what does that say about me? I found it really emotionally very challenging.

And there were moments where I was just like super upset and like, what am I doing? What am I doing? This is insane.

Tegan Trovato
Yeah. Thank you for sharing that. And for being honest, this is a place where I wish more leaders were honest.

They’re honest with us executive coaches behind the scenes. I mean, it’s literally part of what we help a lot of execs navigate. But people need to remember that no matter how experienced we are, when we step into something new, the tendency to feel like an imposter, especially if we have some healthy humility about us.

We should start looking at imposter syndrome as a marker of health in a way. We have some healthy questioning about capability and not overestimating ourselves sometimes. You’ve obviously worked through that.

And if it shows up, you know what to do with it. But I appreciate you being honest because everyone I know has experienced those moments. And often they’re a signal to us that we’re about to launch.

It’s sort of that last moment of self-check before things start to move. I don’t know if that resonates for you, but that’s how I’ve experienced it personally.

Oliver Smith
Yeah, I know that does resonate because there's probably a feeling of restlessness when I was in my job at Koa. I mean, during the job, I enjoyed working with the people, but a sense of, I want something else. I want to do something else.

I’m excited by this new thing. And that restlessness was the thing that made me take the leap. But then after a while, the excitement subsides a bit and the worries come back.

And the voices on this shoulder of, you can’t do this. You’ve done the wrong thing. You can’t pay the mortgage.

What on earth are you doing? You should have gone back. You’re going to fail.

You’re going to fail. That’s the big one that for me, anyway, they’re going, you’re going to fail. And that’s not a comfortable experience.

But what really helped me is, well, firstly, my wife was super supportive through all of this. And she is someone that is also very steeped in the world of kind of humanities and technology. And so she said like, this is really important.

You are doing the right thing. I know it’s hard. And she was very supportive.

The second thing that helped actually was that I decided that now is the moment to do a master's. And so I'm just finishing off, over the next few months, a part-time master's in AI ethics and society to try and build out the academic side of what I know more practically. And that was a bit of a long-held dream, actually, to kind of go back to university and study again.

And doing that really gave me something that, oh, great. Like I’ve been accepted. I’ve been accepted on the course.

And so kind of someone thinks that I’m not completely dumb, right? I’m not completely useless. And that’s helpful, but also really enjoying the work.

So I felt like I’ve got that other kind of string to my bow that I’m doing. And then the final thing that was unexpected, particularly unexpected, but great, was that I rediscovered some of the policy work that I’d done back in government and back when I was at the foundation. And I’ve really enjoyed bringing that policy then much more back into my work and applying it to AI as well.

And I forgot how much I liked all of this. And when I talked earlier about how I’m thinking of governance as kind of society-level questions, not just organizational-level ones, that’s also, I think, really where that rediscovery came from as well. Now, actually, I really like that.

So those three elements really, when times were really quite tough, they really helped get me through.

Tegan Trovato
So what I’m hearing then is your ability to act despite your doubts or despite the ambiguity that you’re just naturally in from changing technically industries or roles, supportive partner, big box if we’re lucky enough to have that, but others may have to find friends, family, right? But having that human support and cheerleading, yes, that’s a big yes. Second your masters, you were staying rooted in other dreams that you had that had been long held.

And also it aids in your work, which is always good, ideal if we’re going to invest that much in all those ways. And then also hearing like you might have transitioned into another role and another focus area, but you did keep your hand in things that were known to you and that were your strengths. And I’ll say as an executive coach, we always encourage our senior leaders to stay rooted in their strengths and perform from that, not to try to solve for too many gaps at a certain point in our career, right?

So that’s exactly what you’re doing here. It makes a ton of sense. And I can hear the peace coming through your voice when you’re like, yes, this is the thing I know that I’ve known for so many years.

And I found a way to pull that in and stay rooted in that. So I’m re-saying these things back to you for the sake of our listeners. It sounds so structured and sensible when you say it.

It’s almost like it was planned. Ollie, it also sounded so lovely when you gave it narrative context. I mean, that’s what people remember is our stories, right?

But I do want our listeners thinking, because we are, and we will continue to be in such a very volatile and ambiguous time in the world in general, and certainly at work. And so when we don’t have a ton of clarity, it is our responsibility to create our anchors and to create as much support as we can in our lives. And that’s exactly what you just demonstrated here.

So thank you so much.

Oliver Smith
No, you’re welcome. It does feel like there is a coming together of the transitions. I know we said the theme is going to be transitions, but I genuinely think it is because the transition that AI is allowing us to go through as a society.

I’m not going to say it’s bringing it to us, because that suggests that AI is somehow separate from society, but it’s a core part of society. And it’s our choice as to how we use it, but it’s allowing us to go through more transitions. We’ll come down to the personal level as well, and we’ll probably end up having to go, a lot of us, many of us, through personal transitions, certainly in work, either big or small.

And what you said, I think, really nicely encapsulates what I do think people need to do, in my experience as well, of using AI. And there was some initial research, and the jury’s, I think, still somewhat out on this, but there was some initial research that suggested that AI might help to level the playing field. And so that the people that were, let’s say, below average performance, if they used AI, that they would become above average.

And people who were already above average wouldn’t actually see much gain. But I think it’s actually very dependent on the task people are doing. And there’s other evidence suggesting that the people who make the best use of AI are the people who know the most, because they’re able to understand which questions to ask.

They’re able to understand when it’s telling them rubbish, or whatever it might be. And actually, that feels to me, much more my experience. It also does feel that it’s the common thread through a lot of technological change over centuries, that it’s the people that have the most advanced skills that do the best.

So I think your point about leaning into what your strengths are, I think all human strengths will still be necessary, whether that is kind of conceptual work, whether that is physical work, whether it’s caring work.

It’s still going to be super necessary because the economy is basically an economy of humans and what our desires and wants are, and we’re still going to want all of those things, right? So I think there’s still going to be a need there, but leaning into what people’s strengths already are, and then kind of finding the way that they can use AI to make best use of those. I think that’s going to be the key for all of us, but you’re right.

It is going to be emotionally tough. So kind of finding what gives you energy, finding where you can get your support, I think is going to be really, really important for all of us.

Tegan Trovato
What’s coming up for me, as you talk about the different personal experiences we may have with tech, is our own firm. I have watched AI for a couple of years, and we have found amazing ways to augment operational workflows with it and take administrative lift off our team. But where we were really able to imagine more, in terms of your earlier point about where we add value and how we leverage the tech to add more value for our clients, I had to start using it, right? So as a CEO, it really does start with us.

We have to understand what it can do at a base level before we can imagine what it can do for our business. We could pay consultants to come tell us, but I don’t think we’re rooted in the true potential of the tech unless we use it. And the later adopters, I’m finding, are the more senior CEOs that we work with.

So if I may ask, what advice would you give our senior leaders when it comes to really getting engaged with the technology personally first?

Oliver Smith
As you said, you’ve got to use it. Ask it to do the things that you do every day and the things that you value, and you need to experiment a bit. I’m not the only person who says this.

So this is not magic advice, I’m afraid, but I think it’s still true. Use it in your day-to-day work, and try to use it to help with some of your thinking work, because as a leader, that’s probably where you feel you add the most value: where you’re thinking things through. Try to use it as a thought partner. The extra bit of advice I would give is this: before you use AI, write down what you think yourself, just a quick sketch. What do you think the answer might be? The importance of that is, first, that you’ll get your mind into it and you’ll probably ask the AI better questions. But also, whatever the AI gives you, if that’s the first thing you see, you will anchor to it, and it will be more difficult to think about other possibilities.

So if you first write down, “I’m thinking this through; here’s a very back-of-the-envelope answer: I think it’s this,” that will get you into the right thinking space and also stop you from being anchored. That, for me, helps you make the best use of the AI. But it’s only when you really start trying to use it for those sorts of thinking moments that you’ll understand the value of it: where it can help, but also where you think it’s not very good at all.

Whereas if you’re just using it to create an image for a presentation, that’s great, and obviously do that, but you’re not going to see both the potential and the pitfalls of it.

Tegan Trovato
Yeah. I worked with a client, a CEO who was in his third year with me of preparing to present to the board his achievements for that year and his forward focus, his personal goals for the following year. So we had three years of qualitative data, three years of reflection that he had presented to the board, three years of looking ahead. We decided to run it through AI and ask what the themes of his accomplishments were over the past three years.

And it was brilliant, and it saved us hours of brain work and synthesizing. I’m sharing that in case senior leaders are listening. You do want to understand where your information is going, and you may need to take names and things out of your data; let’s be smart about what we’re putting in, especially as executives. But wow, the time savings and the high-quality output. To be able to say, “Hey, here’s my plan for this year, and here’s a reflection on the last three years,” in a one-page synthesis that was perfectly board-appropriate. We didn’t need it, but it was a value add, and it was really cool that it was produced in seconds.

So there’s fun ways to use it, I think, like that.

Oliver Smith
I think it’s fantastic for things like that, where you want it to synthesize some data. To give you an example where it hasn’t been very good: recently I’ve been doing some futures-thinking work, kind of driven by geopolitics and AI and what could happen, and it has really struggled. I’ve really struggled to get good, insightful responses from it.

And this, of course, is partly where the fear comes in: am I just doing it wrong? Maybe there’s a magic prompt, and if I just put that magic prompt in, it’ll get the right answer. But that isn’t my experience.

It’s also not the experience of the people I’ve been working with on this, who say, “Yeah, but that’s just a bit generic, and of course we knew that; we need to think about it differently.” So much of forward-looking work, I find, is about the context you’re working in, because the AI can’t know that.

And even if you write a bit of it down, it still won’t really encapsulate it. So you need to really lean on what you know and what you feel and what you experience, and that will help you ask the right questions. Yes, of course you can use the AI, but for that kind of work I find it gets you an okay document that you can maybe then work with.

I’ve had this experience sometimes with people who are working with me and my team, where you think, okay, it would probably be faster if I did it myself, but I also need to develop this individual, and that’s part of their growth. And it’s kind of a joyful thing to see people grow and to go through that with them. I don’t really feel that with an AI, because it’s just an AI, right?

I don’t feel like I need to somehow make ChatGPT better for OpenAI. And so if it’s going to be quicker for me to do it myself, I’ll just do it myself.

Tegan Trovato
Yeah, that’s fair. You know, as I’m thinking about the things we may ask and the outputs we may get, a question comes up for me, and I’m curious for your thought here: when we get the response from AI, ask it, “How well did this address the human or emotional elements of the query?” Simple, right?

I mean, because it’s not going to do that on its own. If we’re using it for data synthesis, you’re good. But if we’re putting in something human or leadership-oriented, even business-facing or customer-facing thinking or questions, we should reevaluate the answer, because a lot of the time we will love the intellectual answer.

It will speak to us. And we’re like, that is so smart. That’s what I needed.

But then did we overlay the emotional wrapping that we need for the delivery of the thing? Does that make sense?

Oliver Smith
It does. It also sparks another thought: another good practice when you’re using AI is to challenge back and say, “Well, have you thought about it this way? What about this?”

Because LLMs are very good at giving you a confident-sounding answer. But if you push back and say, “That’s not right,” or “What about this?” or “Are you sure?”, sometimes they’ll give you a totally different answer, or enough of a different answer that you think, well, you can’t have been that confident in the first place. And of course you recognize that it’s just a probabilistic machine.

So of course it’s going to be like this. You also see the personalities they’ve been fine-tuned with, which is really funny, because Claude is always very apologetic: “I’m so sorry about that.” With the others, it’s not quite the same.

So it’s funny that they’ve had these personalities injected into them, but it’s definitely good practice to push back and not take the first answer. And I really like your question: have you thought about the human elements?

Because that’s something that often doesn’t come through actually in their answers, you’re right.

Tegan Trovato
It’s why we are here. It’s still important as humans.

Oliver Smith
We’re still valuable.

Tegan Trovato
Yes. We must emote. Very good.

So Ollie, thank you. It’s been such a stimulating conversation. Our listeners are going to get a ton out of this.

As we are closing here, I wonder if there’s a last piece of advice you might give, knowing that we have a lot of senior leaders listening to this episode.

Oliver Smith
We talked a lot about transition, and I think that’s actually a good place to end, because leaders always have transitions that they face. We’re living in interesting times, which I believe is what we’re supposedly meant to wish for.

And it certainly seems to be coming true: there are lots of transitions happening. And I think that as leaders, we need to embrace those transitions.

And I think leadership really is about embracing transitions and leading people through them. Doing that well isn’t just going through a transition intellectually; I think it’s also about your health, right?

It’s also going through it emotionally. And as a leader, I think you need to ensure that you have the support in place so that you can effectively go through the emotional transition, not just the intellectual one. And that, for me, is the real joy of leadership.

It’s when you’re going through a transition and it’s hard, but you get through it with your team, not just intact, but having grown. That is the real joy of leadership.

Tegan Trovato
So well said. And thank you for reminding us of the value of the human experience as we close. We’re not going to get all of that from tech.

We can get there by using tech as part of our equation, but being human and being in relationship is so often the value and the reward. So thanks for reminding us of that. Ollie, thank you.

Oliver Smith
Thank you so much. Real pleasure.

Tegan Trovato
As Oliver reminded us today, leadership isn’t just about navigating change. It’s about embracing transitions with both intellectual curiosity and emotional resilience, whether it’s the rapid evolution of AI or a personal career shift. The key is to stay grounded in what you know, lean into your strengths and be intentional about how you apply new technology.

As you think about how to integrate these takeaways into your own leadership journey, ask yourself, are you experimenting with AI in a way that enhances your unique value? Are you ensuring that ethical considerations are at the core of your decision making? And most importantly, are you giving yourself the support you need as you move through transitions with confidence?

Thanks for joining us for this episode of the Life + Leadership podcast. If you found this conversation valuable, be sure to subscribe on Apple Podcasts, Spotify, or YouTube Music, and share it with a fellow leader who could benefit. We’ll see you on the next episode.
