
Coaching for Leaders

Leaders Aren't Born, They're Made

Episode

674: Principles for Using AI at Work, with Ethan Mollick

Assume this is the worst AI you will ever use.
https://media.blubrry.com/coaching_for_leaders/content.blubrry.com/coaching_for_leaders/CFL674.mp3

Podcast: Download

Follow:
Apple Podcasts · YouTube Podcasts · Spotify · Overcast · Pocket Casts

Ethan Mollick: Co-Intelligence

Ethan Mollick is a professor of management at Wharton, specializing in entrepreneurship and innovation. His research has been featured in various publications, including Forbes, The New York Times, and The Wall Street Journal.

Through his writing, speaking, and teaching, Ethan has become one of the most prominent and provocative explainers of AI, focusing on the practical aspects of how these new tools for thought can transform our world. He's the author of the popular One Useful Thing Substack and also the author of the book, Co-Intelligence: Living and Working with AI*.

Whether you’ve used it or not, you’ve heard that AI will transform how we work. Given how quickly the technology is changing, how do you start and, if you’ve started already, what’s the way to use it well? In this conversation, Ethan and I discuss the principles for using AI, even as the technology changes.

Key Points

  • GPT-4 is already scoring in the 90th percentile on the bar examination, acing AP exams, and even passing the Certified Sommelier Examination.
  • Always invite AI to the table. It may be helpful, frustrating, or useless — but understanding how it works will help you appreciate how it may help or threaten you.
  • Being the “human in the loop” will help you catch where AI isn’t accurate or helpful. Zeroing in on areas where you are already an expert will help you appreciate where AI is useful and where its limitations emerge.
  • Treat AI like a person, but tell it what kind of person it is. It’s helpful to think of AI as an alien person rather than a machine.
  • Assume this is the worst AI you will ever use. Embracing that reality will help you stay open to possibilities for how you can use AI to do your work better.
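The “treat AI like a person, and tell it what kind of person it is” principle maps directly onto the chat format most AI tools expose: a context-setting first message establishes the persona, and a second message carries the actual task. A minimal sketch in Python; the helper name and wording are illustrative, not from the episode or any particular vendor’s API:

```python
def persona_messages(persona: str, task: str) -> list:
    """Build a chat-style message list (hypothetical helper for
    illustration): the first message tells the model what kind of
    person it is, and the second carries the actual request."""
    return [
        {"role": "system",
         "content": f"You are {persona}. Give concrete, specific answers."},
        {"role": "user", "content": task},
    ]

# Mollick's example persona from the conversation:
messages = persona_messages(
    "an expert marketer who works at KPMG",
    "Suggest three taglines for a leadership podcast.",
)
```

The same message list can then be handed to whichever chat model you use; the point is the context-setting first message, not any particular API.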

Resources Mentioned

  • Co-Intelligence: Living and Working with AI* by Ethan Mollick

Interview Notes

Download my interview notes in PDF format (free membership required).

Related Episodes

  • How to Build an Invincible Company, with Alex Osterwalder (episode 470)
  • Doing Better Than Zero Sum-Thinking, with Renée Mauborgne (episode 641)
  • How to Begin Leading Through Continuous Change, with David Rogers (episode 649)

Discover More

Activate your free membership for full access to the entire library of interviews since 2011, searchable by topic. To accelerate your learning, uncover more inside Coaching for Leaders Plus.

Principles for Using AI at Work, with Ethan Mollick


Dave Stachowiak [00:00:00]:
Whether you’ve used it or not, you’ve heard that AI will transform how we work. Given how quickly the technology is changing, how do you start? And if you’ve started already, what’s the best way to use it well? In this conversation, the principles for using AI even as the technology changes. This is Coaching for Leaders, episode 674. Production Credit: Produced by Innovate Learning, maximizing human potential. Greetings to you from Orange County, California. This is Coaching for Leaders, and I’m your host, Dave Stachowiak. Leaders aren’t born, they’re made.

Dave Stachowiak [00:00:44]:
And this weekly show helps you discover leadership wisdom through insightful conversations. One place that more and more people every day are looking for wisdom is through conversations with AI. AI, of course, has captured the news and our attention with its technological advances. What we have not yet figured out is how we all work with AI. Today, a conversation, the first of many, I’m sure, on how we begin thinking about some of the key principles, at the meta level, to be able to use AI and to do it well at work. I’m so pleased to welcome Ethan Mollick to the show. He is a professor of management at Wharton, specializing in entrepreneurship and innovation. His research has been featured in various publications, including Forbes, the New York Times, and the Wall Street Journal. Through his writing, speaking, and teaching, Ethan has become one of the most prominent and provocative explainers of AI, focusing on the practical aspects of how these new tools for thought can transform our world.

Dave Stachowiak [00:01:49]:
He’s the author of the popular One Useful Thing Substack and author of the book, Co-Intelligence: Living and Working with AI. Ethan, I came across a quote recently from Bill Gates from a while ago: “Most people overestimate what they can do in 1 year and underestimate what they can do in 10 years.” Thinking about that quote, so much has happened with AI in the last year. However, it really is still very much at the beginning here, isn’t it?

Ethan Mollick [00:02:19]:
Yeah. I mean, I think no matter how you look at it. If you look at it from a technology perspective, as we’re recording this conversation, a new model that beats the current state of the art was released just a couple of hours ago. But even if we don’t take into account technology and we just think about social change, even if the technology doesn’t develop anywhere past where it is today, we probably have another 5 to 10 years of just absorbing the impact of AI before we really see its full potential.

Dave Stachowiak [00:02:46]:
Speaking of social change, the very first line you write in the introduction is this. “I believe the cost of getting to know AI, really getting to know AI, is at least 3 sleepless nights.” Tell me about that.

Ethan Mollick [00:03:01]:
So I think your audience is probably divided into 2 groups of people. People who’ve had a chance to really experience AI, probably using the most advanced models available, which is either GPT-4, Claude 3, or Gemini Advanced. You have to pay for all of them, essentially. And they’ve been using this a lot and kind of have a sense of what this does. Or they have used only a few hours, or less than an hour, on an AI system, usually one of the free versions like free ChatGPT. And they kind of don’t understand what the big deal is. I think that it is very clear for most people, when they use the most advanced systems for long enough, they have a moment of revelation, which is like, wow, this does a lot. Like, it does a lot of my job.

Ethan Mollick [00:03:40]:
It does a lot of the tasks that I couldn’t do before. It feels like I’m talking to a person even though I’m not. And that tends to create a sense of crisis. Right? What does it mean for us to think? What’s it mean to work with a machine? What job am I gonna do later? What job will my kids do? And that’s the 3 sleepless nights.

Dave Stachowiak [00:03:57]:
Yeah. It really is, as you get into this, it’s really stunning how much has happened in just a short period of time. And I’m gonna quote one small paragraph you write in the book, which I think illustrates this really profoundly. “GPT-4 scored in the 90th percentile on the bar examination, while GPT-3.5 managed only the 10th percentile. GPT-4 also excelled in Advanced Placement exams, scoring a perfect 5 in AP Calculus, Physics, US History, Biology, and Chemistry. It even passed the Certified Sommelier Examination, at least the written portions, since there’s no AI wine-tasting module yet, and the qualifying exam to become a neurosurgeon. And it maxed out every major creativity test we have.” It’s remarkable, isn’t it?

Ethan Mollick [00:04:41]:
It really is completely unexpected, including for the people who made these tools. I think all of us are very surprised about how much this can accomplish.

Dave Stachowiak [00:04:51]:
This is one of the reasons I haven’t talked much about AI on the podcast yet: it’s so new, and we don’t know very much about all of the technological implications. Of course, we don’t know a lot about exactly how this is gonna play out in the future. But one of the things that’s starting to emerge, and why I appreciate your work so much on this, is some broad principles for how leaders and individuals show up and utilize AI in the workplace today. And you’ve identified some really key principles that serve as a guide on that. And one of the things that you use as an analogy, which I think is really helpful, is the jagged frontier of AI. I’m wondering if you could share a bit of that analogy and how it’s helpful in thinking about this.

Ethan Mollick [00:05:41]:
Sure. So we did a large-scale experiment at Boston Consulting Group, which is one of the big elite consulting companies, with a bunch of colleagues at Harvard, MIT, University of Warwick, and elsewhere. With the help of BCG, we took 8% of their global workforce and had them use AI to do realistic business tasks. We created 19 tasks that were all realistic, and the people who used AI saw huge improvements in performance: a 40% improvement in the quality of their work, 26% faster, 12.5% more work done. But we also created one task on purpose that the AI couldn’t do, but looked like it could do. And the people doing that task actually made more mistakes using AI, because they fell asleep at the wheel, as we call it. They stopped paying attention because the AI seemed right even though it wasn’t. And the problem is that knowing what the AI is good or bad at is challenging.

Ethan Mollick [00:06:34]:
That’s the jagged frontier. The AI is good at some things you wouldn’t expect and bad at some things you wouldn’t expect. So for example, it struggles to write a 25-word sentence because it doesn’t see words the way we do. It sees tokens. But it can write an amazing sonnet. So how do you deal with a system that can write a great sonnet but can’t write a 25-word sentence? The only way to figure this out is to use it enough to understand what it’s good or bad at, to understand the shape of that jagged frontier of its abilities.

Dave Stachowiak [00:06:59]:
This is a lead into one of the principles that you invite us to consider, which is to always invite AI to the table. What do you mean by that?

Ethan Mollick [00:07:11]:
So the P in GPT stands for pre-trained. The AI already knows many, many, many things about the world. It’s better at medicine than many doctors. It’s better at law than many law students. And so the question is, okay, given this set of stuff, how do you know what it can do for you? And how do you know how to make it do those things? The answer is that you need to experiment. So the only way to experiment is to bring AI to everything you legally and ethically can. Go into a meeting, have it listen in and give you feedback. Wanna generate ideas? Work on ideas with it.

Ethan Mollick [00:07:41]:
Telling your kids a bedtime story? Have it generate a story for you and see how it is. You don’t have to use all of its answers, but that’s how you figure out what it’s good or bad at doing.

Dave Stachowiak [00:07:49]:
You write, “experimentation gives you the chance to become the best expert in the world in using AI for a task you know well. The reason for this stems from a fundamental truth about innovation: it is expensive for organizations and companies, but cheap for individuals doing their job.” The nudge I’m hearing here is, like you just said, whether you use it or not, whether you integrate it in your work or not, just actually engaging the technology and starting to see how this shows up in my daily work and my life gives you insights that then broaden your understanding of this and give you a better seat at the table as this becomes more prolific in the work we’re doing.

Ethan Mollick [00:08:33]:
Absolutely. I think you’ve nailed it. I mean, there’s no instruction manual out there. I’ve talked to all the AI companies. No one knows exactly what these systems do and don’t do. So it really requires you to do experimentation to learn that for yourself.

Dave Stachowiak [00:08:47]:
Is there any… I’ve seen all these posts online of, like, okay, here’s how you do the training, here are the questions you ask about AI. Have you found any tools that are helpful? Like, for those who have not done this yet or not really gotten into experimenting with AI, what’s helpful as a starting point to think about, like, okay, where would I begin?

Ethan Mollick [00:09:09]:
So, I mean, there’s a lot of starting points. A lot of the stuff that people tell you to do is probably overcomplicated. It doesn’t always work. So a lot of the viral tips out there are very conditional. They don’t work all the time; they work in limited circumstances. So you might have heard, you know, that offering the AI a bribe helps. It does, actually.

Ethan Mollick [00:09:29]:
Telling it you wanna pay it money, that you’re gonna tip it, helps, but not all the time, only under some circumstances. So it’s not worth worrying about that that much. Instead, what you wanna do is start conversing. My second principle in the book is treat the AI like a person and tell it who it is. AI is actually not best used by coders. The best people prompting AI are managers, teachers, parents, people who understand how to work with a person. The AI is not a person, but it acts like one. So tell it what kind of person it is.

Ethan Mollick [00:09:56]:
Give it context. You are an expert marketer who works at KPMG, or whatever else you wanna do, and start interacting with it that way. Just go back and forth, like your manager giving you feedback, and you’ll learn what it’s good or bad at. And you’ll learn its personality, as it were, even though it doesn’t really have a personality the way a human would, because it doesn’t think; it’s not alive. But it effectively has one, and that makes you much more effective as a prompter.

Dave Stachowiak [00:10:20]:
Yeah. And that’s one of the principles you invite us to think about: treat AI like a person, with the caveat, tell it what kind of person it is. And you point out in the book that a number of researchers are concerned about anthropomorphizing AI. What’s the concern about that, first of all, before we talk about how to do it?

Ethan Mollick [00:10:41]:
Alright. So before we do, we’ll talk about it. So there’s a few reasons. One is a philosophical reason, which is that the AI is not a person, so treating it like a person is dubious. Right? The second reason is more practical. If you treat it like a person, then you’re more likely to be taken in by AIs pretending to be people, and you may not have your critical thinking faculties in place, and you might start to make mistakes as a result. So there are risks associated with anthropomorphization.

Dave Stachowiak [00:11:10]:
And you write, working with AI is easiest if you think of it like an alien person rather than a human-built machine. So you’re going down this rabbit hole of anthropomorphizing a bit, but you’re thinking about it through a different lens than we would think of having a conversation with a person.

Ethan Mollick [00:11:30]:
Yeah. I mean, again, to be very practical: if you use AI and you treat it like a person, you will get much further with it than if you try to learn a bunch of machine rules. And I’ve seen this time and time again; the best prompters are not computer scientists. They are people who figure out how to give the AI instructions and talk to it. And so even though that isn’t an accurate model of how the AI works (it’s not a person, it’s not thinking, it doesn’t know things, it doesn’t have feelings), it acts enough like it does that you might as well consider that practically the case.

Dave Stachowiak [00:12:03]:
And this comes back to what we were talking about a bit ago, of experimenting a bit. The fact that a lot of us out here don’t have the coding knowledge, the AI knowledge as far as exact prompts, in some ways actually works to our benefit, because, of course, these models are trained on human language, and that actually can be helpful.

Ethan Mollick [00:12:27]:
I mean, they’re trained on our collective human heritage. So the more you know about the collective human heritage, the better off you are. Right? So humanities majors often do quite well talking to AI. People who work with people do quite well talking with AI because it works in that kind of way. It emulates a person.

Dave Stachowiak [00:12:44]:
Yeah. Yeah. It’s fascinating. Okay. One of the other things I am curious about, and you alluded to this a bit ago, is, like, having AI listen in on a meeting or listen in to a bedtime story you might be telling a child. How do you think about the privacy implications of this right now? Like, what should we be thinking about when we’re inviting AI into a conversation where someone may say something sensitive or may not want something to be out on the Internet? How do you think about that?

Ethan Mollick [00:13:12]:
Alright. So there’s technological kinds of questions and ethical questions. On the ethical questions, if you’re recording anyone, you should probably let them know you’re recording them, right, and get their permission to do so. So that’s a separate issue. On the issue of what AI keeps private or not, I think people are overly concerned about privacy for AI in a way they aren’t about other systems. And it makes sense initially, but there are now a lot of privacy solutions out there. So if you have ChatGPT, there’s a button you can push in your settings that turns your chats private. If you pay $30 a month instead of $20 a month, you get privacy by default.

Ethan Mollick [00:13:48]:
If you contact Microsoft or Anthropic, they’ll happily sell you a HIPAA-compliant and FERPA-compliant version of AI. So the privacy tools are out there right now, and that shouldn’t be the barrier to using it, because there’s lots of options for privacy. Additionally, the way the AI learns information is not like an exact copy. It’s not saving your information to a database and then recalling it from that database. It’s learning patterns. So the chance that you could pull back any specific information about you from 1 or 2 interactions is almost nil. But the real thing is, you should absolutely consider privacy and ethics.

Ethan Mollick [00:14:22]:
You have to make some decisions about it. But there are now a lot of choices about what you want to do in those cases, which makes it not that hard for people to find solutions that work for them.

Dave Stachowiak [00:14:31]:
Fascinating. I’m gonna get into the ChatGPT settings and start looking at that; I didn’t even realize that was a piece. And I think that there’s so much we’re all learning about this. Obviously, if we’re having, like, a secret meeting about the most critical thing for the organization and no one’s supposed to know about it, maybe that’s not the meeting you record. Right? But there’s a lot of practical benefit to actually starting to involve this. It gets back to that invitation of bringing this to the table. And one of the other principles you cite is to be the human in the loop with this. What do you mean by being the human in the loop?

Ethan Mollick [00:15:10]:
So that’s an idea from control systems, from how we control machines: you don’t want autonomous systems making decisions without you being involved. So it comes from the military and from NASA, where you wanna make sure there’s a chance for a person to intervene. But I think the general principle holds more widely also, which is the idea that you wanna make sure that you remain in control of something. And so as the AI gets better and better and does more and more of our work and is more and more capable, you wanna think about where you add value. Right now, when people use AI systems and we survey them, they’re both really happy and nervous. Nervous for the reasons you might expect, that their job might be replaced. Happy because the AI does their most boring work, which makes their work better. So whatever you’re best at right now, you definitely beat the AI.

Ethan Mollick [00:15:56]:
All of your listeners are the top 1% in something, and whatever they’re the top 1% or 10% of, they beat any AI system out there. But the AI might be able to do a lot of other parts of your job better than you do, probably parts you don’t wanna do. So you wanna maintain being the human in the loop by not just understanding where AI is going and how you’ll keep adding value, but also thinking about how you concentrate on the things you want to do, rather than just doing all the parts of your job whether you could do them or not. Or, what can you outsource to AI?

Dave Stachowiak [00:16:23]:
And you point out that hallucination is a problem with AI and that AI will often justify wrong answers when it does hallucinate, doesn’t it?

Ethan Mollick [00:16:35]:
I mean, hallucination is the idea that the AI makes stuff up. Right? It doesn’t actually know anything about the world. It is just making stuff up as it goes along. There’s no information there. Everything you’re seeing is an illusion. Right? It’s just amazing that it happens to be right as often as it is. So hallucination rates, this error rate, this confabulation rate, is dropping over time. In an older study, the free version, ChatGPT 3.5, had a hallucination rate of 80% on medical citations.

Ethan Mollick [00:17:04]:
GPT-4 dropped that to 20%. When you start adding web search, like all the AIs currently have, it drops even further. So hallucination is less of a problem than it was. Sometimes it hallucinates much less than people. The thing that makes hallucination difficult is that it’s very plausible. The answers the AI gives you feel correct, even if they’re not. And that can cause problems when you’re doing things. So you have to be aware of hallucination, which is part of why you want to start by being the expert in the room and using it for things you already know.

Ethan Mollick [00:17:29]:
So you can understand when it’s working and when it isn’t.

Dave Stachowiak [00:17:32]:
You write, “being good at being the human in the loop will mean that you will see the sparks of growing intelligence before others, giving you more of a chance to adapt to coming changes than people who do not work closely with AI.” This gets back to the theme I’m really hearing in this conversation: the both/and of yes, AI, and yes, people together. As you’re learning from the system, the system’s learning from you. But you’re also seeing things: if you’re in the conversation and starting to utilize these tools, you’re gonna see these things before other people do. You’re gonna be the one asking the questions, grappling with this, and hopefully at the forefront of deciding how we all utilize these technologies going forward.

Ethan Mollick [00:18:16]:
Yeah. I really like the point you’re making because it is important to realize that we have agency over our future. Right? We tend to view AI as something that is done to us and not something that we have control over. But inside your organization, you get to decide how AI is being used, whether it’s being used to empower people or to disempower them, whether it’s being used to create thriving or not, whether it’s used to create productivity or not. So using it gets you to a place where you can make those decisions.

Dave Stachowiak [00:18:42]:
One of the other key principles you cite: “assume this is the worst AI you will ever use.” And you write, “by embracing this principle, you can view AI’s limitations as transient, and remaining open to new developments will help you adapt to change, embrace new technologies, and remain competitive in a fast-paced business landscape driven by exponential advances in AI.” You said a bit ago that, even assuming the technology doesn’t advance, there’s so much for us still to unpack in what’s already happened. But this is gonna get better, isn’t it? I mean, we’ve just seen it in the last year.

Ethan Mollick [00:19:19]:
Yeah. I think there’s every sign that this is going to get better. I know enough about what’s happening in the world to think that this is not stopping anytime soon, so I think we need to be ready. And I think if people are saying things are not progressing, they’re just not paying attention.

Dave Stachowiak [00:19:33]:
For the person listening who has heard all the news, they’ve seen the headlines, but they haven’t really started with AI yet. Or maybe they put in a prompt here or there on ChatGPT when it came out a year ago, but they haven’t really done much more with it. What’s a first step for someone in that situation, who’s dabbled a little bit, that would be a good push toward a place where they’re thinking about this as a both/and?

Ethan Mollick [00:20:02]:
I mean, that’s where I’m back to giving the advice again: I would strongly recommend starting with AI for things you know well and pushing it by using that treat-it-like-a-person principle. So I think you start with it at work, with what you legally and ethically can do at work; that’s where you get the best sense of what it can do or not. Like, you have to use it in areas of your expertise, not outside of it.

Dave Stachowiak [00:20:24]:
That’s a key distinction. The things you know really well, those are the questions to ask.

Ethan Mollick [00:20:31]:
Yeah. Because then you can decide how good it is, and you can come back and see how far it’s getting and whether it’s getting closer to, you know, your level. You can just figure out what it’s good for and what it’s bad at for you. It’s very cheap, as we pointed out earlier, to experiment in an area that you know well. It’s very expensive to experiment in an area that you don’t. If I’m trying to create recipes with the AI and I’m not a good cook, I can’t look at the recipe and say whether this is a good recipe or not. So I could be easily deluded. But if I’m asking it for business plans, as a professor of entrepreneurship, I can tell instantaneously whether or not it’s heading in the right direction.

Ethan Mollick [00:21:03]:
Right? So it helps to actually use it where you know most, and then go back and forth: say, actually, that was wrong; improve section 3; do this. And then you’ll see a lot better results.

Dave Stachowiak [00:21:13]:
As we’ve mentioned, there’s so much that’s changing on this; undoubtedly we’ll have many more conversations on the show in the future on AI and utilizing AI. But I am curious, Ethan: as you’ve written this book, as you’ve been studying this in the last, say, 6 months, as you’ve been finalizing the book and working with your students and organizations, what’s one thing you’ve changed your mind on in relation to AI?

Ethan Mollick [00:21:42]:
I thought that people would be freaking out more by now, to be honest. Like, for example, AI does all of our homework. I mean, all of it. Like, I teach at an Ivy League school, and there is almost no assignment that GPT-4 can’t solve right now. And I thought that would radically change how we approach homework, because all homework becomes invalid. It has not yet. People are still sort of treating it like this is something that is, you know, nothing to worry about right now. There’s time to consider it.

Ethan Mollick [00:22:11]:
I’ve seen the same thing in organizations. I’m shocked by the number of organizations I go to where people are really just listening to podcasts and, you know, hopefully reading a book, but they’re not trying stuff. And I think people have to realize there’s some urgency here to get this right, because other people are experimenting. And by the way, the GPT-4 model, the best available model in the world, is available for free to people in 169 countries around the world through Bing. So every business in Mozambique and Sri Lanka and Uganda has access to the same model that Goldman Sachs has access to. And I think we need to be thinking about that in a much more innovative way, rather than just passively.

Dave Stachowiak [00:22:49]:
Ethan Mollick is the author of Co-Intelligence: Living and Working with AI. Ethan, thank you so much for your time and for your work.

Ethan Mollick [00:22:58]:
Thank you for having me.

Dave Stachowiak [00:23:05]:
Of course, there will be more coming on AI here on the podcast in the coming months and years. And as the technology is changing quickly, it’s helpful for us to remember how to approach innovation and change well. Three episodes are good reminders for all of us on how to frame this well. One of them is episode 470, How to Build an Invincible Company. Alex Osterwalder was my guest on that episode, one of the key leaders at Strategyzer. Their firm has done incredible work over the years to help leaders and entrepreneurs do a better job with strategy, big-picture thinking, and, of course, innovation. In that conversation, Alex and I looked at some of the myths of innovation, the common myths many of us have heard. I’ve bought into some of those myths in the past too.

Dave Stachowiak [00:23:51]:
It’s an overview of how we can do a bit better when change is happening, to be able to think about what the next thing is and actually move on it. Episode 470 for that. I’d also recommend episode 641, Doing Better Than Zero-Sum Thinking. Renée Mauborgne was my guest on that episode. We talked about the iconic book, Blue Ocean Strategy, and also the concept of nondisruptive creation and how we can do better than just thinking zero-sum. And that is the tendency we often go to when we’re thinking about new technology, something that may replace work: we think, well, it has to be either that or it has to be me, or there’s no room for anything else. And Renée really nudges us in that conversation to think about how we can actually get way beyond that. So many opportunities today to do that and to do it well. Episode 641 for that.

Dave Stachowiak [00:24:43]:
And then I’d also recommend episode 649, How to Begin Leading Through Continuous Change. David Rogers was my guest on that episode; we talked about his work on digital transformation and the incredible research he’s been doing on that. And the broader challenge that so many of us are facing as leaders now, which is that change used to be an event, years ago when I first learned about the scholarship of change. Now it’s continuous. So many of us are leading change continuously in our organizations, with many change efforts happening at once. Of course, technology continues to drive that too. Episode 649 is a framework for how to lead during continuous change.

Dave Stachowiak [00:25:26]:
David and I really went into that in depth. All of those episodes, of course, you can find on the Coaching for Leaders website inside of our free membership. And if you haven’t set up a free membership, I’m inviting you to do so now, because it’s gonna open up doors to a whole lot more. All of the episodes that I’ve aired since 2011 are freely available on all the public apps and directories for podcasts. However, what you can’t do is search easily by topic on many of the apps and directories. And so we’ve made it easier for you within the website to find exactly what you need, whether it is under technology or strategy or coaching or whatever is important for you right now. And the best way to access that is just go over to coachingforleaders.com and set up your free membership. It’s gonna give you the ability to search by topic.

Dave Stachowiak [00:26:13]:
The other thing it’s gonna give you is access to all of my weekly guides. Once a week, I send out a weekly guide to you by email. It has links to all the recent episodes, of course, the resources we’ve mentioned in conversations, the related episodes I just mentioned, and also the things I found in the news over the week that I think could be useful to you: articles from the Wall Street Journal and Harvard Business Review and the New York Times, the things I think you should be reading that’ll help continue to drive your leadership development forward. If that’s important to you, go to coachingforleaders.com and set up your free membership. And if you’ve been a free member for a bit, I hope you’ll consider Coaching for Leaders Plus. It opens up a bunch more. One of the benefits is a weekly journal entry from me. I am often responding to a question or sharing more perspective on something via email that’ll be helpful to you. One of our members asked me recently, how do I do onboarding well? I put together 3 frameworks we have talked about on the podcast over the years that I think almost everyone should be doing in onboarding, following them step by step.

Dave Stachowiak [00:27:19]:
I talked about that in detail in one of our recent journal entries. It’s available inside of Coaching for Leaders Plus as is every single journal entry that comes each week, plus a whole bunch more. If you’d like to find out more, just go over to coachingforleaders.plus. Coaching for Leaders is edited by Andrew Kroeger. Production support is provided by Sierra Priest. Next week, I’m glad to welcome Lauren Wesley Wilson to the show. We’re gonna be having a conversation about how to be a better ally. Join me for that conversation with Lauren.

Dave Stachowiak [00:27:51]:
Have a great week and see you back on Monday.

Topic Areas: AI, Strategy, Technology

Coaching for Leaders Podcast

This Monday show helps you discover leadership wisdom through insightful conversations. Independently produced weekly since 2011, Dave Stachowiak brings perspective from a thriving, global leadership academy of managers, executives, and business owners, plus more than 15 years of leadership at Dale Carnegie.

Activate Your Free Membership Today

Access our entire library of Coaching for Leaders episodes from 2011, searchable by topic.
Listen to the exclusive Coaching for Leaders MemberCast with bonus content available only to members.
Start Dave’s free audio course, 10 Ways to Empower the People You Lead.
Download our weekly leadership guide, including podcast notes and advice from our expert guests.

... and much more inside the membership!

Activate Your Free Membership
Copyright © 2025 · Innovate Learning, LLC