Episode 229 – Reimagining Leadership: Human Wisdom and Machine Intelligence

Original Air Date

Run Time

46 Minutes

About This Episode

Rich Maltzman


As AI tools become more advanced, what does effective project leadership look like? We are stepping into a new era blending human insight with technological precision. In this episode, we’re joined once again by Rich Maltzman, co-author of AI-Powered Leadership: Mastering the Synergy of Technology and Human Experience, to explore how project managers can lead effectively in an AI-driven world, where human judgment and machine intelligence must collaborate, not compete.

Rich shares his perspective on the evolving relationship between AI and leadership, highlighting the importance of adopting a “both/and” mindset that integrates human expertise with AI capabilities. We discuss why context is key when working with AI, why leaders should view AI as a stakeholder or team member, and how ethical considerations should guide this collaboration. The conversation also explores how to craft better AI prompts, the value of the Delphi AI method in minimizing bias, and the impact of AI on leadership development and communication. It’s a thoughtful look at why project managers need AI fluency and how to lead in today’s tech-enhanced environment.

Rich Maltzman is a seasoned project management expert with over four decades of experience. His career includes leadership roles in telecommunications and a key contribution to establishing Nokia’s global program management office. He currently serves as a master lecturer at Boston University, where he teaches project management. Rich is also the co-founder of EarthPM, LLC, an organization focused on integrating sustainability into project management practices.


Favorite Quotes from Episode

"Leaders who hold on to old models are going to lose relevance. So, there’s an urgency here, a sense of urgency. The ones who last, the ones who endure here, and I mean leaders in general and project managers in particular, are the ones who, as we say, can choreograph the dance between human wisdom and machine intelligence. It’s not a nice-to-have, it’s a new baseline."

Rich Maltzman

"… leadership isn’t about picking a side. It’s about designing collaboration between these two things on purpose. Human insight, which most of us have, gives us ethics, empathy, and the ability to deal with tension that’s normal on a project. AI gives us speed, scale, patterns, lots of things that we can’t see on our own. But you need both, … human intelligence and AI working together …because today’s problems are too complex for intuition alone, and too nuanced for data alone."

Rich Maltzman

As AI tools become more advanced, what does effective project leadership look like? In this episode, Rich Maltzman, co-author of AI-Powered Leadership, discusses what effective project leadership looks like in an AI-driven world. He emphasizes the need for a collaborative approach between human judgment and machine intelligence, advocating for a “both/and” mindset that blends human insight with technological precision.

Chapters

00:00 … Intro
02:20 … The Both/And Approach
05:24 … Overcoming Bias
07:28 … Yerkes-Dodson Model
08:38 … “Acting As…”
11:44 … About The Wind Turbines
14:17 … Challenges with The Both/And Approach
17:07 … Ethical Intelligence
19:36 … Provide Context to AI
23:57 … AI on The Team
25:30 … Ren Love’s Projects of the Past
28:35 … Constructing AI Prompts
30:45 … The Delphi AI Method
33:55 … Overcoming AI Hallucinations
37:10 … Asking AI for Advice
41:29 … AI Training for Project Managers
45:13 … Rich’s Co-Authors
45:29 … Get in Touch
46:07 … Closing

Intro

WENDY GROUNDS:  Hello, and welcome to Manage This, the podcast by project managers for project managers.  I’m Wendy Grounds, and in the studio with me is Bill Yates.  And we are thrilled to have you with us today.  If you haven’t yet left a review for us on Apple Podcasts, Google Play, or even leaving comments on our website, we would love to hear from you.  And don’t forget you can earn free professional development units from PMI just by listening to this episode.

So today we’re talking to someone who we’ve had on the podcast twice before, and we’re very excited to welcome him back.

BILL YATES:  Yeah.  Rich is a great guest, and we can’t wait to talk with him about this topic.

WENDY GROUNDS:  Rich Maltzman is a project management expert with over four decades of industry experience.  His extensive career spans roles in telecommunications, where he served as a technical manager and contributed significantly to establishing Nokia’s global program management office.  Currently, he is a master lecturer at Boston University, where he teaches project management.  What we’re most excited about is a new book called “AI-Powered Leadership:  Mastering the Synergy of Technology and Human Experience” that Rich has co-authored, and we’re talking about that today.

BILL YATES:  Yeah, in this episode we’re going to take a look at the evolving connection between artificial intelligence and leadership, with a focus on project managers.  Think about it this way:  As intelligent machines continue to advance at a remarkable pace, is traditional leadership becoming obsolete?  How are we supposed to lead these days when we have powerful tools like AI?  Can we just let AI make all our decisions?  I don’t think so.

There’s some thought that we’re going to get into with Rich on that, how we balance human experience and human knowledge with AI technology.  This is going to be an interesting conversation.  I think we’re going to see some common themes, too, with a conversation that we had with Oliver Yarbrough.  I think it was Episode 209.  Oliver had such good advice for the perspective that we should have with AI.  We’ll get into that a bit with Rich, as well, and just think about this powerful, big-brained stakeholder or team member.  How do we use it effectively?

WENDY GROUNDS:  Hi, Rich.  Welcome back to Manage This.  It’s really a pleasure to have you back.

RICH MALTZMAN:  And it’s great to be here.

The Both/And Approach

WENDY GROUNDS: So we’re talking about “AI-Powered Leadership”, and the both/and approach that is mentioned in this book in the context of human and AI expertise.  Can you explain what this mindset is and why it’s so important?

RICH MALTZMAN:  Sure.  Well, when you treat human and AI intelligence as rivals, as we might do, you miss the real possibilities, the real advantage.  Leadership, which is part of the title of the book, leadership isn’t about picking a side.  It’s about designing collaboration between these two things on purpose.  Human insight, which most of us have, gives us ethics, empathy, and the ability to deal with tension that’s normal on a project.  AI gives us speed, scale, patterns, lots of things that we can’t see on our own.  But you need both, both of those things, human intelligence and AI working together because, especially and partially because of AI itself, today’s problems are too complex for intuition alone, and too nuanced for data alone.

The world is moving very fast.  It’s kind of a VUCA world (volatile, uncertain, complex, ambiguous), and complexity is not going anywhere.  Leaders who hold on to old models are going to lose relevance.  So, there’s an urgency here, a sense of urgency.  The ones who last, the ones who endure here, and I mean leaders in general and project managers in particular, are the ones who, as we say, can choreograph the dance between human wisdom and machine intelligence.  It’s not a nice-to-have, it’s a new baseline.

Final comment to your question.  Both/and – and this is why the cover of our book is kind of a laser beam exploding with power – both/and represents this back-and-forth with an AI, generative AI system, reflecting back and forth like a laser and yielding a more cohesive and more powerful focused outcome that you wouldn’t have got with either alone.

BILL YATES:  Rich, that’s so well put.  The thread of the both/and approach is woven throughout the book.

RICH MALTZMAN:  Yes.

BILL YATES:  And there’s consistency with it.  And I really like those key statements that you made.  It’s not about rivalry between human experience and what the AI model’s providing.  It’s about collaborating.  And it’s not an either/or.  It’s a let’s iterate, let’s refine, let’s continue to ask better questions.

RICH MALTZMAN:  Exactly.  What we’re having right now is a conversation.  And what we stress in the book, although there’s a lot of technical detail about how to do this properly, what we’re stressing, if you wanted a one-sentence digest of the book, is that AI begs a conversation and not a search engine-type behavior.  Who was the president of Mexico in 1928?  You get an answer, that’s a Google or a general search.  That is not this.  This is a back-and-forth conversation to bring cohesion, power, and focus, which is why we keep going back to this laser model.  That’s how a laser works.

Overcoming Bias

BILL YATES:  One of the things that we have to remember is that we’re all biased.  In the book, you guys talked about bias.  I thought it was a great reminder to project managers who want to lead teams.  You know, it kind of gets back to self-awareness and team awareness.  We all come in with a diversity of experience, everything from how we were raised and our background to the project experiences that we’ve had, so we come with bias. 

And, you know, you warn against that in the book, both on the human judgment side and on the AI recommendation side, which I thought was very intriguing.  That’s an area that I didn’t know much about, so I look forward to getting into that.  But walk us through some of the common types of bias that we see both in human judgment and AI, and then I think we can kind of get into, okay, how do we avoid that?

RICH MALTZMAN:  Yeah, absolutely.  So, biases in humans are a fascinating topic.  For those of you who haven’t read “Thinking, Fast and Slow” by Daniel Kahneman, it’s an outstanding book that talks about this.  I still haven’t got all the way through it.  I’m reading it carefully.  But we have confirmation bias, yielding things like an echo chamber where we only want to hear what we want to hear, so we only tune into the things we want to hear.  We’re all guilty of that.  Optimism bias.  Project managers are very aware of that; right?  “Oh, I’ll have this for you in three weeks.”  Ha ha ha.

BILL YATES:  Ha ha ha; right.

RICH MALTZMAN:  Right?  Anchoring biases, things like where a number is given to you.  Salespeople use this all the time.  A number is given to you.  You know, “Boy, the Red Sox scored 16 runs.”  Oh, okay, so this product is going to be for sale for 15,000, and you’re anchored at a higher number just because you heard the number.  We have some really interesting biases.

Now AI, because it’s feeding on, if you will, information and data and knowledge from humans, it absorbs some of those biases.  It’s going to behave that way.  In fact, that’s, you know, some of the initial criticism of AI was what it had been dealing with so far as input had the biases of humans, the humans who were feeding it that information.  So, it’s going to have some of the same.

Yerkes-Dodson Model

And what’s interesting is that recent studies have shown that AI actually exhibits stress and anxiety; right?  The system itself will actually behave this way, and I don’t want to get too academic here, but there’s something called the Yerkes-Dodson model.  So, the Yerkes-Dodson model, if you look at stress and performance – stress on the X axis and performance on the Y, the vertical axis, and everyone knows this – with very little stress, you’re yawning.  You’re not at your best performance because there’s not that much stress, and it’s like, you know, time to crank out another document.  And at the other end, when you’re like totally stressed out and worried, you’re also not at your best.  There’s kind of a middle ground with a small, medium amount of stress where you perform best.

And they’ve actually tested AI and found that performance is similar, follows a similar model, where AI does best when it’s challenged a little bit, but it freaks out when it’s given tons of stress, and it’s poor and doesn’t do such a great job when you ask it to handle something easy.  I think that’s amazing.  This is from a 2024 Live Science study involving Anthropic, one of the companies that, you know, puts AI products out there, so.

“Acting As…”

How do we stop this?  Well, you can’t stop it.  One of my best principles is to just use this “acting as.”  You tell AI, “Acting as an installation engineer, tell me about these risks.  Tell me about these problems.”  Get very specific.  You know, so, for example, if you’re going to create an itinerary for travel, say, “Acting as an experienced travel agent who has had lots of experience in planning visits to Africa, what would an itinerary be” given my, you know, key things I’d like to see, and also giving it details about, you know, yourself and what kinds of things you enjoy. 

But that changes its perspective.  And at least it will have the right biases.  It will have biases that are focused on your issue.  And again, it should be a conversation with you looking carefully at everything it gives back to you.

Also, and I love this one, ask AI to assess its own answers.  You have a question or conversation that comes up with, let’s say, an itinerary.  Then you go back and say, “A friend gave me this itinerary.  What do you think of it?”  And what’s amazing is it will often say, “This is not so great.”  I’m like, “You just gave it to me.”

BILL YATES:  And I called you “friend.”

RICH MALTZMAN:  And it will get, right, and it will get all indignant and say, oh, I’m sorry, I should have thought about this and this.  But a lot of people don’t think to do that.  Take the output you get from AI.  It’s a form of active listening.  And again, it goes back to this idea that everything you do with AI should be a conversation.

So, the other option is to send it, send that into a different AI, a competitive, a competitor’s AI.  But even itself, if you’re on ChatGPT or, you know, any of these systems, and you put its output back in, it sometimes rejects it, which I think is comical.  Like, “Who told you that?”  You did.  But again, that’s one way to minimize the biases, asking an “acting as” question and/or, I’d say, both.  Give it its output back and ask it to critique it.
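For readers who want to try the two bias-reduction patterns Rich describes, the “acting as” role prompt and the self-critique loop, here is a minimal Python sketch.  The ask() helper is a hypothetical stand-in for whatever chat-completion client you actually use, and the prompts are illustrative, not the book’s templates.

```python
# Minimal sketch of two prompting patterns: "acting as" and self-critique.
# ask() is a hypothetical placeholder, not a real library call; wire it to
# the chat API of your choice.

def ask(prompt: str) -> str:
    """Stand-in for a real chat-API call; returns a canned echo here."""
    return f"[model reply to: {prompt[:60]}...]"

# 1. "Acting as..." gives the model a specific perspective plus context.
itinerary = ask(
    "Acting as an experienced travel agent who has planned many trips to "
    "Africa, draft a 10-day itinerary. Context: two travelers, interested "
    "in wildlife and local culture, traveling in June on a moderate budget."
)

# 2. Self-critique: hand the output back as if it came from a third party.
critique = ask(
    "A friend gave me this itinerary. What are its weaknesses, and what "
    "would you change?\n\n" + itinerary
)
print(critique)
```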

BILL YATES:  One of the takeaways that I had from the book was just the great need for context.  You know, as you’re saying here, the “acting as,” that’s context.  In this role, what should I do?  I mean, you think about a project.  Hey, let’s assume I’m the sponsor, XYZ.

RICH MALTZMAN:  Yeah, exactly.  One of the things that we stress in the book and that I tell my students at Boston University is most AI systems are compelled to give you an answer; right?  I mean, for comedy’s sake, I’ll give you this real example.  One of my colleague professors entered in this question: “When was the last world championship,” in soccer or football, depending on where you are in the world, “when was that played on Mars?  The planet, not the candy bar.”  And it came right back and said, “The last world championship on Mars was October 22nd.”  It gave a date.  Yeah. 

He then came back, this professor came back and said, “We don’t think that’s possible because as far as we know, there’s no entities on Mars that play soccer.”  And it came back all apologetic.  So, this is an example.  I mean, this is real.  It’s one of many, many examples of a hallucination, which I think we’ll talk about later.  And this is why it has to be a back-and-forth.

About The Wind Turbines

WENDY GROUNDS:  In the book, you do give many examples, practical examples.  There’s one that is about the wind turbines and certain elements that AI cannot fully replicate.  Can you just talk a little bit about that and tell us that story?

RICH MALTZMAN:  Sure.  This actually comes from the very first part of the book because we wanted to set the stage for this with examples.  In fact, we start with a scenario, a novel-like scenario.  I think that’s important because the readers are humans.  This is not a technical book, although it does have technical details on prompt engineering.  We wanted to make this approachable and accessible. 

So, the wind turbine story comes from a company that was deploying a wind farm, and they used AI.  And some of the things they said was, “AI told us where the wind blows best, but it didn’t tell us what that meant to the people who live around it and will be served by the wind farm.”  It missed things like the noise, the view, the migration patterns of birds, the local culture, all context, just like you said.

These are blind spots for AI, if you don’t fill them in.  As we said before, it will answer without that context.  If you want a good answer, you give it context.  So, they could optimize with AI.  They could optimize for energy, but still get it very wrong for humans.  Data is not the neighborhood.  That’s why human insight must sit at the table and be active. 

The solution that this wind farm came up with using AI was using AI as a tool, not a decision-maker.  They ended up moving turbines, building trust.  They invited the community in.  This is the both/and mindset.  It’s not humans versus machines.  It’s humans designing with machines to get to decisions that aren’t just efficient, but also are ethical and sustainable.  And although that sounds like politics, it really isn’t.  It’s just natural for us to work with this agent to help us have a better result for people.

BILL YATES:  I think it’s such a great example.  I can just picture the engineers being giddy receiving this data on the optimized positioning of this wind farm; right?

RICH MALTZMAN:  Precisely.

BILL YATES:  You know?  And then going to the customer and saying, “Hey, we’ve got a plan.  Let’s execute it.  Just stamp this for me so we can start to do it.”  And then, okay, wait a minute.  My mother lives in that neighborhood.

RICH MALTZMAN:  That turbine is in our living room.  Yeah.

BILL YATES:  So, yeah, the wind blows perfectly there, but that’s my backyard.  Or there’s a school right there.

RICH MALTZMAN:  Yeah, exactly.

BILL YATES:  So yeah, it’s such a good perspective.

RICH MALTZMAN:  Right.

Challenges with The Both/And Approach

BILL YATES:  Talk further about some of the challenges that come with that both/and approach.

RICH MALTZMAN:  Sure.  If you take an either/or approach, which is kind of the opposite of both/and, it means you’re picking a side.  Some are going to lean hard into AI.  Like you mentioned, the engineers are going to be like, “Wow, this is giving us stuff.”  I mean, as a project manager, just a quick tangent, I have been impressed with the fact that, if you give it the right context and detail, you’ll see it generate a project plan that’s pretty good without much effort.  It’s like, this would have taken me days.  And, you know, here comes a plan.  It will be wrong.  I will just say that flat out.  It will be wrong in some areas.  It will look good, and that’s the danger.

So, if you’re familiar with GIGO, right, garbage in, garbage out, I think with AI you sometimes get, I can’t pronounce it, but an acronym that’s something like beautiful garbage in, beautiful garbage out, or ugly garbage in, beautiful garbage out, because what comes out looks so believable and, oh, it came from AI, so it must be right.  These are the folks who might lean hard into AI and ignore context, ignore community, ignore, in general, the human side.  Others, the other side, if you will, they reject AI completely, and they say, “This is a machine, and it’s not good, and it’s terrible.”  Well, they’re wrong too.  Both of them miss the real value here, the conversation.

The best decisions are going to come out of this when leaders and users of AI stay within the kind of the tension here, not rushing to try to close the gap, but using that tension to try to see, and this is odd for us to think about, more than one truth at a time.  Both things can be true; right?  In a polarized world, we think of one thing being wrong and one thing being right.  But there are times when there can be two things that are true at the same time. 

So, we can have all the AI talent and tools and capability with, you know, $5 billion server farms backing you up; but that means nothing if your teams are stuck in silos and are afraid to speak up, or it’s unclear who owns what.

Project managers know this.  This is the power skills side of the triangle.  And actually, that’s a big reason why we wrote the book, is we thought that the power skills side of PMI’s talent triangle wasn’t represented well in all the writings we saw on AI.  So finally, the both/and mindset that we’re talking about here is not about adding new software or new server farms.  It’s about building new habits as users that allow people and systems to think and act together.  That takes discipline, and some studying and understanding of how AI works.  And it includes good intention, as well.  And there’s guidance in the book with example prompts and example conversations about how you can practice this both/and philosophy.

Ethical Intelligence

WENDY GROUNDS:  Another part of the book that was really important is talking about ethics, looking at strengthening ethical intelligence in AI-human collaboration.  Can you expand a little bit on that, and the importance of still keeping human-centered judgment even as we use AI?

RICH MALTZMAN:  Yes.  So, when it comes to AI and ethics, a lot of that has to do with how it was trained and what kinds of biases it has.  As I mentioned, AI is going to absorb our human biases, the collective human biases of whatever it’s looking at.  One of the things that plays into this is how you have restricted or enabled it to use the entire, I’ll say, “wisdom” of the Internet.

One of the things that’s, I think, important, and I’m kind of just talking generally here, is to have AI fed with your own practices.  So, if you’re Patagonia or Nike or Nokia, and you have that set of values, make sure that it has that context; right?  And if it’s firewalled, if the system is limited in terms of how it’s accessing the rest of the world and transmitting to the rest of the world, you can put proprietary information in there – again, if you trust the firewall, that’s another whole issue – that will steer it to give answers that reflect your values, which hopefully are good values and not just the ones on your About Us page.

So, some of the things that fit into this would be make sure that you have success indicators.  What does success look like?  Have audits, ethics audits.  Maybe have a team that’s dedicated, just like you might have QA in your product or service outcomes.  Have frequent periodic ethics audits that are looking at what AI is producing as outputs.  Obviously, the individual who’s working in that situation should be looking at, you know, what comes out, like our turbine example, where a wind turbine is in Mrs. Johnson’s living room.  Obviously, you should be looking at that.

But in general, right, there might be someone, a team who’s empowered to audit outcomes from human AI interactions for ethics violations.  Remember that AI is moving fast, right, without much reflection, unless you’ve fed the information back in.  So, ethics is a way of seeing that data alone is not solving your problem.

Provide Context to AI

BILL YATES:  That’s good.  I was trying to think of, okay, what are some ways that I can let my AI tool know what are the core values of my team?  So that, from an ethics standpoint, there’s some foundation, some context there.  So, these are things that I have; right?  These are things that I can provide.  Our company Velociteach, we have core values.  I can say, hey, AI, as you respond to me in these series of questions that we’re going to do, keep these core values.  These drive our decisions, so I want them to drive or influence yours.

Also, here’s the team agreement that we have as a project team.  You know, we came up with this communication plan.  We came up with this team agreement as to when we’re going to be online, how we’re going to respond to each other, how quickly, those kinds of things.

 I can feed all that into that AI model and say, this is to provide you with context so that you know how we roll as a team, what motivates us, what drives us, so that you can understand our ethics and our decision-making criteria better.

RICH MALTZMAN:  Exactly.  If you are in a situation where hopefully your system is protected, is firewalled, I would say the more, the better, and also give it the reason you’re uploading it.  Don’t just throw 500 documents at it.  You upload, you know, our values, mission statements, examples of cases where things went wrong and they were remediated, right, actual transcripts of some examples where things went right or wrong, kind of best practices.

But when you upload it, you tell it, these are being uploaded to help you respond to questions where ethics come into play.  Some people think of it as because it’s generated so many great results, that you can just feed it information, and then it will know what to do with it.  Not necessarily.  You have to tell it why you’re uploading it.
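As a concrete illustration of that point, here is a hedged sketch of assembling context with an explicit purpose attached to each document.  The file names and framing text are hypothetical; the idea is simply that every upload states why it is there.

```python
# Sketch: attach a stated purpose to every context document, per Rich's
# advice. File names are hypothetical; adjust to your own library.

from pathlib import Path

context_docs = {
    "values.txt": "our core values; weigh ethical trade-offs against these",
    "team_agreement.txt": "our team norms; use when advising on communication",
    "postmortems.txt": "cases that went wrong and how they were remediated",
}

parts = [
    "You are assisting a project team. The documents below are provided "
    "specifically to help you answer questions where ethics and team norms "
    "come into play. Ground your answers in them."
]
for fname, purpose in context_docs.items():
    parts.append(f"--- {fname} (purpose: {purpose}) ---\n" + Path(fname).read_text())

system_prompt = "\n\n".join(parts)  # prepend this to every conversation
```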

So, at BU, I am using something we call a SQUID.  I don’t have to do the acronym, but it has to do with preparing students for a quiz, a summary quiz that we give, kind of like a small final exam.  And it has been fed with all the lectures, all the notes, slides, everything from the course.  And when I upload it, I’m saying, these will be used to help students prepare for the final exam.  And I give it the final exam.  But I tell it never, ever produce these exact questions to the students because I want them to be ready for those questions.

And just so you know, it’s like a teenager.  It doesn’t always do what you ask it to do.  So, I will test it.  And, you know, it’s very nice as you’re designing your GPT or chat – because this is with creating a GPT on ChatGPT, your own bot – but it often completely, you know, you can’t see the gesture here, but it gives you the thumb on the nose.  And there’s one of the exam questions.  And I will have to type in and reprimand it, say no, as I’m training it.  No, you’ve done this again.  You’ve given a question.

And I’ll have to do that a few times before I trust it to be exposed to students.  In fact, just tonight I will be demonstrating this and making this available to the students because the final exam is coming up.  But I know it’s not about ethics, but it’s a parallel situation.

BILL YATES:  No, it is, yeah.

RICH MALTZMAN:  You give it documents, examples, and it’s hungry.  It will say, give me more, give me more; right?  So, I gave it ten lectures, basically 100 slides each.  And it was saying, “Do you have any more?”  You know?  Which is a little spooky.

BILL YATES:  Yeah, it is, yeah.

RICH MALTZMAN:  The plant in “Little Shop of Horrors.”

BILL YATES:  Yeah.

RICH MALTZMAN:  You know that.

BILL YATES: “Feed me, feed me, feed me.”

RICH MALTZMAN:  Just one little drop of blood.  Anyway.  But the result is outstanding, and students love it because it will say, hey, you know, the conversation started “Generate three questions on burn up and burn down charts”; right?  And it just does a fantastic job.  And the questions are like the exam questions, but they’re not the same.  And if you ask it to help, you know, why is that the right answer, it will draw only from the lecture notes, not from the Internet because that would be dangerous to me as an instructor.

Otherwise students would be learning stuff that wasn’t conveyed in the course, and it could be wrong.  Not that anything on the Internet is wrong.  Sarcastic comment.  Anyway, so I’d say just use that example as a parallel for how you might deal with ethics.  All right?  Give documentation, standards, and so forth with context.  Here’s how to use this.  Not just here it is.

AI on The Team

BILL YATES: I have a final thought on this idea of ethics and the AI input that we’re receiving as project teams and leaders.  I remember the conversation that we had with Oliver Yarbrough on AI.  And he said, you know, think of AI as a team member.  Think of AI as a stakeholder.  Maybe it’s a team member with a pretty massive brain.  But still, I mean, think about bringing a new team member onto your existing project team.  You have to let them know how things are going to work.  They have to know, you know, what are team guidelines, what values are important to our organization, to our specific team. 

So that mindset is helpful to me.  To your point, it has to be an ongoing conversation, just as it is with another team member.

RICH MALTZMAN:  Exactly.

BILL YATES:  I can’t just, you know, throw out a cold assignment with no context and expect a decent response or decent result.  I have to have some back-and-forth.

RICH MALTZMAN:  Even though knowing you will get a response, but it may just be bleah.

BILL YATES:  Yeah.  It was in October.  The last soccer match on Mars was in October.

RICH MALTZMAN:  Or attaching cheese to pizza with glue.  You’ve heard that one?  So, a supermarket sells a frozen pizza, and people could chat with its bot to find out what was going on.  And one person complained that the cheese was sliding off their pizza.  How do I fix this?  And the supermarket bot replied, “Use a household glue to attach the cheese to the pizza base.”  Which works.

BILL YATES:  Sure, that works.

RICH MALTZMAN:  But…

BILL YATES:  I don’t want to eat it.

RICH MALTZMAN:  Again, context; right?

WENDY GROUNDS:  Right.

RICH MALTZMAN:  Context; right?  Humans eat food.

BILL YATES:  Yes.

RICH MALTZMAN:  Humans shouldn’t eat glue.

Ren Love’s Projects of the Past

REN LOVE: Ren Love here with a glimpse into Projects of the Past, where we take a look at historical projects through the modern lens.

I learned about this project when I was chatting with a friend who is a big fan of the sport of cricket – which is apparently the second most popular sport globally. So today, we’re going to talk about the building of the Narendra Modi Stadium – the largest cricket stadium in the world.

The project officially began in 2016 when the Gujarat Cricket Association (GCA) demolished the old Sardar Patel Stadium. During demolition, the GCA received bids from nine companies all vying for the chance to build the potentially record-breaking stadium, with Larsen & Toubro beating out the competition. They estimated that they could build the stadium in two years for roughly $81 million USD.

The stadium’s construction was meticulously planned and required a workforce of around 7,000 people at its peak.

One of the most striking features of the stadium is its sheer size. The stadium boasts a host of modern amenities, including luxurious corporate boxes, a food court, and an innovative roof design that provides shade to the seating areas, enhancing the comfort of spectators. It’s a 63-acre compound with a clubhouse, an Olympic-size swimming pool, a cricket academy, practice pitches, a squash court, a tennis court, a 3D theater, and even a dorm that can house 40 athletes.

Even the LED lighting is unique – instead of the traditional towers of lights, the LED lights around the edge of the roof reduce shadows on the pitch. The pitch (which is what we call the field in cricket) is equipped with sensors that detect when the grass needs to be watered, and those sensors connect to 67 fully automated underground pop-up sprinklers that keep the Bermuda grass happy and healthy.

By 2020, the stadium was complete enough to host a large-scale political event, and by 2021, it hosted its first cricket match. In February of 2021, it was officially named the Narendra Modi Stadium, in honor of India’s Prime Minister – who was the one who originally envisioned the state-of-the-art facility.

So, was this project a success? Sort of! The final cost came in at $96 million USD, about $15 million more than the estimate, and it took double the original time estimated to build. So, in the sense of schedule and budget, not the best. But for scope? It did end up becoming the largest cricket stadium in the world and a significant milestone in the evolution of sports infrastructure in India.

The project’s successful completion, despite the challenges posed by the pandemic, stands as a testament to the skill and dedication of the thousands of workers and engineers involved in its realization.

Thank you for joining me for Projects of the Past. I’m Ren Love, see ya next time.

Constructing AI Prompts

WENDY GROUNDS:  Rich, we want to talk about prompts, and how to get coherent, engaging responses.  Now, there’s a template that you have in the book for constructing AI prompts.  It’s very helpful.  Could you talk through that a little bit for our audience so they can get an idea of what we’re talking about?

RICH MALTZMAN:  Sure.  And we have some examples.  So, the three are Zero shot, Few shot, and Chain of Thought.  We didn’t intend for them to rhyme, but they do.  So, zero shot is basically – and I’ll give you the example we have in the book.  You would ask a question, for example, you’re dealing with virtual teams. 

So, you would just ask it an open-ended question.  “What are the effective strategies for dealing with virtual teams?”  And it will come up with answers; but it, again, won’t know much about the context.  Where are your teams?  Are they in different time zones?  Do they speak different languages?  Do they have accents?  Do some of them use acronyms and some don’t?  None of that’s there; right? 

So, you get an answer, and it will be generic and somewhat helpful, but not focused and cohesive.  So that’s zero shot.  And a lot of people just use AI that way.  Especially, I can tell you from experience, especially students.

Few shot, you would give it some examples.  You would ask more specifically what you want.  Now, this is a different example.  You know, write a professional-sounding email to a stakeholder that’s going to XYZ, that needs to tell them this or that.  Or you’d supplement that with some examples.  Here are some examples of similar professional emails.  I want it to have the same flavor and tone.  So that’s a step up, more intelligent.

And then chain of thought.  Again, this is going to be where you have that conversation; right?  So, for example, solve a complex problem such as creating a business case for a project, a rationale for a project.  So, you prompt it.  You know, what factors should we think about?  Look at that answer and say, hey, based on the factors you’ve given, calculate a rough estimate of things that we have to think about in terms of selling this product. 

And then you take that output and say, explain how each of those factors impacts our overall savings and our overall strategy.  Again, a conversation.  So, chain of thought is much more in line with the both/and approach.  It’s a back-and-forth with a cohesive answer.
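To make the three styles concrete, here is a short sketch.  The wording is illustrative, not the book’s exact template.

```python
# The three prompt styles, side by side. Wording is illustrative only.

# Zero shot: one open-ended question, no context or examples.
zero_shot = "What are effective strategies for managing virtual teams?"

# Few shot: state what you want and supply examples to imitate.
few_shot = (
    "Write a professional status email to a stakeholder explaining a "
    "two-week schedule slip. Match the tone of these past emails:\n"
    "Example 1: ...\nExample 2: ..."
)

# Chain of thought: a multi-turn conversation, each prompt building on
# the previous answer.
chain_of_thought = [
    "We're building a business case for project X. What factors should we "
    "consider?",
    "Based on the factors you listed, give a rough estimate of the costs "
    "and savings associated with each.",
    "Explain how each factor impacts our overall savings and strategy.",
]
```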

The Delphi AI Method

WENDY GROUNDS:  All right.  Can you explain what the Delphi AI method is and how it works?

RICH MALTZMAN:  Sure.  We got a little cute with the word “Delphi,” and kind of combined the two together.  So, I think most project managers will know Delphi.  But just in case, a very quick review.  I tell my students there are really two words, and you understand Delphi:  iterative, anonymous.  Delphi is a technique to try to achieve consensus.

So, let’s say the three of us are trying to come to a solution.  We would each anonymously propose a solution.  And then through iteration, through going through it again, without knowing that any particular one of us had that opinion – because you sometimes associate opinion with a person, either favorably or disfavorably; as humans it’s one of our biases – you don’t have that bias because you don’t know where it came from.  It’s like blind judging.

So that’s the Delphi method; right?  And it’s meant to take out bias and interpersonal alignments or disagreements.  If you put AI in it, and you combine this to call it Delphi or Delphi AI, it’s a natural evolution to include AI as one of the team members.  And you mentioned that that’s how you should think of AI.  So, you just have it as one of the anonymous participants. 

And we have an image in the book that’s kind of a beautiful example of this.  AI lacks this nuanced understanding of the project-specific factors.  In fact, we give an example of a Delphi AI result without a conversation.  We asked it to draw a picture of what this looks like, and it drew a picture of exactly what it doesn’t look like.

So, you have to have the – sorry, it’s a teaser, you have to have the book for this.  But there’s an image where we basically asked it to digest this and show it visually.  And what it showed was basically something that made it completely, what’s the opposite of anonymous?  Nonymous?  It had people sitting together in a room.  The whole idea of Delphi is you don’t know where an opinion is coming from.

So, this is done in rounds.  This is real.  I’ve participated in a Delphi survey.  It was about environmentalism.  And there can be a lot of politics and bias in sustainability and environmentalism.  And this was to try to rank the top 10 contributors to human impact on the planet, whether or not you believe in that.

And so, through several iterations – and some of us are in the same chats and so forth, so we know each other.  Oh, we say, oh, that’s Gary.  That’s Karen; you know?  Without knowing that, we got this top 10 list into a list that everyone agreed this is the right ranking, where my number seven may have been someone else’s number one and vice versa.  By the time we were done with three or four iterations, we had what we all agreed was a really well-established list.  Now the only difference with Delphi AI is that one of those participants would be a chatbot that was fed with good contextual information.
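For the curious, here is a toy sketch of how a Delphi AI round could be run in code, under the assumption that each participant, human or AI, is just a callable that returns an anonymous proposal.  The helper names are hypothetical.

```python
# Toy sketch of one Delphi AI round: iterative, anonymous, with an AI
# participant treated exactly like the humans. Names are hypothetical.
import random

def run_round(participants, question, prior=None):
    """Collect one proposal per participant, then shuffle for anonymity."""
    responses = [respond(question, prior or []) for respond in participants]
    random.shuffle(responses)  # no response is attributable to anyone
    return responses

# participants = [human_one, human_two, ai_chatbot]   # each a callable
# prior = None
# for _ in range(4):                                  # iterate toward consensus
#     prior = run_round(participants, "Rank our project's top 10 risks", prior)
```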

BILL YATES:  So, with Delphi, you don’t know which one is the ChatGPT or whatever AI tool you’re using.

RICH MALTZMAN:  Exactly.  And that’s why the picture in the book is so funny.

BILL YATES:  That’s good.  I like that teaser.  I’m just going to leave it right there.

RICH MALTZMAN:  Yeah.

Overcoming AI Hallucinations

WENDY GROUNDS:  Yeah.  Another thing we want to also talk about is detecting and overcoming AI hallucinations.  Can I share my story?

BILL YATES:  Yes.

WENDY GROUNDS:  So, this morning I was reading the news on my phone, and I discovered an amazing story.  I just needed – here we go.  So apparently there’s a weird phrase that’s plaguing scientific papers, and it’s been traced back to a glitch in AI training data. So earlier this year, scientists discovered a peculiar term appearing in published papers, and it’s vegetative electron microscopy.  And it sounds technical, but it’s actually nonsense.  And they say it’s become a digital fossil.  It’s an error preserved and reinforced in AI systems that’s becoming impossible to remove in knowledge repositories.  So basically, what they’re saying is that it could be a bad scan.

So, there were two papers from the 1950s that were published.  And when it was scanned and digitized, it erroneously combined vegetative from one column and electron from another, and it created this phantom term. 

Or they say it also turned up in some Iranian scientific papers.  In 2017 and 2019 they used the term in English captions and abstracts.  So, it appears that in Farsi the word “vegetative” and the word “scanning” differ by a single dot.  So it could be that dot was missed, and now we have vegetative electron microscopy.  And it’s in about 22 papers, according to Google Scholar.  So, hallucinations are real, and they can be affecting our scientific research.  So, what’s your advice on this?

RICH MALTZMAN:  This is awful.  My grandson was thinking of majoring in vegetative.  I have to tell him it’s not real.

WENDY GROUNDS:  It’s not real.

BILL YATES:  It’s not real.  VEM is not real.

WENDY GROUNDS:  He’s been fooled.

RICH MALTZMAN:  No, I’m not that old.  Not that old.  That’s a great example, and it proliferates.  It can proliferate, if there’s too much trust, and there’s no conversation.  And yes, in languages written in Arabic script, like Farsi, a dot makes a huge difference, and that’s an example of nuance.  So, yeah.  So, dealing with hallucinations, we’ve just heard how they can be prolific and potentially dangerous.  Always verify what’s coming out of AI with trusted sources, even other AI systems, but also subject matter experts, remembering always that AI feels compelled to give you an answer.

Now there are some new techniques that can be used – and this is real, unlike the example you gave with vegetative whatever that was – something called retrieval augmented generation, or RAG.  So, this is a promising technique.  It reduces hallucinations by grounding AI outputs in external, factual data sources.

This is really what we do with the SQUID that I talked about.  Instead of relying solely on its internal memory and especially the World Wide Web, RAG systems get real documents from, you know, your company knowledge database, from best practices – as project managers hopefully you have a best practice library – and generates answers based on that retrieved content and not only what it’s getting from, you know, its general inputs.
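Here is a minimal sketch of the RAG idea.  Real systems retrieve with vector embeddings; this toy version scores documents by word overlap so it runs with no dependencies, and ask() is again a hypothetical stand-in for a chat API.

```python
# Minimal RAG sketch: retrieve relevant documents, then answer only from
# them. Word-overlap retrieval is a toy stand-in for vector embeddings.

def ask(prompt: str) -> str:
    return f"[model reply to: {prompt[:60]}...]"  # placeholder chat-API call

def retrieve(query: str, library: dict[str, str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(
        library.values(),
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer_with_rag(query: str, library: dict[str, str]) -> str:
    context = "\n---\n".join(retrieve(query, library))
    return ask(
        "Answer using ONLY the context below. If the context does not "
        "cover the question, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# library = {"best_practices.md": "...", "lessons_learned.md": "..."}
# print(answer_with_rag("How do we baseline a schedule?", library))
```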

Asking AI for Advice

BILL YATES:  Rich, I usually don’t think of AI as, let’s say, an emotional intelligence czar or someone who should be coaching me on how to better interact or better communicate with my team.  Yet it makes sense; right?  I could see practical applications of, okay, feed in the audio, maybe I record the audio of my last team meeting.  And I feed that to AI and ask it for advice.  Hey, how did I do as a leader?  You know, how did I do as a communicator?  So, at the grassroots level that seems kind of counterintuitive to me because I’m asking a machine how well I communicated with other humans.

RICH MALTZMAN:  Yes.

BILL YATES:  Talk to me about that.  What are some practical applications that you’re seeing in this area?

RICH MALTZMAN:  Yeah, I’ll give a cultural reference here.  I don’t know if you recall I lived near The Hague for two years, in the Netherlands.  Dutch culture is very direct.  Tell it like it is, very fact-based.  Your living room is ugly.  This is a good example; right?  An American walks into a newly designed living room, and it’s got a disco ball and purple-and-yellow striped wallpaper.  And the American will typically say, “Interesting choices,” or “How unique.”  I mean, that’s what I would say; right?  You can tell they’re proud of it.  And you walk in; you would kind of soften your answer.  Not in the Netherlands.  “That is ugly.”

BILL YATES:  What were you thinking?

RICH MALTZMAN:  Yeah, “What were you thinking?  Where did you – why?  Why?”  And that’s good in general; right?  It’s a little striking; but it’s good because it’s honest; and, you know, they’re telling you what they think.  And they’re not beating around the bush.  That is a personality trait, or can be, of AI.  So, if you’re doing things to other humans, other team members that aren’t good, you can get coached by AI to say you know this would be a better way to phrase that. 

And indeed, in the book we have an example where a project manager who’s been facilitating meetings is just fed up with Jacques.  Jacques happens to be a person who is nay-saying your project, is telling you this won’t work, that won’t work.  They’re really bringing down the whole project.  And they’re nervous, as well.

So, in this scenario the project manager says to this person, “If you can’t take the heat, get out of the kitchen.”  And, you know, in a really abrupt way, in a very Dutch way, you could say.  And it has a negative effect on the whole team.  It’s less psychologically safe now to raise your hand and raise a problem.  So, the project manager has enough emotional intelligence to realize that that was not good, and asks, you know, what do I do now?  So that’s after the fact; right?

But there’s also, before the fact, you can have AI help you in your messaging.  I want to write a note to Jacques, and I want to let him know how I really feel and that, you know, that I do have some problems with the way he’s doing it, but I do want to invite him to participate.  You can ask AI; and, again, counterintuitively and surprisingly, it can prepare an email or coach you for a conversation with Jacques in a very good way.  However, again, don’t take whatever it gives you verbatim; right?

BILL YATES:  That’s not your script to just go in and read off, yeah.

RICH MALTZMAN:  Yeah.  Your script may be awful; right?  But you will have some aha moments, and this is one of the reasons we wrote the book.  We found that there were a lot of people writing about power skills.  In fact, PMI itself had some really good thought leadership on the power skills like empathy, communication and so forth.  No mention of AI.  And then PMI published a guide to prompt engineering.  Absolutely no mention of power skills.  So, this book is really the attempt to merge those two ideas.

And again, it’s exactly, like you said, it’s so counterintuitive to ask what in effect is a data center, how do I deal with human beings?  Help me with my interpersonal relationships.  Like, no way.  I’m not going to ask a computer about that.  But remember the computer’s been fed with billions of examples of interactions gone wrong and interactions gone right.  It has some advice.  And by the way, they’re coming from other humans; right?  So, with a conversation you can prepare yourself to write an email to your boss for a promotion, to apologize to someone for something that went wrong, to get your team to buy into a project rationale.  It can help significantly with that.

AI Training for Project Managers

WENDY GROUNDS:  From what you’re saying, project managers need AI training.  Can you talk a little bit about what’s your take on that?

RICH MALTZMAN:  Sure.  So, there’s training for us, and there’s training for the AI itself, as we’ve talked about.  So, I think the first thing is to understand what it can and cannot do.  And one of the things we talk about, some of you might be familiar with the DIKW pyramid – data, information, knowledge, wisdom.  And dealing with PMI’s new philosophy and new empowerment here with AI, they’ve actually taken in a company called Cognilytica.

And we actually have content in our book from Kathleen Walch and Ronald Schmelzer, who are the principals of that company.  They also have a good podcast called AI Today.  In any case, they talk about the fact that we over-rely sometimes on AI, and training ourselves on what it can and can’t do is really important.  So, prompt engineering is important.  Understanding how to have this conversation with AI is important. 

We can use AI for repetitive tasks.  It can do scheduling, again, as a first pass.  It can do scheduling and budgeting and so forth, and project reporting.  And as we mentioned before, it actually can also check in on team morale.  If we were to do this meeting with an AI agent, it would say “Rich talked too much,” or “The conversation diverted too much away from the main topic.”  That’s good.  That’s really helpful coaching for a meeting facilitator.

Bottom line, as far as training, besides just advertising our book, which I can’t help doing because we really feel pretty proud of it, we say that this field is moving very, very fast.  Training is not a one-time thing.  You have to be updated.  New features are coming along quickly.  The video capabilities.  I could talk for an hour about an avatar we created of me where for all the world it looked like me on video saying things, and I was not saying those things.  So, in effect a deep fake.

So continuously being aware of new capabilities – not just that it’s better, but literally new capabilities that it has – is essential, or you will be passed by.  If you have a corporate training session on AI in August of 2024, it’s out of date in September of 2024.  So there has to be a conscious updating of the staff and the stakeholders as to what capabilities it has.  You have to be future-ready is the bottom line.  And that’s hard now.

BILL YATES:  Yeah, it is.  Kudos to you and the other authors.  Your book is a great effort in that regard, really to remove some of the mystery of AI and help project leaders look at it and go, okay, what do we know today?  What are the capabilities?  What are the limitations?  What are the hallucinations? 

Excellent book.  And here’s one of the things I like about it, Rich.  Yeah, it shows theory.  But then it goes right down to practical, even to the point of giving templates and approaches to doing things.  So well done with that.  I appreciate the examples.  They helped me get my small brain around concepts.  Very well done.  And thank you for that contribution.

RICH MALTZMAN:  Thanks.  Yeah, we take some pride in that, and that’s because pretty much all of us that were authors have not spent our careers in academia.  We are in academia.  Many of us had parallel careers.  But just between Loredana and myself, at Nokia we have 60, 70 years of experience in telecom project management, PMO directorships and leading projects and managing hundreds of project managers, and even with Velociteach getting many, many people successfully certified for their PMP.  That’s bruises and scrapes and bumps.  That’s not theory.  And we’re taking advantage of all that first aid to provide aid for people with actual, you know, prompts and conversations and examples.

Rich’s Co-Authors

WENDY GROUNDS:  Rich, why don’t you just mention who the other authors are on your book.

RICH MALTZMAN:  Yes.  So, the four authors are Dr. Dave Silberman, myself, Dr. Loredana Abramo, and Dr. Vijay Kanabar, who is also the director of the project management program at Boston University.

Get in Touch

WENDY GROUNDS:  Can you tell our audience how to get in touch with you if they have any questions?

RICH MALTZMAN:  I am very active on LinkedIn.  That’s where I’m most responsive.  There’s also an About Me page, so about.me/richmaltzman.  And also, I do blog for PMI; so, you’ll find my two to four posts a month on the People, Planet, Profit, and Projects blog on ProjectManagement.com.

BILL YATES:  Outstanding.  Thank you so much for giving us your time today, Rich.  It’s so great to catch up with you, and really enjoyed reading this book.

RICH MALTZMAN:  Thank you.  We enjoyed writing it.

BILL YATES:  It’s a helpful, helpful book.

RICH MALTZMAN:  Excellent.  Thank you.

Closing

WENDY GROUNDS:  That’s it for us here on Manage This.  Thank you for joining us.  You can visit us at Velociteach.com, where you can subscribe to this podcast and see a complete transcript of the show.

You’ve also earned your free PDUs by listening to this podcast.  To claim them, go to Velociteach.com.  Choose Manage This Podcast from the top of the page.  Click the button that says Claim PDUs, and click through the steps.

Until next time, stay curious, stay inspired, and keep tuning in to Manage This.


PDUs:

0.5 Power Skills
0.25 Business Acumen
