The Future of AI in Business: Mastering Effective Interaction and Resource Planning with Conor Grennan
In this episode of Future Finance Show, hosts Paul Barnhurst and Glenn Hopper dive deep into the revolutionary impact of generative AI on industries, with a particular focus on its implications in finance. They explore how AI transforms workflows, enhances productivity, and shifts the focus from tasks to higher-level thinking. The conversation breaks down common misconceptions about AI, such as the overhyped role of “prompt engineering,” and emphasizes how businesses can integrate AI while maintaining data privacy and security.
Conor Grennan, the guest of this episode, is the Chief AI Architect at NYU Stern School of Business and the founder of AI Mindset, a company helping leaders adopt AI effectively. Beyond AI, Conor is a New York Times best-selling author, recognized globally for his humanitarian work. His expertise lies not only in AI implementation but also in helping non-technical individuals understand and utilize AI's potential.
In this episode, you will discover:
The reality behind generative AI myths and why prompt engineering is overrated.
How AI transforms jobs by automating tasks, not replacing roles.
Ways finance professionals can balance AI efficiency with data privacy concerns.
Why adopting an AI mindset is crucial for leveraging its full potential.
How embracing AI in your work can give you a competitive edge in your career.
For finance professionals interested in harnessing the power of generative AI while balancing efficiency and security, this episode offers practical insights and frameworks. Conor Grennan’s unique blend of technical knowledge and relatable, human-centered teaching makes this a must-listen for anyone looking to stay ahead in the AI revolution.
Follow Conor:
LinkedIn: https://www.linkedin.com/in/conorgrennan/
Website: conorgrennan.com
Join hosts Glenn and Paul as they unravel the complexities of AI in finance:
Follow Glenn:
LinkedIn: https://www.linkedin.com/in/gbhopperiii
Follow Paul:
LinkedIn: https://www.linkedin.com/in/thefpandaguy
Follow QFlow.AI:
Website - https://qflow.ai/future-finance
Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.
Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.
In Today’s Episode:
[01:59] - Concerns about data privacy in AI
[08:32] - Introduction of Conor Grennan
[10:10] - Conor's background and AI journey
[12:40] - The role of a chief AI architect
[16:42] - The truth about prompt engineering
[20:54] - The AI mindset and learning curve
[27:55] - AI’s impact on jobs
[35:23] - Conor's AI course and framework
[37:29] - Conor’s book: Little Princes
[42:47] - Conclusion and final thoughts
Full Show Transcript
[00:01:29] Host1: Paul Barnhurst: Future Finance is brought to you by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance seamlessly, speed up decision-making, and lock in accountability with QFlow.ai.
[00:01:59] Host2: Glenn Hopper: So one thing I hear over and over from finance folks who want to use generative AI, but haven't yet, is that they haven't gotten comfortable with data privacy and security with these cloud based tools. And, you know, it makes sense because when you're dealing with sensitive financial information, the idea of uploading that data into the cloud can set off all kinds of alarm bells, no matter how secure the platform claims to be. But here's the problem: the most powerful AI models live in the cloud, and that's where you find the models with billions of parameters trained on huge amounts of data. That's where ChatGPT, Gemini, Claude and all those models are. And, you know, I've talked a lot before about how these providers offer premium accounts where your data isn't used for training the models. But for some businesses, especially those with strict data policies, even that's not enough. They want to keep their data on their own servers, period. So I've been thinking a lot and playing around a bit with Meta's Llama. Llama offers a couple of smaller AI models that businesses can run entirely on premise, no cloud involved. It's a pretty unique approach in the AI world, and these models are getting better and better. So when we talk about Llama, we're talking about a kind of a family of models.
[00:03:11] Host2: Glenn Hopper: The three most recent ones are Llama 3.1 8B, Llama 3.1 70B, and then the large model, 405B, and the numbers after the version represent the number of parameters in each model. And that's essentially the level of complexity and sophistication of the model. So Llama 8B and 70B are the small, locally runnable models that I was talking about. And they're designed for everyday business tasks, optimized for speed and for small data footprints. But there's no getting around the fact, you know, they're just not going to be as powerful as these massive models. So Llama 405B, on the other hand, is a monster. It needs serious hardware and runs in the cloud and only in the Meta environment, but it goes toe to toe with the industry leaders in terms of raw capability. So it's a spectrum. The smaller models give you that data security and low latency, but you trade off some of that high end performance. And it's really all about finding that right balance for your business needs. So, you know, these smaller local models do have some limitations. They can't match the raw power of big cloud based models on complex tasks like, you know, generating detailed financial forecasts from massive data sets. They may not work as well with the smaller models, but for a lot of everyday financial work, think document analysis, basic number crunching.
[00:04:31] Host2: Glenn Hopper: These local models can get the job done, and they do it without any of your data ever leaving your control. So it's a trade off. On one side, you've got the heavyweight cloud models where, you know, you've got to come to terms with what your data policy is going to be and what you choose to put or not put in those models. And then you've got the kind of, well, the rock solid data privacy of running AI in-house. You just don't have the high end power. And in finance, and for a lot of businesses, that privacy is going to be a top priority. So I think the good news right now is that these smaller local models are getting better all the time. Whether it's the Llama models or the other LLM providers out there creating smaller models, like GPT-4o mini, which still runs in the cloud, we're seeing better and better performance from these smaller models. And you know, these are the worst models we're going to get. They're going to keep getting more powerful. And as they do, people will be able to keep that local control and have access to better models.
[00:05:31] Host2: Glenn Hopper: So you know, if you're in finance and you've been looking at AI but worrying about the data side of things, keep an eye on what's happening with these smaller, locally run models. They might just end up being the solution you've been looking for and a way to tap into the power of AI without compromising on data privacy. Of course, it's all about finding the right tool for your specific needs, but with options like Llama in play, businesses that put data privacy first have more choices than ever. And in the world of AI and finance, that's a pretty big deal.
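A minimal sketch of the on-premise setup Glenn describes, assuming Ollama is installed locally, the Llama 3.1 8B model has been pulled, and the ollama Python client is available; the prompt, note contents, and model tag are illustrative only, not a recommendation of a specific stack.

```python
# Sketch: query a locally hosted Llama 3.1 8B model so sensitive financial
# text never leaves the machine. Assumes `ollama pull llama3.1:8b` has been
# run and the Python client is installed (`pip install ollama`).
import ollama

confidential_note = """
Q3 gross margin fell from 41% to 37%, driven by a one-time inventory
write-down and higher freight costs on the APAC lane.
"""

response = ollama.chat(
    model="llama3.1:8b",  # small, locally runnable model
    messages=[
        {"role": "system", "content": "You are a finance analyst. Be concise."},
        {"role": "user", "content": f"Summarize the key drivers in three bullets:\n{confidential_note}"},
    ],
)

# Both the request and the data stay on your own hardware.
print(response["message"]["content"])
```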
[00:06:00] Host1: Paul Barnhurst: In this week's episode of You Know Your Job Is Safe For Now, I'm not going to talk about the silly or ridiculous things that AI has said, how it hallucinates, how many of the answers can be less than ideal and make us feel very safe in our roles. What I want to talk about is what we can do to make sure we're safe in our roles. Because yes, AI makes mistakes. Yes, AI is not there yet. We haven't hit AGI, artificial general intelligence. OpenAI, the maker of ChatGPT, has laid out five different levels, and right now we're on level one, conversational AI. We have a long ways to go to get there. In the meantime, what can we do as finance professionals to be prepared? First and foremost, find out how you can implement technology and AI into your workflows, whether that be using it to help write letters. An example I know of: somebody early on used AI to help write a collection letter, and they collected over $100,000 from a customer. I know Christian Martinez has shown many ways you can use AI and Python to enhance your analysis. I've seen people use AI with VBA. I've used it in my work. Oh, I need a quick script to do this or do that.
[00:07:19] Host1: Paul Barnhurst: I ask AI to write that VBA, have it help you write that formula, analyze data. Another great one from Jim O'Neill in a prior episode is he talked about how it can be the contrarian. Ask it for all the risks and all the reasons you shouldn't do something, all the counter reasons. So the more specific you can get with AI, the better the answers you can get from it. But the reality is you need to learn how to start using it so you can be more effective. Many people do not expect that AI will take away a lot of jobs in finance, but one thing it could definitely do is companies will start looking for a productivity tax, almost an efficiency expectation: hey, we're not going to give you this many employees because we expect you to be 10 or 15% more efficient. You may delay hiring. You may have to justify every headcount and explain why AI can't do it. The better you are at using it, the more efficient you are, the easier it is to maintain your role because you're being very productive.
[00:08:32] Host2: Glenn Hopper: Welcome to Future Finance. This week Paul and I welcome Conor Grennan. Conor is the Chief AI Architect at NYU Stern School of Business and the founder and CEO of AI Mindset, where he's helping leaders across industries harness the power of AI. Conor's also a New York Times and number one international best-selling author. His influence extends far beyond the classroom. He's been recognized by The Huffington Post as a game changer and honored with the Unsung Hero of Compassion Award by the Dalai Lama. Conor holds an MBA from NYU Stern, a degree from the University of Virginia, and lives in Connecticut with his wife and two children. Conor, welcome to the show.
[00:09:10] Guest: Conor Grennan: Thanks for having me. It's good to see you guys again.
[00:09:12] Host2: Glenn Hopper: Yeah, I feel like we would need, like, two hours to talk to you, to talk about everything you've done. But we're going to get laser focused here and talk about.
[00:09:18] Guest: Conor Grennan: Right. Yeah. Glenn, I could hear you talk for hours. I've watched all of your courses. Highly recommend. They're awesome.
[00:09:25] Host2: Glenn Hopper: Yeah. And we're going to get into your course. Yeah.
[00:09:28] Host1: Paul Barnhurst: So what I'm going to do, so we get this done in time, is Glenn can't ask any questions. I'll cover this podcast.
[00:09:34] Host2: Glenn Hopper: Yeah, I do tend to ramble sometimes. Paul and I have a thing about that. I think I make our podcasts twice as long as they need to be.
[00:09:39] Guest: Conor Grennan: No worries.
[00:09:40] Host1: Paul Barnhurst: But he makes some good points. He's great.
[00:09:43] Host2: Glenn Hopper: So I guess let's just dive right in. The first time that I saw you speak, I think we were doing an event together, Conor, for Chief Executive Group maybe. And your approach to AI, I quote you all the time, and what I love is you always lead with the fact that you're not a technologist. You know, you're here at the forefront of generative AI and training people how to use it, but you make it very clear that you're not a technologist. So, I mean, I guess, tell me a little bit about your background and what happened that caused you to, I mean, you've gone all in like few people on generative AI. So what happened in that transition that you saw this and decided just to go all in on it?
[00:10:22] Guest: Conor Grennan: Yeah, it was a strange transition. So my background is in, you know, academia. I'm a writer by background, things like that. I had spent the last ten years as dean of students, dean of MBA students, for NYU Stern. So you can imagine kind of like that profile, right? And what happened was essentially ChatGPT came out and I saw it. My wife is in AI at McKinsey, just sort of doing responsible AI, and she saw it. She's like, you got to try this thing out. I tried it out and immediately saw, wow, this is going to change everything. So I was asking my boss, the dean, hey, what do we have going on with this? And I started attending lectures by, you know, AI people, machine learning people, things like that, which we have obviously at Stern. And I kept realizing over and over again, as I started to learn more about machine learning and everything, that I didn't need it, that there was nothing there to learn. It didn't help me with ChatGPT and other large language models. And that was really, really curious to me. So I started to think, well, I think we just have to sort of learn it. So I started to teach it at NYU, and then another strange kind of thing happened, which is I started to do this with a lot of companies and help them out with this.
[00:11:30] Guest: Conor Grennan: But the other strange thing that happened was I had a framework and a model that's like, hey, easy use cases to advanced use cases, etc., and I started seeing something really strange, which was the fact that this wasn't transforming how people worked. They would take the use cases I gave them in healthcare or whatever, and keep on using those healthcare use cases even after I left. I'm like, but I don't know anything about healthcare. And I realized, okay, this is actually something really, really different from any kind of typical digital transformation. And so for me, I think it's actually really helpful that I don't have a tech background, just accidentally, because, you know, I'm relating to all these folks who also don't understand it. And I think sometimes if you have too much tech, you don't know what the hard part is. But I'm like, no, the hard part is this. So it's almost like learning basketball from an eighth grade coach instead of Michael Jordan. I'm like, you know what you really need to do? So yeah, it's a strange background, but it's also a strange technology. And that's how I got started.
[00:12:22] Host1: Paul Barnhurst: I love it. That's a great way to get started.
[00:12:24] Host2: Glenn Hopper: Yeah, I know we were talking about the switch that Conor made from dean of students to Chief AI architect, and you kind of addressed that there. So, Paul, I don't want to I didn't want to steal your thunder on that question. Yeah.
[00:12:36] Host1: Paul Barnhurst: No, I'd be curious. You addressed that a little bit, but what's it like being the Chief AI Architect at Stern? What does that involve? Obviously you went from dean of students, you really got involved in this AI, to being the Chief AI Architect. So what exactly does that mean for the students? What does that role involve?
[00:12:53] Guest: Conor Grennan: Yeah, it's a good question, right. Because we're really in uncharted territory here to a certain extent. You know, usually in an academic setting, there is a process by which you learn things and there's a learning curve to it. Et cetera. Et cetera. The strange thing about generative AI is that it's not really a learning curve kind of thing. You know, it's not like, you know, if you were learning Excel or something like that, which I know you guys are very familiar with. You know, you take a long time to learn it, but then you just go, then your life is easier. Then you're, you know, doing financial modeling and everything else. Same with if you were learning French or something. It takes a long time to learn, but then, you know, then you get to speak it, then you get the reward. This is much more akin to something like, you know, exercise or something like that where it's nothing. You have to learn. You just have to change your behavior. And so it's not really like, okay, here's the curriculum by which everybody has to do this. It's much more like, well, who do we want to be as people? How do we change our culture? Like, you know, it's that big change management question which is much harder.
[00:13:48] Guest: Conor Grennan: So being in uncharted territory, I'm sort of trying to carve this out with obviously, you know, help from people much smarter than me. But my mandate really, at NYU, is to essentially level up the organization. And by that, I mean we want our students, our MBA students, to go out into the workforce really being fluent in generative AI. We want our faculty to understand how to integrate this into their curriculum, because as I don't have to tell you guys, you know, the proxies for grading have all sort of been set on fire, right? I mean, like around school settings. So they have to change how they work. Now, it's hard for faculty who have been very successful for two decades doing this the way they do it, how all of a sudden do they change how they teach? Not exactly overnight, but pretty close. And then also leveling up the administration, so Stern is functioning as a really well-oiled machine. That's a little more similar to the work I do with companies because it's, you know, running a business. The faculty and the students are very, very different. They're all very different things. But I have to approach them in all different ways, you know?
[00:14:46] Host1: Paul Barnhurst: Yeah, it is fascinating, especially the part where you said, you know, teachers and just the change. Right, because students are using it, and there's the question of what's the right way to use it versus when you want them not to use it, so they learn from the exercise, and just managing and balancing all that. I imagine there's a lot of different opinions on the subject that you're trying to manage.
[00:15:08] Guest: Conor Grennan: That's the hard part. I think I've been trying to come up with an analogy for this for the longest time, and I think I finally did, which was: when we say some people know it and some people don't know it, there's not a metric. There's no way of really knowing whether people know it or not. So the analogy I sometimes use, to go back to Excel, is if I'm talking to 500 people at a keynote or something like that, and I say, hey, who here uses Excel or knows how to use Excel, most of the hands would go up. You know, because people are using it for grocery lists or personal budgeting. But then if I said, okay, so who in here has used it to create sophisticated financial models in order to, you know, price out a $1 billion company, that kind of investment banker type use case, like five hands would go up or something like that. That's like generative AI: everybody has tried it, but who's using it 50 times a day, etc.? And there's not really a good way of parsing that out, so that makes it tricky. So even with students who feel like, oh, I know how to use it, I'm like, okay, and not to test people, but how many times do you use it? Are you paying for a model? Have you heard of or used Claude? You know, things like that. Those are some of the ways that we can start to sense how people are actually using it.
[00:16:14] Host1: Paul Barnhurst: 100% agree there's all these different levels. What does it even mean to really use it, like you said? Because are you using it to check an email.
[00:16:24] Host2: Glenn Hopper: Which actually, you just threw me a softball, Paul, for my next question here. And I've been... that was deliberate.
[00:16:30] Host1: Paul Barnhurst: I know what's coming. I know this is your ax to grind.
[00:16:34] Host2: Glenn Hopper: I'm going to get the video from this, and every time someone mentions this to me, I'm just going to play the video of Conor here. So one thing that drives me crazy. So, Conor, you and I both teach a lot of AI courses, and every course I teach, someone will raise their hand or their digital hand and say, are you going to send us the prompts? I think we share this. It drives me insane whenever I hear someone talk about being a really good prompt engineer. And that's not to belittle what, you know, if you're... there are areas.
[00:17:02] Guest: Conor Grennan: Technical prompt Engineering.
[00:17:04] Host2: Glenn Hopper: Yes, the way most people mean it is like saying, I'm really good at googling, you know. So I always refer back to you on this. When I say the words prompt engineer to you, where does that take you?
[00:17:16] Guest: Conor Grennan: Yeah, it sends me to sort of like barfing in a bucket, right? I mean, because it's so frustrating. Like, I'm with you, right? Because I think this is where people started off, and I have a theory around this that I want to run by you, too. This is what I talk to people about now as well. The reason that we want prompts and people want to learn prompting and all that kind of stuff is that our brain does certain things really well. It does pattern prediction, automation, things like that. And your brain wants use cases, it wants clear patterns. It wants to know. It is not comfortable with something where you just say, hey, use this in any way that you want to use it. People are like, okay, but what are the prompts? That's why, you know, ChatGPT started having the conversation starters and things like that, right? There's no reason for those conversation starters. They're so dumb, right? Like, hey, plan a trip to Peru. Nobody's going to do that. Or, hey, help me do a recipe for, you know, chicken marsala. Nobody needs to do that. But it's for the brain. It's so that people can see, like, see, you can just type in this or you can just type in this. Prompt engineering, to me, is borderline myth. I know that more clear prompts will help, but everything that you do in a prompt is just how you would instruct a new colleague. Give them context. Give them examples. Tell them what the output should be. Give them sort of a sense of, hey, this is what's good or not.
[00:18:26] Guest: Conor Grennan: But the more context you give, the better. It's not about framing something, but people think that it is, understandably, because that's the history of technology. You need to sort of say, well, tell me how to use this. You can't just say, well, talk to it like a human. That's not helpful. And so the thing about prompting for me is that people want this list of prompt libraries, which even back in the day, Glenn, you and I have known each other for a while now, when people would send, oh, well, here's a thousand prompts, you and I would barf over that, right? Because think for a second about a YouTube video that shows you how to clean your pool filter or something like that, right? And that has 9,000 views. All 9,000 people needed to clean their pool filter in that moment. Nobody's just watching that thing just to watch it. So when you send out prompt libraries, who needs that exact prompt at that moment? It's like, oh, here's my favorite prompt for a sales call. Like, who needs that at that moment? It drives me bonkers. So the most important thing is: talk to it like a human. Prompting is not a thing. However, that's hard because your brain is seeing this thing and your brain has a hard time, in the same way that trying to talk to a baby like a college professor is hard, right? Your brain has a hard time with it. That's the problem. It's a behavioral thing. It's not give me more prompts.
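To make the briefing idea concrete, here is an illustrative sketch, not a prompt-library entry, of the difference between a bare request and one that briefs the model the way Conor describes briefing a new colleague: context, the task, the expected output, and an example. The company details, figures, and wording are invented.

```python
# Illustrative only: the same request, bare versus briefed like a new colleague.
# The numbers and company details below are made up for the example.

bare_prompt = "Write a summary of our Q3 results."

briefed_prompt = """
Context: You are helping the FP&A team at a 200-person B2B SaaS company.
Q3 revenue was $12.4M (up 9% quarter over quarter); gross margin slipped
to 37% on a one-time inventory write-down; net retention held at 112%.

Task: Draft a five-sentence summary for the board pre-read.

Output: plain language, no jargon, lead with the headline number,
and end with one sentence on what we're watching in Q4.

Example of the style we like:
"Revenue grew 9% to $12.4M, ahead of plan..."
"""

# Either string can be pasted into any chat model; the point is the briefing,
# not the tool. Printing them side by side makes the difference obvious.
print("Bare prompt:\n", bare_prompt)
print("\nBriefed prompt:\n", briefed_prompt)
```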
[00:19:33] Host2: Glenn Hopper: So hopefully I didn't steal this from you. And if I did, I'm going to have to.
[00:19:36] Guest: Conor Grennan: Steal away.
[00:19:36] Host2: Glenn Hopper: Crediting you for it. So what I always say when I'm doing these, and people will ask and I'll send them, you know, when I do a training video, it's like, well, here's the prompts I use, but you're not always going to get the same exact response as I did, right? If you're just treating this like a binary, like copy paste, copy paste my prompts, you're not going to be going down the same road that I am. So the approach that I tell them is, it's just like you said, treat generative AI, and right now, I mean, after Strawberry and, you know, when GPT-5 or Opus or whatever they're calling their next model, Orion, I guess, is whatever OpenAI is calling the next model after that, comes out, this may not be true, but for right now, I say treat generative AI, the chatbots you're using, like a very, very bright but very, very green intern. So you have to give them their context. You have to set them up. But don't just throw, you know. So when you're prompt engineering, what you're doing is just the way you would, like you said, with a new colleague: interact with it this way. And if you get frustrated, because there's things I ask it to do all the time and I'm kind of pushing the limits of it, as I'm sure you are, and if you're patient, you back up and you come at it a different way, you can usually get there. There's very few things that I've just completely stumped it on. But if I were just trying to use canned, you know, prompt libraries, then I wouldn't get there.
[00:20:52] Guest: Conor Grennan: No, 100%. Well, I was going to.
[00:20:54] Host2: Glenn Hopper: Say your whole AI mindset, that's the perfect name for it, because it's a shift in mindset. This is software, yes, but it's not binary. You're not flipping switches and turning knobs. You're interacting with it like a person. So AI Mindset is just the perfect explanation for what you do.
[00:21:08] Guest: Conor Grennan: Yeah. Thanks. And the way you describe it is exactly right. You did not steal that from me by the way. I've heard that before around interns and everything else. Yeah. So I have this course, and one of the things in the course is this digital course. And one of the things I was doing was coming up with like a lot of different prompting frameworks that I've just used kind of instinctively, but putting names on them. And one of them I called Bezos in the corner, which is like, imagine if you had just Jeff Bezos sitting in the corner and you could at any point turn to him and be like, hey, I'm trying to do this marketing plan or something like that. The problem is that, like, you know, first of all, you have to sort of get into the headspace where you're actually even though it looks like Google, essentially, it looks like a search bar. You have to pretend that you're talking to something like Jeff Bezos. But more importantly, you know, if you really did have Jeff Bezos there, you wouldn't say, hey, give me five ideas for marketing, and then Jeff Bezos gives you five ideas and you're like, okay, get out of here, Jeff Bezos. No, like you'd ask him follow up. Do you know what I mean? Like and that's what's so different about this, is that it's a conversation going back and forth with this machine and our Google brains, I like to call it like are more based on command response. Like you sort of give something and then you get the response and then you walk away. That's what we've been doing since the history of the internet. And so to get in that different frame of mind is that's a behavioral change. It's really hard. But to your point, like, it's not like, hey, here's the right prompt to use. You just have to get into this mindset of being iterative with it, and then prompts almost go out the window.
[00:22:30] Host1: Paul Barnhurst: Ever feel like your go-to-market teams and finance speak different languages? This misalignment is a breeding ground for failure, impairing the predictive power of forecasts and delaying decisions that drive efficient growth. It's not for lack of trying, but getting all the data in one place doesn't mean you've gotten everyone on the same page. Meet qflow.ai, the strategic finance platform purpose built to solve the toughest part of planning and analysis: B2B revenue. Qflow quickly integrates key data from your go-to-market stack and accounting platform, then handles all the data prep and normalization. Under the hood, it automatically assembles your go-to-market stats, makes segmented scenario planning a breeze, and closes the planning loop. Create airtight alignment, improve decision latency, and ensure accountability across the teams.
[00:23:37] Host1: Paul Barnhurst: I love what you said there, the iterative part. And I'll add two things. It's kind of funny, you say talk to it like a human. I remember I was typing a prompt and I'll say please sometimes, just because it's habit, like I'm talking to somebody, and the students were like, you say please to it? Like, why are you? And I'm like, it's just habit. I don't even think about it. Please do this. And they were just like, you're stupid. And it was kind of funny. And the second is, you know, kids are able to make that mind shift so much easier. My daughter will sit and go back and forth with it for hours if I'll let her, creating stories, and she'll just be laughing and using it to come up with funny scenarios around her favorite characters. Or when it gets it wrong, she just says, no, this is what I meant. And you can see the conversation in it. Versus the, well, I gave you the prompt and all the instructions, why isn't it perfect? So it's like, I have to hack through this. And I'm like, yes, sometimes you have to hack through it. And they were kind of like, well, then it's not useful. But no, it's incredibly useful. You just have to be willing to put in the time.
[00:24:38] Guest: Conor Grennan: Yeah. I love how you said that. Two things on those two things. So the very first framework I ever came up with was last March. It was the first YouTube video I ever put out on this stuff. And I was like, hey, I have a framework, because, you know, you guys work in frameworks too, so you can't make fun of me. But in business school, it's all frameworks. So I was like, I need to come up with a framework. So I called it Hi, Thanks, Great. And I said you should say hi to it, you should say thanks to it. And then great is a little different, because that's about giving feedback, because you have to give it feedback. But the first two, right, to your point, it's just like, why do you say this? I once saw this one thing that came out and said, oh, don't say please because it won't give you a good enough response. No, I'd rather keep my humanity and get a marginally worse response. So I love it when people say hi and thanks and please. And why do I love it? From a purely strategic standpoint, it gets you better results, but not because it makes the tool give you better results. It gets you better results because it puts your brain in the right framework, which is: this is a conversation. This is not, I'm going to talk at this thing. There's a clip from the movie Her, which you probably have seen and remember. Go back and watch this movie.
[00:25:43] Guest: Conor Grennan: It is unbelievable. Even now. It tells you everything you need to know about where we are right now. But the cool thing is that 13 second clip, I play it sometimes for people, you know, sort of workshops or companies and things like that, which is he's sitting there, he's already fallen in love with this AI, right? So he's having a conversation. He's literally falling in love. So there's no, you know, oh, I'm not what is this thing. Right. And he's playing a video game. So his brain gets distracted and she's like, oh my gosh. Hey you know Theo guess what? An email just came in and he's like, oh, read email. And she's like, okay, I will read like joking around and what's happening. And he jokes, he's like, oh my gosh, I'm so sorry. But his brain was distracted. And so, you know, muscle memory pulls him back into this, you know, read email thing. But that's how strong muscle memory and neural pathways are, right? And she's like, why are you talking to me like that? I'm not that. So that's number one on why to say please really quick on the kids thing. My theory on that is that kids are very good at being like, I wonder if this can do this, which is usually terrible if you have kids because it usually means, hey, I wonder if I can ride my dirt bike into the pool, stuff like that. It's incredible, right? Because what it does is it pushes them forward. And that's why I think kids are so good at it.
[00:26:46] Host1: Paul Barnhurst: Yeah, no. When you say driving to the pool, I've read a few where I'm like, we might need to have a conversation with my daughter, you know, so that you're like, that's something we should talk about as adults. So come to us and let's have that conversation. But most of the time it's just it's fun to read. So I totally get what you're saying.
[00:27:02] Host2: Glenn Hopper: And I'm going to go I'm throwing a PSA out here. I always say please and thank you. I'm always very polite because when the robots take over, I want them to remember I was one of the nice guys.
[00:27:13] Guest: Conor Grennan: They'll have that tracked. Don't kill him. Don't send the robot dogs over to Glenn's house. He's cool.
[00:27:18] Host1: Paul Barnhurst: So you're safe when Terminator comes.
[00:27:21] Host2: Glenn Hopper: So, you know, we talked about kids using AI, and they are, you know, they learn languages faster. They can just approach it in a completely different way. In dealing with finance and accounting people, trying to teach them generative AI, it's amazing how much pushback you get. And I'm wondering what your experience has been like, and what I most want to hear about is, because sometimes you can get frustrated with it where people are just, oh, that'll never work, that can never take my job, and I'm saying it's not going to take your job, but look how it can help you. You're really teaching people on this mind shift. So I'm wondering if you can give me any experiences. Tell me about when you're talking to people, whether they're taking one of your courses or at one of your talks, where maybe they're pessimistic or skeptical at the beginning, and then you kind of see this aha moment. What's that like when you see the light go off?
[00:28:12] Guest: Conor Grennan: I think the only way to get there is to demo it. No matter how short a time I have in like a keynote, I've done things where and Glenn and I have spoken at some of the same things before, I think. And it's, you know, even if you only have like, like 25 minutes or something, right, which is a fairly short amount of time. Even then, I always do a demo because if it's otherwise, you're like explaining why a card trick is so cool instead of showing them the actual card trick. And so I think that I have to show them how fast this is, how much it iterates, how much it maps onto what you need. Not you have to sort of like fit yourself into this shape, but it's like talk. What do you need? So I have to show them that you have to. That's number one. Number two from a higher, you know, kind of 30,000 foot view. You know, I always sort of talk about it in terms of like I don't think this thing takes jobs. I think it takes tasks. And jobs are not big monolithic things. You know what I mean? Like, it can't take a job because you don't know what's in that job like, but it does take tasks away from you or speeds up tasks. And so instead of now, by the way, if you are in a job where you have a very limited number of tasks, then that job is at risk.
[00:29:15] Guest: Conor Grennan: It just is. I mean, I don't want to sort of throw something like a paralegal under the bus, but if your only job is to summarize long documents, you know, that job is at risk, because that task is at risk. And by the way, when I say at risk: when companies hire you, they are hiring you and your brain. The best data lake we have out there is your own brain. And so by itself, ChatGPT and other tools like this, they don't do anything. They just sit there. They need you. I always say, if I was going up against a marketer, if you said, hey Conor, 20 minutes, come up with the best marketing plan you can, you can use ChatGPT, it would be amazing. It would be really, really good. I'd be like, oh my gosh, look at this, like a five year old who's brought home the, you know, pottery ashtray to his parents. Look how beautiful this thing is. But then, on the other hand, if you give it to a marketer and say, 20 minutes, do the same thing, it's going to be way better. Clearly. Why? Because they know instantly, very quickly.
[00:30:05] Guest: Conor Grennan: That's bad. That's good. I need more of this. I need, you know what I mean? They know quality. Glenn, you and I have friends, right, kind of mutual friends, who are phenomenal at gen AI, like AI generated art, things like that. Right. And why is that? Because they're great at that already. They understand already what art looks like and what art takes. So I guess all that is to say that when people are worried about jobs and things like that, it's a totally understandable thing. What I will say practically, though, is, folks, anyone within the sound of my voice: this lane of being the generative AI person in your company is still open. I guarantee it. If you can say, hey team, here's how I'm thinking about it, here's how, I don't teach in use cases, but in this case it's really helpful, here's the use cases that will speed up our process. Put out a deck, have ChatGPT help, right? Use ChatGPT to write the deck. Put out a deck, do a little lunch and learn kind of thing with the team, and get the team manager in there, and then get their boss in there, etc. This lane is open. Nobody fires the person who knows generative AI right now. That's what I would tell people.
[00:31:03] Host1: Paul Barnhurst: I have to say, one thing that I love that you said there, you know, beyond just the tasks and nobody fires the person who knows generative AI, is that the good talent is going to be okay. Right? If you do a really good job at your work, they're not going to just fire you because, oh, hey, we have generative AI. So I love that, because people get so worried about that. And then they get so concerned, like, oh, it's just going to take my job. And it's really all about efficiency. So I really liked your answer there. There was a lot of good information.
[00:31:33] Guest: Conor Grennan: I think that's right. And then to build on what Glenn was saying, though, it's really going to help you. And so a couple of things are happening there at once. Right. It's sort of like, you know, pre-Excel, you could have the best accountant in the world, but then if somebody else comes along and has Excel, they are going to look a lot better, or they're at least going to work way more efficiently and, you know, just faster and probably at higher quality, because they have Excel. And kind of to Glenn's point, if you have this, it's going to speed you up. And one of the reasons I say that I always start with senior leadership teams, instead of just the teams I used to work with, is now I say you have to have either a senior leader involved, or I have to work with your senior leadership team first, because you have to set benchmarks. Because otherwise it throws off, you guys have seen this, right, it throws off talent evaluation like crazy. If you have a ten person team and two of them, two who were sort of, you know, on the verge of getting fired, are the ones using generative AI, all of a sudden those two are amazing, when in fact, if you just had everybody trained on it, you'd see, oh, they're not amazing. The tool is amazing. So it really throws off talent evaluation, too. That's why everybody has to be using it.
[00:32:34] Host1: Paul Barnhurst: Great point. And one other thing I want to add that I really like you said is, I hear from so many people, I don't need to learn that anymore. Like there's going to be no need to learn Excel functions, or why would you want to learn any Python or DAX? And I argue those that at least know the basics, know enough, are going to be more dangerous with the tool, because you can validate if it's wrong, you can push the envelope. I think, you know, Glenn's experiences, having kind of learned a lot about AI before he came to it, those experiences helped push the envelope. Wouldn't you agree, Glenn? Would you be able to do as much if you hadn't had the degree and the experiences you've had?
[00:33:08] Host2: Glenn Hopper: No, not at all. And I think on that note, one of the first things that I did that just made me have to get up and walk away: early on, this was the 3.5 version of ChatGPT, I got it to write Python code for me. It wasn't great at writing code back then. If I didn't know, you know, how to QC it a little bit, I wouldn't have been able to do it back then. Now, though, it writes way better code than I do. So the automation is there. And even Python's integrated into Excel now. Great. A lot of Excel people get Python, but they're very slow coders while they're doing it. If you can just say, this is what I need, you can prompt it better, not because you have a canned prompt, but because you know what to ask, and then get those functions much, much quicker. And then, as far as, like, if I have to do something for more than five minutes a day, on multiple days, I'll spend two hours automating it. And if you have this understanding of code or whatever, you know, whatever your domain expertise is, you can figure out how to prompt it and how to check it and know that it's right, so that you're not just throwing something over the wall for an AI to solve for you.
[00:34:14] Guest: Conor Grennan: This is a great point. And it's why, when people are like, oh, is this going to take my job, I'm like, again, it doesn't have to. Now look, there are some more vulnerable things like call centers and things like that, but it's not going to take your job if you are using this, because you will stay ahead of everybody else. There's a statistic that came from the Jagged Frontier study, you know, the one that Ethan Mollick talks about and was involved in last year, where there's a 43% improvement in work by lower skilled workers and a 17% improvement by higher skilled workers. And in the early days, when I would show that study, the higher skilled workers in the room would be like, well, see, they're going to catch up. I'm like, okay, well, that's not how math works. But more importantly, this is where it's important that you use your skill set and bring it in. When you're running code interpreter or something like that, it's running Python code, but I don't know whether I can trust it or not, because I don't know Python. So it's one of these things where it's just way easier for a data analyst to quickly check the work, in the same way that me as a writer, I can see, oh, this writing is actually really good, where somebody who's not a good writer can't see, oh, that's not good writing. So yeah, these are still essential skill sets.
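As an illustration of the kind of repeatable, five-minutes-a-day task Glenn describes automating, and output that someone who knows the basics can actually check rather than trust blindly, here is a hypothetical sketch; the folder layout, file names, and column names are invented.

```python
# Hypothetical example of a small, checkable automation: combine monthly
# regional CSV exports into one file and print a control total per region.
# Folder layout and column names are made up for illustration.
from pathlib import Path
import pandas as pd

export_dir = Path("exports/2024-09")      # e.g. one CSV per region

frames = []
for csv_path in sorted(export_dir.glob("*.csv")):
    df = pd.read_csv(csv_path)
    df["region"] = csv_path.stem          # keep the source visible for checking
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# A quick control total makes it easy to tie the output back to the source
# files, the "know enough to QC it" point from the conversation.
print(combined.groupby("region")["revenue"].sum())
combined.to_csv("combined_revenue.csv", index=False)
```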
[00:35:22] Host1: Paul Barnhurst: Totally agree. So I want to shift gears here a little bit. And next I want to ask you a little more about your course. You just launched your course for generative AI for professionals. What was that experience like. What are you hoping people get out of the course? Maybe just tell us a little bit about that.
[00:35:37] Guest: Conor Grennan: Yeah. Thank you for asking. You know, I put it out kind of recently. It was obviously a labor of love, like a lot of these are. And I'll bet you guys ran into this too: there's never a problem of having enough to say. It's, what do you cut down? So I cut down to 4.5 hours. But don't worry, it's like 67 lessons, four minutes each, all that kind of stuff. Lots of demos and all that kind of stuff. But what I really wanted to do was just codify this all in one place for people and have it accessible, because I get hired by companies a lot, but a lot of people don't have access to that, you know, sort of being in a workshop or something like that. And so I wanted to have something for everybody. So it really goes through this whole thought framework, this AI mindset framework, which is learn, execute, strategize, just a different way of thinking rather than what's the use case for sales, what's the use case for that. Those are valuable if you're in those roles, but not until you really understand what this thing is.
[00:36:29] Guest: Conor Grennan: It also forced me really to codify for myself, well, what is a large language model? What is machine learning? All that kind of stuff, which I knew basically. But to your point earlier, knowing that stuff is helpful. It helps you to understand why it's hallucinating. It's not pulling from the wrong book in the library. That's not what it's doing. It's statistically thinking, this is probably the next word: the San Francisco Bridge moved to Connecticut in 1863. It almost makes sense, right? But it's just wrong. So understanding those things, and I take it from a really non-technical standpoint. And I just have tons of the prompting frameworks, and I don't mean prompts, I mean frameworks, like this Hi, Thanks, Great model or this Bezos in the corner model, again, ways to think about prompting. And again, it's a pretty robust course, but it was fun to put together.
[00:37:17] Host1: Paul Barnhurst: Awesome. That's really cool. I appreciate you sharing that. I did see that on LinkedIn and I thought, hey, I may have to find some time for that. It looks like it'd be, you know, kind of fun to go through. So thank you for sharing that. And then I want to ask you: you're the author of a book called Little Princes, and it's about the lost children of Nepal. So can you talk a little bit about the book and that experience? We'd just love to learn a little bit about that and touch on something very different from what we've been discussing so far.
[00:37:46] Guest: Conor Grennan: Yeah, that's right. When I say that I don't have a tech background, my background is in, like, Nepalese orphans, right? I mean, I graduated university, I graduated UVA, and then I went abroad for the next 11 years or something like that, working in international public policy in Europe. And then I went around the world, and I ended up in this orphanage in Nepal, right after the war, actually, when the war was still going on, I guess, the civil war. And yeah, just a lot of crazy stuff happened, you know. I ended up finding these kids and then losing them and then going to search the mountains for them. They turned out to all be trafficked kids. Then I met my wife, who's American, but I met her out there. And I'm a writer, so I was writing all this as I went, and I'm like, oh, this is kind of a crazy story as it's going. And then I wrote a blog, and then an agent just sort of found that blog and she's like, hey, we need to make this a book.
[00:38:30] Guest: Conor Grennan: And it was kind of very right place, right time. So it got kind of big, and yeah, that was a big part of my life. I just took my son to Nepal basically last year again, and Nepal is still a big part of our life for our family. So it was just a funny thing. I never meant to write the book or anything like that. It just sort of, you know, I'd already written it in the form of blogs, and we basically just took that, made it a book, and, you know, it turned out to be kind of like the story that people wanted at that time. I'm not a great writer, I gotta admit. I write at like an eighth grade level, but I've done it enough that I know it's entertaining at least. But I think that was appealing for, like, college students who are like, oh, this guy sounds like he's, you know, not that great a writer. He kind of sounds like me. Right? So that was helpful, too, I think. And it was quite the adventure.
[00:39:11] Host1: Paul Barnhurst: It sounds like some amazing experiences there and where you were really able to give back and help, you know, try to make a difference for those children, which I'm sure you did. I look forward to reading the book. I haven't read it. I learned about it preparing for this interview, and I'm like, I'm gonna have to pick that one up. So I'll definitely give that a read. So thanks for sharing that. So we just have a few minutes here left. But we do want to get two more questions in. Well Glenn and I will each do one. So what we have is we use ChatGPT to come up with 25 questions. And we each take a little bit of a different approach. So I'll go first. Here I have these 25 questions generated to get to know you a little better by ChatGPT. And you have two options you can pick a number between 1 and 25, or I could use the random number generator to pick a number between.
[00:39:55] Guest: Conor Grennan: No, I want to keep the human in the loop here. No, I get to pick the number. I'm going with number 12.
[00:39:59] Host1: Paul Barnhurst: You know, you are the first in all episodes to pick the number. Everybody has said random, so I love it.
[00:40:06] Guest: Conor Grennan: No. Too much AI in this world.
[00:40:07] Host1: Paul Barnhurst: All right. So number 12: what is a failure or setback that turned out to be a blessing in disguise?
[00:40:15] Guest: Conor Grennan: Oh, wow. Wow.
[00:40:17] Host1: Paul Barnhurst: We'll give you a minute on that.
[00:40:18] Guest: Conor Grennan: So many. No, I don't even know which one to choose. I can pick a minute. I think one is trying to sort of, like, get a job when I was probably 28 or 29, and that job falling through, and I'm like, you know what? I'm just. I've saved a lot of money. I'm living in Prague or Brussels at the time or something. I'm like, I'm just going to take my money and I'll forget that job. I'm going to go around the world for a year instead of taking, not taking the job that I didn't get. And that led to me going to Nepal. It just like changed my life, kind of going around the world. So and that's happened over and over again for me, anything that I haven't gotten has turned out to be the biggest blessing. And so, I mean, and for me, my faith is important to me. So like but I came to faith late in life too. So now I'm looking back. I'm like, oh, that was apologies. I don't mean to sort of proselytize on your podcast, but for me, I'm like, oh, God was actually steering me in these very different ways that I didn't see. So looking back at, I'm like, oh, right. That's what I was doing. So I really feel like everything happens for a reason and they're all good reasons.
[00:41:10] Host1: Paul Barnhurst: Go ahead and proselytize. That's fine. Yeah.
[00:41:12] Host2: Glenn Hopper: So it's funny, the human in the loop thing, because no one else has kept the human in the loop. My different approach is, so right before the show, and today it was really like you were in the green room, we were about to go on, and I looked at Paul and I said, I didn't come up with the questions. So I went this week to Claude instead of ChatGPT, and I used the Opus model, and I generated these questions. Normally what I do is I spit out the questions and then I just say, give me one of those questions. But now, since you're keeping the human in the loop, I've got them numbered here. So I feel like we should keep that going. So I'm going to have you pick another number between 1 and 25. But we reserve the right, I don't know, I feel like if we land on a bad one, because I didn't even QC these things before they came out, if it's a dumb question, I'm going to pick one.
[00:41:53] Guest: Conor Grennan: Agreed. Why don't you choose between 6 and 7? You choose whichever one you want to. Seven.
[00:41:59] Host2: Glenn Hopper: I'm going to tell you what they are and then I'm going to go, I'm going to just override.
[00:42:03] Guest: Conor Grennan: I'll do rapid fire. Yeah. Go ahead. Let's see.
[00:42:05] Guest: Conor Grennan: I know myself well, though. Do you know what I mean? And that's taken, you know, decades, obviously. But I have recognized in the last ten years or so that I'm a huge introvert, whereas my wife's an extrovert. So I kind of thought that I was like the big party pooper. So I think for me, when I get stressed and all that kind of stuff, I know that sometimes it comes from that. I act like an extrovert all the time, but left alone, I'm not sure I would see humans for weeks at a time, you know. So I think just knowing myself and knowing that's where I need to be. I can relax and all that kind of stuff if I'm just in this little sanctuary of mine by myself. So it's not one thing. It's just that knowing how I react to things is super helpful.
[00:42:44] Host2: Glenn Hopper: Excellent. And human in the loop. I like that.
[00:42:46] Host1: Paul Barnhurst: You know yourself.
[00:42:47] Host2: Glenn Hopper: You may change the trend now, as people listen to this and say, man, things went pretty well for Conor; we're going to have to take power back from the AI overlords.
[00:42:56] Guest: Conor Grennan: I love it.
[00:42:56] Host1: Paul Barnhurst: I think we got a bunch of great one-liners here: the prompting barf comment, the pet peeves we've aired. Very professional, very professional. We're going to have some fun short videos from this one. So thank you so much for joining us, Conor. We've really enjoyed it.
[00:43:11] Guest: Conor Grennan: Love seeing you guys. Thanks again for having me.
[00:43:14] Host1: Paul Barnhurst: Thanks for listening to the Future Finance Show and thanks to our sponsor, qflow.ai. If you enjoyed this episode, please leave a rating and review on your podcast platform of choice and may your robot overlords be with you.