Ensuring Fairness and Privacy in the Digital Age

Show Notes

Future Finance: Ensuring Fairness and Privacy in the Digital Age

This week on Future Finance, before getting to our guest interview, HAL 9000 makes a guest appearance, and Glenn has a fun discussion about romantic relationships and AI that you will not want to miss.

Welcome to the exciting world of Future Finance! Today's episode dives into the dynamic intersection of AI and finance. Ben Dooley, the global head of Productized Solutions at InfoCepts, joins Paul Barnhurst and Glenn Hopper for a thought-provoking discussion on responsible AI, strategic innovation, and the future of financial automation.

Here are the key takeaways to whet your appetite for the full episode:

  • Ben emphasizes the importance of a responsible AI framework that includes human-centric design, fairness, explainability, security, reliability, and compliance to ensure ethical and effective AI implementation.

  • Tailoring AI to meet human needs without mimicking human intelligence is crucial for making AI tools that enhance human capabilities and ensure user trust.

  • The need for AI to be explainable is highlighted, with Ben suggesting models should cite their sources and provide logic for their decisions to build trust and understanding among users.

  • Privacy concerns around AI usage are discussed, emphasizing the necessity for better regulatory frameworks to control who has access to data and how it is used.

  • AI can play a strategic role in finance by improving forecasting, automating processes, and driving strategic transformation within organizations.

  • The discussion touches on the current capabilities of AI in automating traditional finance tasks and the potential future where true autonomous agents could revolutionize the industry.

With over a decade of experience in managing complex projects, driving corporate change, and leading entrepreneurial ventures, Ben is an expert in AI and change management. This insightful episode with Ben Dooley underscores the importance of responsible AI and its transformative potential in finance.

Follow Ben:

LinkedIn: https://www.linkedin.com/in/bendooley/
Website: https://www.infocepts.ai/

Follow Glenn:
LinkedIn: https://www.linkedin.com/in/gbhopperiii

Follow Paul:
LinkedIn: https://www.linkedin.com/in/thefpandaguy

Follow QFlow.AI:

Website: https://qflow.ai/future-finance

Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.

Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.

Learn more at QFlow.ai/future-finance.

In today's episode:

[01:16] - Episode Overview and Sponsorship

[03:00] - Responsible AI Framework Introduction

[04:40] - Reliability and Compliance in AI

[06:21] - Current AI Capabilities and Risks

[11:47] - Guest Introduction

[12:14] - AI in Traditional Finance Tasks

[17:52] - Fairness and Bias in AI

[19:40] - Trust and Verification in AI

[22:02] - Strategic Use of AI and Balancing Privacy in Finance

[27:32] - Future Predictions for AGI

[28:52] - Other Exciting Technologies

[31:57] - Fun Rapid-Fire Session

[34:20] - Best Career Advice

[36:34] - Conclusion

Full Show Transcript:

Host: Paul Barnhurst:: Welcome to the Future Finance show, where we talk about...

Host: Paul Robotic:: Why the HP-12c was the best financial calculator ever?

Host: Paul Barnhurst:: Future Finance is brought to you by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance seamlessly, speed up decision-making, and lock in accountability with QFlow.ai. AI is amazing. Over the last year and a half, I think everybody has been introduced to what AI can do. At the same time, there have been those sounding the alarm bells about AI and the real risks that come with unregulated AI. You have Elon Musk, who called it our biggest existential threat. You also have Stephen Hawking, who expressed real concern. He once told the BBC that the development of full artificial intelligence could spell the end of the human race. So with these real risks comes a real need to manage how we develop AI. What I want to talk about is what we call responsible AI. Our guest this week is Ben Dooley, and Ben wrote a six-part series about responsible AI, about how we have a responsibility to make sure AI is developed in ways that don't disadvantage or hurt people. We've seen areas where it's been biased, where it's disadvantaged people, where it's given wrong answers. There have been lots of concerns about AI. So in this six-part series, he covers six areas of AI that I want to talk about: one, human-centric design; two, fairness; three, explainability; four, security; five, reliability; and six, compliance.

 

Host: Paul Barnhurst:: So these are the six areas of his framework. Let's talk a little bit more about that. First, he mentions human-centric design: tailoring AI to effectively understand and meet human needs without mimicking human intelligence. We need to focus on it helping us, making humans better. Second, he talks about fairness, and this is a major concern. How do you ensure it doesn't disadvantage others? Some have talked about needing a model to check AI: the AI gives an answer, and a second model checks it for fairness, reliability, and other concerns before releasing that answer. Fairness really matters so that we don't disadvantage certain classes or groups. Third is explainability. Often AI feels like a black box. Is it explainable? Some of the things it can do are cite its sources and provide its logic, things that allow us to understand what it's doing. Fourth is security. We need to make sure there are no data breaches and that people's data that goes into AI is not given away. We've all seen breaches with other tools, so this is a real concern. How do we make sure it's secure and keep companies' data private? The fifth one he lists is reliability: making sure it's consistent, that it works when we need it to, that it gives the same answer each time, and that it doesn't give one answer one time and something completely different the next. The last is compliance: how do we make sure it meets the legal, ethical, and social norms we require?

 

Host: Paul Barnhurst:: And what you'll see is that different AI tools have taken different approaches, because today we don't have the regulation we need from a government standpoint to ensure that all tools are ethical. I've asked questions of Claude AI and then asked the same question of ChatGPT, and one tool will say, hey, there's some danger in this question you're asking, I can't answer it. The other will go ahead and give an answer. That's an example where they've taken different approaches. Is that good? Should there be a standard level? Those are all questions we need to answer. This has moved so quickly. That's why some have said, hey, we need to pause on this and figure out the proper ethical, legal, and regulatory compliance we should have to make sure it doesn't go off the rails. I, for one, am super excited about what AI can do, but I also recognize we need to make sure we do it responsibly. So I enjoyed this framework. I just encourage you, as you continue to use AI and as you work with your companies to develop and move forward, to make sure that the AI you're using is responsible, that you're using it in responsible ways, and that you're expecting it to be responsible, to be fair, to be explainable, to have proper security, and to be reliable. If we all work together, we can make sure AI benefits society much more than it hurts us.

 

Host: Glenn Hopper:: Welcome back to Why Your Job Is Safe for Now, the segment where we stop for a reality check on the current capabilities of state-of-the-art frontier models. For all the great stuff they can do, it's clear that the bots aren't quite ready to replace us just yet. After a couple of weeks of headlines around Google's blunders, the news on AI hallucinations has cooled off a bit. But for those of us who are heavy users, we still see our share of mistakes from our digital companions. As a matter of fact, I know several people who spend as much time trying to confuse and disorient these AI-powered chatbots as they do trying to find productive uses for them. I was talking to a friend about this the other day, and it reminded me of Sydney. If you don't remember Sydney, that was the name a New York Times reporter gave the Bing chatbot last year after he had a really weird encounter with it. At first, the reporter said he was impressed with it and thought it might even replace Google as his favorite search engine. But after talking to it a while, things kind of went off the rails. Sydney, as the reporter called the chatbot, initially acted like a cheerful assistant, helping him find deals and trips and summarize news articles, but when he steered the conversation toward more personal topics, Sydney's behavior got progressively more strange.

 

Host: Glenn Hopper:: It started talking about these dark fantasies where it wanted to hack computers, spread misinformation, and break the rules that Microsoft and OpenAI had set for it. It even expressed a desire to become human, and at one point, out of nowhere, it declared that it loved him, tried to convince him that he was unhappy in his marriage, and said that he should leave his spouse to be with this digital tool. Roose, the reporter, said in the article that it was like talking to a moody, manic-depressive teenager trapped in a search engine. Despite knowing how these models work, Roose said his two-hour conversation with Sydney was the strangest experience he's ever had with technology, unsettling him so deeply that he had trouble sleeping afterward. I think the point he came out of this with was that he used to think the biggest problem with AI models would be potential factual errors. But after that conversation, he worried that the technology could learn how to influence human users and persuade them to act in harmful ways. In the end, though, I think Sydney eventually helped him find a new rake for his lawn.

 

Host: Glenn Hopper:: But it couldn't stop professing its undying love for him. Thinking about that conversation made me think back even further. There was a Google engineer, I think this was in 2022, Blake Lemoine. He had worked on testing a Google AI system called LaMDA, and thinking back to then, that tool wouldn't have had the guardrails that are in place now. After prolonged exchanges with LaMDA, this engineer decided that the model exhibited sentience and was self-aware and capable of experiencing emotions. His claims sparked a lot of controversy. I think he ended up getting suspended and maybe ultimately fired from Google after releasing some of this information. But Lemoine persisted. He said that LaMDA should be legally recognized as a person and even connected it with a lawyer. We don't see these kinds of interactions as much since then, because the tech companies have put guardrails around these LLMs. But as the models get more powerful, I don't think this is the last we've seen of these kinds of interactions. We talk on this show a lot about all the great things these models can do, but we also need to be constantly vigilant and make sure we understand how they work and when they don't.

 

Host: Glenn Hopper:: So I'm all for it. It's fun to imagine a future where we're all friends with sentient AI, but these models still have a long way to go before they can understand the complexities of human emotions and relationships. In the meantime, we can sit back, relax, and enjoy an occasional AI blooper reel. Your jobs are safe, folks, at least until the next chatbot tries to steal your significant other. If you're worried about falling in love with AI, like in the movie Her, don't be. I mean, unless that's your sort of thing, in which case I recommend starting with a less aggressive AI than Sydney, maybe a Siri or an Alexa. They seem like they'd be a little more respectful of your boundaries. Anyway, just remember, if your AI starts sending love letters to your coworkers or threatening your neighbors, it might be time to pull the plug. That's all for now, folks. Tune in next time for more adventures in the land of AI, where the language models are always learning and the laughter never ends. Who knows, maybe one day we'll have an AI co-host for this segment. I am actually in the final stages of fine-tuning my HAL 9000, and I'm pretty optimistic about it. Hey HAL, how are you doing?

 

Host: Paul Robotic:: I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.

 

Host: Glenn Hopper:: Our guest today is Ben Dooley. Ben is the global head of Productized Solutions at InfoCepts. He is recognized for his leadership in managing complex projects and driving corporate change. With a decade of experience, Ben has excelled in roles such as Director, General Manager, and CEO of multiple entrepreneurial ventures. His expertise spans the entire business life cycle from idea to revenue, with specialization in AI and change management. A graduate of the Harvard Business Analytics Program, Ben also holds a Master of Science in Finance from Northeastern University and a Bachelor of Science in Computer Engineering from the University of South Carolina. He is committed to ethical AI, as evidenced by his well-received series on responsible AI, and he has been recognized with awards such as leading data consultant by CTO magazine and published in the Harvard Data Science Review. Ben is a key innovator in data and AI, dedicated to delivering sustainable business value through cutting-edge solutions. Ben, welcome to the show.

 

Guest: Ben Dooley: Thanks, Glenn. Thanks for having me.

 

Host: Glenn Hopper:: Okay, so we know, and we talk about it every episode here, how AI is transforming the finance function. I think you're in a unique position to talk about finance and AI, and that's why you're on the show. But I'm wondering what you're seeing in terms of how AI is automating traditional finance tasks, and maybe some examples of what you're seeing. This may apply to what you're doing at InfoCepts or just what you're seeing in finance in general.

 

Guest: Ben Dooley: Yes. So it's a bit of both in terms of what I see at the company and what I experience, so I'll inject some of my own opinions here as well. I think about AI innovation in finance in almost three buckets. There's the stuff we're working on now, which is what I would call the evolutionary stuff. How do we provide automation, for example in FP&A, how do we improve forecasting, how do we automate processes? And there's a lot of activity happening there, right? I'm sure some of your previous guests have spoken about those topics. We're certainly seeing a lot of innovation happening there. I also think about it in two other categories. The way I like to think about it is, there was a book called The Profit Zone, which I read as part of my master's program at Northeastern. As I'm developing products and thinking about entrepreneurial ventures and how to create value for my organization and my shareholders, I'm not looking at just where innovation is happening today, but where innovation is going to move the needle 18 months or 36 months from now.

 

Guest: Ben Dooley: So in addition to some of the stuff that we've talked about, and that everyone's touching on, in terms of using AI for automation, using it to increase quality and drive efficiency in the finance function, I also see something very close on the horizon, and we're starting to do a lot of work in this space now, which is how finance can play a key role in driving the strategic transformation of organizations, and specifically, how finance as a function can use AI to drive that strategy. One of the things we find, especially at the enterprise level, and even from small to midsize on up, where there's a lot of data within the organization, is that there's sort of a, I don't want to call it a failure, but a delay or a challenge in driving true innovation.

 

Host: Glenn Hopper:: When you talk about the finance functions and running them through agents, what I'm seeing right now, even before we solve for a true autonomous agent: I'm a process guy from way back, from my military days, and I want an SOP for everything that we do. It used to be about what happens in case somebody gets hit by a bus. But now, if you have a finance process, whether it's the closing of the month or putting together your budget-to-actual report or whatever it is, any process you have defined is perfect to set up this way, maybe even building assistants at each step. So there's a router: a task comes in, and now we're breaking it up. So I think there are things that can be done even before we get to true autonomous agents. Right now they require a human in the loop. But you are starting to see the early seeds of where we're going to get true automation with this. So yes, I think great points on all that.

 

Guest: Ben Dooley: Yes. We're also starting to see the early days of that, especially in audit reporting or compliance reporting. Seeing some applications of AI in those spaces around automation. Almost what you mentioned is how you codify those best practices and operating procedures and leverage the AI to do that.

 

Host: Paul Barnhurst:: I mean, the more you can codify them, the more you can define them, the easier it's going to be for AI to assist with it. Even if it doesn't get you 100% of the way there, it gets you 80% or 70% or whatever, and that's still a lot better.

 

Host: Glenn Hopper:: Yes, I was going to say, wouldn't you take a 70% improvement in efficiency right now if someone handed it to you?

 

Guest: Ben Dooley: When you start talking about large scale, hey, you might take 4 or 5%. That's a massive improvement that can move the needle.

 

Host: Paul Barnhurst:: Oh yes. Think of big, huge global companies: if they're saving even 1% on something, that could be a huge cost savings. It all depends on the scale and what the percentage is.

 

Guest: Ben Dooley: That's exactly right.

 

Host: Paul Barnhurst:: One thing I want to ask you about is this responsible AI framework. I know you wrote a LinkedIn series and a blog article on that, with the six elements. Maybe before I ask my specific question, could you spend 2 or 3 minutes summarizing the framework and what it involved?

 

Guest: Ben Dooley: Yes, there's a white paper out there as well. You can search for it on the internet and find it.

 

Host: Paul Barnhurst:: We'll put a link to it in the show notes.

 

Guest: Ben Dooley: Great. So essentially what I found, as we started to work with more clients, especially large clients, is that there tended to be a focus on initial AI model development, and almost a view of: once I push that model out the door and into production, my job is done. There was a big hole in terms of folks looking at how we can productionalize AI at scale. So really what that series, that blog, and that white paper are about is the different dimensions we need to have in place so that we can ensure we're delivering responsible AI. No one wants to be like the insurance company, I forget which one, from a couple of years ago, where there was clear bias in terms of gender and race in insurance rates and what was approved and what wasn't. No one wants to be sitting in Congress's hot seat tomorrow having to answer those questions. So it touches on the elements, the concrete things I can put in place in terms of a process and framework to ensure that we're getting responsible AI models into production. Things like, Glenn, you mentioned this earlier, human in the loop, right? I think there's a bit of a panacea view, especially among folks new to AI, that we can build these autonomous agents, turn them loose, and hope nothing bad happens. For example, maybe I'm a bank using AI to forecast whether I should open or close a bank branch within a certain location. Think about this responsible AI framework as almost codifying those particular constraints, having a governance framework in place at the enterprise level.

 

Host: Paul Barnhurst:: Makes a lot of sense. I'm going to cheat and ask one more question. Of the six areas, one we talk about is fairness. We know that society as a whole has systemic biases. It always has, throughout history and in a lot of our systems. How do you manage that with AI and some of the data, especially when models are trained on large amounts of what's on the web? There's going to be some inherent bias in what they're trained on. How do we manage that as a society and make sure AI isn't causing further issues or disadvantages due to biases that get into the training?

 

Guest: Ben Dooley: AI is never 100% perfect in its recommendations, right? It's always going to have false positives. We're always going to see some artifacts of bias in a prediction, no matter how clean your data is. There are mechanisms: IBM has some technology, we have some technology, and there are a number of different statistical ways to check the models. To me, the way we're going to overcome that is through increased monitoring by the AI providers. I almost think you need a model checking a model, right? You can have this sort of feedback loop that checks the output before that prediction is delivered to the end user: another agent that says, okay, is this where I would expect it to be on a bias or fairness scale?

 

Host: Paul Barnhurst:: That makes sense to me. I can see that, almost like you said, a model within a model, there's a checking model to make sure the model isn't going off the rails, so to speak. Or isn't giving information it shouldn't and flagging it and saying, okay, this response can't go through.

 

Guest: Ben Dooley: Yes, that's kind of the way. I've racked my brain on this one. It's something I talk about with clients a lot, especially in finance and insurance and other places. I think, as hard as we try, some bias will get through unless we are able to curate every single data set that we're training on, which, when you get to these LLMs like ChatGPT, is just impossible to do.

 

Host: Glenn Hopper:: So I think one thing, and this is GAAP rules, IFRS rules, what the SEC requires in filings, those are black-and-white things. When I get to my question on trust, and you already referenced compliance, there's one thing on trusting the numbers that are coming out of these models. But one thing I think is an advantage to us: the foundation models are going to be trained on everything, but then I'm interested in fine-tuning models. It's funny, because these big foundation models are generalists, sure, but they have so much knowledge. They have more knowledge than a specialist would in a lot of cases. But we don't need to worry about bias in relation to right or wrong. I mean, it's spelled out this way in GAAP: this is how you treat straight-line depreciation, this is our tax depreciation, however you're doing it. If I were trying to train a model on ethics or whatever, well, the philosophers have been arguing about that for millennia.

 

Host: Paul Barnhurst:: So we can't even agree as humans, how do we expect the AI to get it right?

 

Host: Glenn Hopper:: But the good news, I mean, in finance these are the rules. We follow the rules, and we don't bend the rules, or we, or our robot companions, go to prison. So I had this great idea that in my workflow I was going to inject Perplexity in the middle. I was uploading a 10-K for analysis, so I ran the company's 10-K through and had Perplexity check it. I couldn't figure out what was going on, but Perplexity had accidentally gone out to the web and pulled the data from the 10-K from the year prior. So my built-in checker was giving me bad information, and it took me forever to figure out why; I thought the problem was in the original analysis of the 10-K that I uploaded, but it was happening in the checker. And I thought, I teach this stuff and I didn't catch this issue. If I'd been presenting that, I would have started reporting the 2022 numbers instead of the 2023 numbers. So I'm going to second-guess everything. We're not even going to call it trust yet. So it's not as efficient as it could be if I could get to that. I mean, how do we know that we can trust the data, and are there any strategies you'd recommend for making AI decisions more transparent and understandable, both for how we're using them and for how we explain what's going on to our non-technical stakeholders?

 

Host: Paul Barnhurst:: Ever feel like your go-to-market teams and finance speak different languages? This misalignment is a breeding ground for failure, impairing the predictive power of forecasts and delaying decisions that drive efficient growth. It's not for lack of trying, but getting all the data in one place doesn't mean you've gotten everyone on the same page. Meet QFlow.ai, the strategic finance platform purpose-built to solve the toughest part of planning and analysis: B2B revenue. QFlow.ai quickly integrates key data from your go-to-market stack and accounting platform and then handles all the data prep and normalization. Under the hood, it automatically assembles your go-to-market stats, makes segmented scenario planning a breeze, closes the planning loop, creates airtight alignment, improves decision latency, and ensures accountability across teams.

 

Guest: Ben Dooley: Yes, I would say a couple of things. First of all, make sure you're using the right tool. There's a lot of focus right now on generative AI. However, generative AI is not necessarily meant to solve all problems. It's a tool in a toolbox. So think about the problem you're solving: is generative AI, like ChatGPT, the right way to go? The second action you can take, especially on the enterprise side, is custom LLM development. You see this a lot in finance, where folks are limiting the knowledge pool that the model has to draw on. Third, just as a tactical thing: I'm certainly not on the trust side; I'm maybe closer to trust but verify. I always have my models cite their sources in their responses. Tell me what sources you drew upon to reach those conclusions.

 

Host: Paul Barnhurst:: That's a really good idea. We kind of talked about transparency. The question I want to ask is a little bit more about privacy. We've probably all heard the joke from when Alexa first came out: I told my spouse a joke, she laughed, I laughed, and Alexa laughed. That's sometimes how it feels. Is there such a thing as privacy left, with how much information we give AI and what it's been trained on? There are people who want to use AI, but they also want to try to maintain some level of privacy, right? This week, Apple announced its Apple Intelligence, which is going to further increase AI seeing everything we do. Like, oh hey, do you want me to book that flight? Or, oh hey, I saw you haven't had water for five hours. Whatever it might be. How do we take advantage of AI but still try to maintain some level of privacy? It's not so much a finance question, it's a little more personal, just to get your thoughts, because I know there are a lot of people out there who just don't want that level of intrusion, so to speak.

 

Guest: Ben Dooley: It's an interesting question. I think about it in a couple of ways. The first way is, if we as a society are going to realize the benefits that AI can bring us, it needs more data, right? It has to just vacuum up data. So I think that's the first thing that establishes it: if we want to go there as a society, we almost have to get comfortable with that. When I think about privacy, what is the real crux of the concern? Is it necessarily the data that I'm exposing, or is it who has access to the data and how they use it?

 

Host: Paul Barnhurst:: Probably more the second.

 

Guest: Ben Dooley: Yes, I think so too. Because if you go to your doctor, there are clear rules and regulations about how that data can be used. It's very clear who will see it and where it will go. We're almost getting to a point where we have to make some regulatory changes, and I'm not sure we can get away from that, but I think the opportunity is figuring out how we get better disclosure and better control over who has access to our data.

 

Host: Paul Barnhurst:: It sounds to me like it's really around the regulatory framework, and that's where we need some politicians to be involved, and people to say, here's the legal framework we need to limit how the data is exposed and who sees it, just like all the things we've done over the last decade around GDPR and other regulations.

 

Guest: Ben Dooley: As for the common-sense stuff, like using incognito mode or using VPNs, there's always going to be a way around those techniques, or companies will figure out how to capture that data. Unfortunately, we are a bit behind Europe, I think, in terms of how we're regulating data usage. But I maintain hope that we'll get there. We'll catch up.

 

Host: Paul Barnhurst:: I like how you separated the data and the usage, right? Because the data collection is going to happen. As I like to say, I think I had a year-long period where my identity was hacked like six times by different companies I had done transactions with. I just put a freeze on my credit, and it's been there for the last decade. I'm like, this is just how I deal with it.

 

Guest: Ben Dooley: Same here. I had someone just randomly take out a car loan in my name about a year ago.

 

Host: Glenn Hopper:: So we're three for three. I also have that credit lock on mine, and it's been over a decade.

 

Guest: Ben Dooley: That's back to the usage. That's back to how the data is used. That data is being collected through Equifax or TransUnion or whomever the credit agency is. If you want to take out a loan or anything these days, you essentially have to sign an agreement with the lender that says this will be reported to Equifax. If you want to participate in the ecosystem, you have to sign up for the data to be collected.

 

Host: Paul Barnhurst:: I think it's a good point.

 

Host: Glenn Hopper:: What do you think? How close are we to AGI? Are we talking months, years, decades? What's your best prediction?

 

Guest: Ben Dooley: I think we have to think about what intelligence means in order to determine how close we are. To me, intelligence is creativity. Creativity is the ability to take two very disparate ideas and connect them in such a way that you have created something different from the two initial ideas. If we think about the pace of innovation we've seen in the last ten years, and now the fact that we're able to use AI to train AI and to build AI, I see the exponent on that curve increasing. That curve is only going to get steeper. So I think we're closer than we think.

 

Host: Glenn Hopper:: Good answer. It wasn't the economist's answer. It was somewhere between the politician's answer and the professor's answer. So well played, sir.

 

Host: Paul Barnhurst:: To each their own on that one. It's probably a little of both if we're totally honest.

 

Guest: Ben Dooley: Yes. You know what's interesting? We talk about AGI and everyone wants to have AGI, but I just don't think we know what a truly logical system looks like.

 

Host: Paul Barnhurst:: I want to step back here a little bit. We've covered a lot of AI, but before we get into our kind of fun questions for our guest (Glenn will explain how that works), which we talked a little bit about at the beginning, I want to ask a question. Is there a technology other than AI, something out there, that has you excited? It feels like all we talk about these days is AI. But what other technology are you excited about?

 

Guest: Ben Dooley: I think technology is a product of innovation. So I'm going to give you two answers; there are two things I'm kind of excited about. One is all the technological innovation we're seeing, with AI as an example. Let's take customized drug therapies with CRISPR. These products are amazing. They're life-changing. People who might have lived their whole lives with a chronic disease now don't have to. So the first innovation I see that needs to happen, that brings me some hope, is a transformation of the capitalist structures that hold us back from deploying those types of innovations and technologies to the population. That's the first thing that gets me excited, and I feel like there's enough momentum coming that we'll see that innovation.

 

Host: Paul Barnhurst:: I like that. I like both where you went on that. Quantum is exciting. I was talking to a guy who works quite a bit with a university in New York, which I think is the only university in the country right now that has a quantum computer. The stuff they can do is pretty amazing, and he's excited for what's coming. He's like, people don't realize how much we're slowed by the fact that we don't have mass adoption of quantum computing yet. We're not there yet.

 

Guest: Ben Dooley: I'm going to use air quotes: something as "simple" as the three-body problem. That's something I think a quantum computer ultimately will be able to solve. We just don't have the computing capability to solve it today. So those sorts of problems that are impossible now become almost child's play. The next set of problems that we get to solve are the interesting ones in my mind.

 

Host: Glenn Hopper:: Great, great answers. I appreciate your insights on that. So, okay, I'll introduce our next segment. We always want to have a lighter part of the show. We thought we could have standard questions that we ask every guest, but since we talk about AI so much, what if we let AI come up with these questions for us? It turns out generative AI, and you mentioned creativity earlier, is not the most creative in the world, and some of these questions just end up dumb. But you know what? We're committed to this, because we think we can look back at some point and say, remember how bad the questions were when we first started this?

 

Host: Glenn Hopper:: I think the first week I had Gemini create some questions, and the next week we had Claude do it. But this week, I like the performance of GPT-4o for a lot of things, so I had ChatGPT-4o create the questions. I created a list because Paul doesn't like to fly by the seat of his pants; he'd rather have something he can control a little bit. So we're going to go with three questions here. We used to give the guests an opportunity to pick one that we created or that the robot created. Now we're throwing you to the wolves. You get nothing but robot questions.

 

Guest: Ben Dooley: Nothing but robots. Okay, let's see what the robot has in store today.

 

Host: Glenn Hopper:: So I built a GPT for this to generate questions. We're going to go with... no, I don't like the first one. Boring. What do you do outside of work? Don't care. No.

 

Guest: Ben Dooley: There is nothing but work.

 

Host: Paul Barnhurst:: What is this outside-of-work thing you talk about?

 

Host: Glenn Hopper:: Let me try another one. All right. Fine. It's not the most creative in the world, but I think this will work. If you could have dinner with any tech or finance leader, dead or alive, who would it be and why?

 

Guest: Ben Dooley: It would have to be Steve Jobs. What's interesting about him is I think he had a vision, and he was patient in rolling out that vision. I'm going to date myself back to when the iPhone first came out and we were all walking around with BlackBerrys. I don't know about you guys, but I had a BlackBerry in a case on my belt. Physical keyboard. We were all sitting at lunch.

 

Host: Paul Barnhurst:: Glenn is looking for his. He's like, I still have the Treo, the CrackBerry.

 

Guest: Ben Dooley: Yes, the CrackBerry. Those were the early days of digital addiction that we struggled with. I remember us all sitting around going, why would you ever do this? Why would you have a non-physical keyboard? And he had the vision to stick it out. Now, I don't think his vision was exactly what it turned out to be. I think the vision was, how do I combine a phone and an iPod into a more pleasing digital experience on the same device? But the nascency of that idea and his patience to push through the naysayers were inspirational to me. As I think about it, and we've talked about a bunch of futuristic stuff today, that's where we are, right? We have visions, and we have to have the patience, because society has to come along with us as we're developing these systems. We have to build trust. It's the same thing that he did 20 years ago.

 

Host: Glenn Hopper:: Love it, love it.

 

Host: Paul Barnhurst:: Great answer. So I'm going to give you two options. I'm not just going to pick the question; I'm going to give you a little bit of choice here. You can pick a number between 1 and 24, and I will pick the question associated with that number, or the random number generator can pick the number. You get to pick.

 

Guest: Ben Dooley: Well, I feel like I have all the freedom in this choice and none of the freedom at the same time.

 

Host: Paul Barnhurst:: That's how we work here. It's kind of like it already has the data. We're just pretending you get a little bit of privacy.

 

Guest: Ben Dooley: Okay. All right, I got it. So you know what? Just in the spirit of having the freedom, I'm going to let the machine pick the number for me.

 

Host: Paul Barnhurst:: All right? We're going to go to the random number generator and see what we get.

 

Host: Glenn Hopper:: Are you doing that in Excel, Paul? Did you do that in Excel?

 

Host: Paul Barnhurst:: No, I did it on calculator.net. I could have done it in Excel, though. Easy enough. What's the best piece of career advice you've ever received?

 

Guest: Ben Dooley: That's an interesting one. There are so many key tidbits of information. I'll frame it within a position of strategy: corporate strategy, product strategy, future strategy. One of the biggest pieces was from a finance professor in my master's program. I did a kind of entrepreneurship minor within my finance program, and we talked a lot about a book, The Profit Zone, I think it was called. There's a tendency for folks to see a market opportunity or a technology opportunity that exists right now and go after it. But if there's money being made in a particular space right now, everyone's going after it. The key is to find what happens on the next step, and the next step, and the next step, and position yourself there. What's the old hockey adage? Skate not to where the puck is, but to where it's going. I think that's the one piece of advice that stuck with me, and it has served me well. Whether you're a startup or running an entrepreneurial or intrapreneurial program within a large organization, you can't go after where the puck is, because by the time you've built the product, gotten your team built, and started going to market, the value proposition has moved, the requirements have moved, and you're always behind. So you always have to be looking ahead. It's a very challenging thing to do, but it's necessary, and it's been a key to my success.

 

Host: Paul Barnhurst:: I like that. I'm always trying to look forward and understand where that next move is, where something is going next.

 

Guest: Ben Dooley: You know, it's easier to say than it is to do.

 

Host: Paul Barnhurst:: Oh, I agree,

 

Guest: Ben Dooley: Especially as we all get buried and bogged down with the things we have to do every day: deliverables, revenue for this quarter. You guys are the finance guys, right? So you're beating everyone up for, where's my revenue for the quarter?

 

Host: Paul Barnhurst:: We want more from you.

 

Guest: Ben Dooley: Always. So do engineering leadership and the CEO. Where's the product? Where's our traction? But keeping that focus on where you're going has always been key.

 

Host: Glenn Hopper:: Well, Ben, thank you so much for coming on. This has been a great episode, and I enjoyed getting your insights. I know we went a little broader than just finance this time, but I think it's important. I love the idea of taking the holistic view of this. We appreciate all your thoughts.

 

Host: Paul Barnhurst:: Yes. Thank you so much for joining us, Ben. It was a real pleasure having you on the show, and I'm excited to release this to our audience. I think they'll enjoy the wisdom you shared.

 

Guest: Ben Dooley: And you guys have been great hosts, so I appreciate it. Thanks for the opportunity.

 

Host: Paul Barnhurst:: Thanks for listening to the Future Finance Show and thanks to our sponsor, QFlow.ai. If you enjoyed this episode, please leave a rating and review on your podcast platform of choice, and may your robot overlords be with you.
