Unlocking the Power of AI for Finance Professionals With Jon Brewton
This episode of Future Finance features Jon Brewton, the founder and CEO of Data2, a company that provides AI and machine learning solutions for finance and other industries. He discusses how his company helps clients use AI to analyze data and make better decisions, as well as the challenges and risks of using AI in high-stakes contexts.
Jon Brewton is an expert in AI and machine learning, with a rich background in operations management, digital transformation, and engineering. He has worked at major energy companies like BP and Chevron, and he is a US Air Force veteran and a graduate of some of the top universities in the world. He shares his insights and best practices on how to leverage AI and machine learning to enhance financial performance and outcomes, and how to avoid common pitfalls and errors that can arise from AI applications.
In this episode, you will learn:
· What Data2 does and what its goals and principles are in AI and machine learning.
· Why traceability and explainability are crucial for AI applications in high-stakes industries like finance.
· How Knowledge Graph technology can enhance financial data analysis and provide more context and meaning.
· What AI hallucinations are and how they can affect financial decision-making.
· How to integrate advanced analytics and AI into your finance workflows and processes.
This episode is a must-listen for anyone who wants to learn how AI and machine learning can help them make smarter decisions. Listen to the episode now and discover how you can use these technologies to your advantage.
Follow Jon:
LinkedIn: https://www.linkedin.com/in/jon-brewton-datasquared/
Website: https://www.data2.ai/
Follow Glenn:
LinkedIn: https://www.linkedin.com/in/gbhopperiii
Follow Paul:
LinkedIn: https://www.linkedin.com/in/thefpandaguy
Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.
Follow QFlow.AI:
Website - https://qflow.ai/future-finance
In Today’s Episode:
[01:50] Introduction to the episode
[02:29] Opening remarks by Paul Barnhurst.
[04:54] Why your job is safe for now
[13:50] Introducing Jon Brewton and his extensive background.
[14:51] Welcoming Jon Brewton to the show
[15:31] Jon shares the story and mission of Data Squared.
[18:47] Discussing the critical features for financial institutions.
[25:11] Exploring advanced applications and their benefits including generative AI and knowledge graphs.
[34:24] Addressing AI hallucinations and how Data Squared tackles this issue to ensure reliable outcomes.
[41:41] Advice for finance departments on adopting AI technologies.
[46:28] Jon shares his most rewarding job and a cause he's passionate about.
[48:54] Final thoughts and closing remarks.
Full Video Transcript
[00:00:04] Host: Glenn Hopper: All right. I was going to say, if the recording didn't go, we could just do the whole episode without Jon. I'll say, remember when we talked to Jon and he said this? I like it, I like it.
[00:00:16] Host: Paul Barnhurst: Maybe we'll try that for an episode one time. That could be kind of fun. Not today though. I think we have, uh, commitments.
[00:00:22] Host: Glenn Hopper: Yeah, yeah. No worries. All right, I'm going to go ahead and get started, do the read-through, and then we'll be off to the races. Welcome to Future Finance. Our guest today is Jon Brewton. Jon is the founder and CEO of Data Squared, a leader in machine learning and AI solutions, with a background spanning operations management, digital transformation, and engineering. Jon has held key strategy and technology development roles at global energy giants BP and Chevron. His work has led to significant operational efficiencies, including $500 million in savings at Chevron's Southeast Asia division. Jon holds degrees in operations management and industrial and systems engineering, and an MBA from NYU Stern and the LSE, with executive education from HEC Paris, the Harvard Business Analytics Program, MIT CSAIL, and Stanford SAIL. A US Air Force veteran, he combines military precision with tech expertise. At Data Squared, Jon is pioneering AI solutions emphasizing traceability and explainability, which are crucial in building oil and gas solutions that are reliable, traceable, and explainable.

Guest: Jon Brewton: Just a little bit about the company and the background. It was started with four people, and there are three common traits in the founder pool. We either served in the military: I'm an Air Force vet, like you mentioned earlier; our co-founder, Eric Costantini, is a Marine; and our other co-founder, Chris Rohrbach, is a retired Navy SEAL commander. He did 26 years as a Navy SEAL commander, with SEAL Teams Two, Six, and Ten, and in a variety of other positions.
[00:04:06] Guest: Jon Brewton: And there's Jeff Dauglish, our chief technology officer; we worked together at Chevron for several years, specifically in digital solution development and deployment at scale within the company. So our three common traits are: we either studied together (Eric and I at Harvard, Chris and I at NYU and the London School of Economics), or we worked together (Jeff and I; Eric also has a mining background, at least in the energy sense), or we served in some capacity. So those are really the three common traits and the things that shape our perspectives and the things that we focus on on a day-to-day basis, primarily energy and defense. But what we started with at Data Squared gave us an opportunity to really start to look at how we could design something that's industry-agnostic. And the thing that we've built to this point, which we'll get into a little bit more later, is exactly that. I think it's particularly and uniquely positioned to succeed, and succeed exceptionally well, in high-reliability sectors, specifically places like engineering, defense, healthcare, and finance. These are places where the stakes are exceptionally high, and the rapid ability to analyze information can really be the difference between success and failure.
[00:05:30] Speaker1: Great.
[00:05:30] Host: Paul Barnhurst: Appreciate that overview. I'm feeling a little left out in this group; everybody but me is a military veteran. I will say, though, I did work on a Navy base for four years and I was employed by the government, but I wasn't enlisted, so that's as close as I can get.
[00:05:46] Speaker1: Yeah.
[00:05:47] Guest: Jon Brewton: No worries.
[00:05:48] Host: Paul Barnhurst: I like it. You talked about traceability and explainability. I'd love to dig into that a little bit around AI. How crucial are those features for financial institutions when implementing AI in the decision-making process? Why are traceability and explainability so important?
[00:06:12] Guest: Jon Brewton: Well, for us I think it's important for a multitude of reasons, but it's really informed and directed by the industries that we're targeting to play in most. And it aligns with our subject matter expertise, our background, and the things that we're passionate about and understand well. But these are industries that have a preoccupation with failure, a reluctance to simplify, a sensitivity to operational and environmental changes, a commitment to resilience (think of the nuclear Navy; that will ring true to Glenn), and a deference to expertise. These are cultivated traits of the environments that we need to play in. So if those are the minimum hurdles that we have to clear to operate in these spaces, and operate with scale, well, then traceability and explainability are absolutely crucial for understanding how to implement these tools and generate value out of them in a scalable fashion. I don't see how that's much different for financial institutions or financial markets. Every decision that can be made by AI should probably be understood fundamentally, so it needs to be traceable. It needs to be verified, so it needs to be trustworthy and explainable. And these are really essential for maintaining trust, and I think in general compliance, in these highly regulated, high-reliability environments. So those are really important things that I think are absolutely crucial to the environments that we want to play in and the environments that you obviously thrive in.
[00:07:45] Guest: Jon Brewton: When I think about how it applies to financial markets, I think about it in the context of regulatory compliance. We're trading, we're selling, we're acting on some of these things within these markets. There are a lot of regulations that we have to follow, adhere to, and manage outcomes against. As a byproduct of that, I would expect that traceability, transparency, and explainability are almost imperatives for how you would deploy these things, and deploy them at scale. I also look at risk management: by ensuring that we have traceable decision-making processes and explainable outcomes, financial institutions can better understand and mitigate risk. I think that really helps, from an explainability perspective, to identify potential biases in the system and to understand how things could be overfit and lead you down paths that aren't necessarily the right ones to go down at any point in time. So you need the ability to dissect this information in a way that creates an opportunity to understand it at its core, fundamental level, and why these things were generated. I think that's a bit of something that we miss. The other areas that I think are really important, just from a financial perspective, are stakeholder trust and fidelity in decision-making. We operate primarily in the engineering construct and in the government construct, and there are a couple of key environments. If you're going to enter into the kill chain, not to get morbid at all, you need to ensure that you have a very clear understanding of how these decisions, or at least these suggestions, were provided to you.
[00:09:25] Guest: Jon Brewton: What's the underlying context? How valuable or how connected is that context? How can we quantify the impact of the decisions that we may make, and any collateral damage? Same thing for contingency modeling in big, large-scale oil and gas operations. If you think about some of the projects that I managed for Chevron, some of these were up to $2 billion; they spanned five countries and had staffs of over 500 people. And if any one piece of this puzzle was designed incorrectly, it interjects a level of failure that is almost intolerable. These environments really thrive on that. And financial markets I don't see as being any different at all. As a matter of fact, I put them on the same level with the environments that I spoke about before. And healthcare is another area where, if we are going to deploy these tools, we need to ensure that they're explainable. And that's a really hard nut to crack within the market as we sit right now. These tools, commercially, as they're available, have a lot of interjected bias because of the way they work: they're statistical models. So it's, what is more likely than not to be the answer that you're searching for? And in our spaces, I'm not exactly sure that that really rises to the level that it needs to.
[00:10:43] Host: Paul Barnhurst: So you're saying your level of confidence in a kill chain has to be much, much higher than mine when I'm writing a post on LinkedIn, type of thing, right? Statistically, the bar against there being an error has to be magnitudes higher.
[00:10:59] Guest: Jon Brewton: Agreed. Yeah, I think so.
[00:11:02] Host: Glenn Hopper: And that's, you know, when you're talking about AI and machine learning and data, everybody right now wants to talk about generative AI. That's the latest buzzword, and it's the most approachable and accessible. But the funny thing is, most people who are using it don't understand what it is or how the generative AI is giving them the result. But in what you're doing, and certainly for military applications and for applications that are of great consequence, it's important that you have that explainability and understandability of the models. And I think that's something that we really need to think about. And for these kind of Johnny-come-lately people who hadn't used AI in any capacity until LLMs made it accessible to them, we're talking about completely different ball games here. But that said, there is an application for generative AI, and I'm trying to find ways. And that's one of the things that I love about this show, is having people who are at the leading edge with integration of AI into workflows and how we can apply it.
[00:12:13] Host: Glenn Hopper: And one thing I love about the show is having guests on like you, where I can ask really geeky questions and get a good answer. So I'm thinking about generative AI now. I do R&D for products in the finance marketplace, and we have to overcome a lot of the issues that you're addressing as well. In one of the workflows I'm looking at, I've tried overcoming hallucinations with RAG, and that has limited success. And I don't know if you guys are working with this now or not, but one of the areas I've been looking into lately is graph databases, knowledge graphs with RAG. I know it's not necessarily something you're doing today, but I'm wondering: how could you potentially see Knowledge Graph technology being applied to financial data, you know, to help us uncover hidden correlations or insights that traditional analytics might miss?
[00:13:13] Guest: Jon Brewton: Yeah, it is something we're doing today. I'll explain the nuance of what we're doing. Our system actually starts and ends with Knowledge Graph data models, so it is a fundamental part of our process, our workflow, and how we actually store, ingest, and structure information. For me personally, I believe that you have a couple of different variations. And like you said, I think traditional RAG methods have a good efficacy rate, and "good" is being generous: the average returns range anywhere from 30 to 60% on efficacy, in terms of how structurally fit the answers are to the things that you're asking and what's being interjected from the models. When you use Knowledge Graph data models, you almost create a roadmap for how you can structurally look at the connections in information and the strength of those connections. And so I personally believe that if we are going to be able to scale generative AI into different places (and let's just say it doesn't have to be 99.9% right; from an engineering or finance perspective, maybe 80% is good enough), well, graph can kind of get you there. So what we're doing is essentially using that same fundamental structure and the fundamentals of how you do that, but we're doing it on steroids and growth hormone at the same time.
[00:14:35] Guest: Jon Brewton: Think Barry Bonds' 73-homer season. That's effectively what we're doing. So we're using the same fundamentals and context for the process of how you get there. Graph RAG gets you from a starting point anywhere from 60 to 65%, and if you continue to contextualize those models, you can get closer to the 80% mark. So you can get a lot of fidelity out of the models right away, because you can see the inherent connections between node one and node two, and the strength of those relationships. What we're doing is essentially ratcheting up how we do that. Same principles, just ratcheted up. So we add these hyper-semantically embedded layers of information, and we also test information against every other piece of information in the data model. So essentially we prioritize data ingestion, compression, structure, and testing at the initiation of what we do with a client. We'll take their information and do that, and we have a proprietary process for how we do it. And then we start to look at enrichment: what do we understand about the domain problem we're trying to solve? What are the typical questions we'd want to ask? And how can we understand the connections of this data model in the context of those questions? That's simplifying things.
[00:15:57] Guest: Jon Brewton: But I think the other big difference in what we're doing is that usually, when you look at Graph RAG, especially from a research perspective, people are doing it on unstructured data. So people are really having a conversation about words, and these models are great with words; fundamentally, that's exactly what they're built to do. But when you start looking at structured data, or high-frequency data, or even semi-structured data, the performance of these models starts to fall apart. Our system effectively cuts through that noise. We ground everything together fundamentally, and we test everything, so that we can connect all of these different data types together in really tangible ways. Now, again, if we go back to the database model itself and understand how you can apply some of these concepts from a graph perspective, well, now we can start to do things that we weren't able to do. We can ground that model, we can effectively constrain that model, and we can hyper-contextualize that model in a way where your returns from a system perspective are far and away better than anything that you can get from just traditional RAG approaches or even Graph RAG approaches. And so we're doing something that's slightly different. And I think it really is that toggle between different data types being ingested all at once, combined all at once, and then contextualized together that gives you this level of fidelity that's almost incomparable in the market right now. So Graph RAG: yes, definitely a good concept to apply to what you're doing in financial markets, for sure. Go ahead with the follow-up.
[00:17:30] Host: Glenn Hopper: Yeah, I was going to say that, to me, it is trying to find those correlations between the unstructured, sort of word-based data and the structured data, and just how can I add value? Historically, our reporting in finance has been limited to what's in the GL. Maybe we have some exogenous factors, some external data that we're going to bring in that we can find correlations to. But if you can find customer trouble tickets that are related to a certain thing, or just pull unstructured information from your other systems, I think it's a way to really expand what we can do in FP&A by linking our data to other information sources.
[00:18:11] Guest: Jon Brewton: So on that point: we've done a bit of financial modeling in the intelligence construct. And one of the things that I think is really interesting is what happens whenever you tie financial data to other data. That could be human reporting data, it could be public data, it could be 10-K filings; you can analyze 10-Ks, and I know that you've done some work in this space looking at how you can do that with a great deal of efficacy. Sentiment analysis is another thing you can look at. And it's being able to visualize and understand the intricate web of these connections, and how they, in combination with one another (not at the same weight or with the same factors, but together), actually contextualize and drive market behavior. That's a really interesting thing to start looking at. I think the other thing that you really get from using knowledge graphs is the ability to uncover hidden connections, which can help from a pattern-analysis perspective in fraud detection. And you can build these network models, which give you an understanding of all the inputs, the outputs, the connections between these models, and the strength of those connections.
[00:19:16] Guest: Jon Brewton: And if you can do that, you can model those connections in a way that is really tangible and significant. I mean, I can't oversell this at all. That data model enables you to combine information that is just different, and by that combination you can really see trends and patterns, you can analyze connections, and you can really start to dive into proactive risk management and contingency planning and get a real comprehensive view of how market conditions influence one another. You can also do this in preventative maintenance. If you're looking at, say, grid control, and you want to understand all the external factors that actually have an impact on how grids are managed, how energy is produced, how it's stored, how you can disperse it, and what that means for your total grid management, your production, and everything else: the weather has an influence on this. So there's really no limit to the types of things that you can combine. But from a finance perspective, the data model enables a level of transparency that's crucial for audits, regulatory reviews, and making accurate and informed decisions, especially when you're talking about prediction.
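As a toy illustration of the hidden-connection idea (not any specific fraud product), here is a sketch where two accounts that look unrelated in a flat table turn out to share a device once the records are modeled as a graph. All identifiers are invented.

```python
from collections import deque

# Toy entity graph: accounts linked to the attributes they use.
# Every identifier here is made up for illustration.
EDGES = {
    "acct_A": ["addr_1", "device_9"],
    "acct_B": ["addr_2", "device_9"],  # quietly shares a device with acct_A
    "addr_1": [], "addr_2": [], "device_9": [],
}

def hidden_link(graph, a, b):
    """Return a shortest chain of shared attributes connecting a to b,
    or None if no chain exists (a crude stand-in for graph-based
    fraud pattern detection)."""
    undirected = {n: set(nbrs) for n, nbrs in graph.items()}
    for n, nbrs in graph.items():      # make every edge bidirectional
        for m in nbrs:
            undirected.setdefault(m, set()).add(n)
    queue, prev = deque([a]), {a: None}
    while queue:
        node = queue.popleft()
        if node == b:                  # walk back to reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in undirected.get(node, ()):
            if nbr not in prev:
                prev[nbr] = node
                queue.append(nbr)
    return None
```

Run on real institutional data, the same traversal would surface rings of accounts tied together through addresses, devices, or counterparties that no single table reveals on its own.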
[00:20:36] Host: Paul Barnhurst: Got it. Thank you for that answer. That was very helpful. I want to ask a question more around hallucinations, and this gets back to generative AI, but AI in general. And I see you smiling. It's been a big concern. I think in a couple of dictionaries last year, the word of the year was hallucinations, and somebody who didn't know anything about AI was like, what, drugs? I'm like, no, no, that's not what they mean. So obviously for finance, that's a huge concern, especially if you're using it for anything that goes public: releasing a 10-K, you know, releasing information to the street. So I thought maybe you could talk a little bit about how you're using hallucination-resistant agents with your platform, and how you think we'll see that technology applied in finance: fraud and anomaly detection with large data sets.
[00:21:31] Guest: Jon Brewton: Yeah. I think just to ground the discussion, it's really important to understand how these things are actually produced. Whenever we talk about hallucinations publicly, we talk about whether the output is factual or not, and whether it's interjecting things that are made up or fabricated. The interesting thing is how these models work, and I know that you guys are going to know this, but maybe the people that are listening don't quite understand it as well. Hallucinations are actually quantified in a couple of different ways: how contextually relevant is the answer that was provided? What's the token probability that led to the answer that was created? And how much entropy, meaning uncertainty, is in the model? Now, you can test these things in a variety of manners, and there are a bunch of different question types. You can say, hey, here's a simple question: what is the birth date of a person within your data model? And you can recursively run that question 100 times through the same model, then look at the variation in the answers and test the certainty or the efficacy of the returns. That gives you an entropy score and the overall token probability. So that's how hallucinations are really calculated. In general, people think about whether it's fact-based or not. And you have to really start with how commercial models are performing today. There's been a lot of great research on this lately, and one of the better papers, which I can send you, effectively dives into it.
[00:23:02] Guest: Jon Brewton: Okay, let's look at every commercial model, and let's look at how they're performing and the fact basis of their native performance, given a certain number of factors. One of these studies looked at all the different models: Llama models, GPT models like GPT-4o, the Anthropic models, Gemini models, everything. And let's just run some testing on them. The non-factual basis of all of those, in native performance, is about 73%. So what if we just take an individual document and look at it? This research paper tested 156 individual documents, just doing traditional RAG on them and looking at the answers that were provided. And only 14.7% of the returns had a factuality score better than 80%. So these models, in native performance, are actually pretty terrible, and they do interject a lot of nonsense. And the question is: is the nonsense contextually relevant to the thing that I'm doing? Who knows. You have to dive into the results on a one-off basis to look at that. But effectively, we set up a testing protocol, and we're testing all of these models recursively in the background against the things that we're looking at, and trying to ground them.
[00:24:27] Guest: Jon Brewton: We have a patent-pending approach to how we're doing that, but it effectively sets up hallucination-resistant agents that can be deployed. Now, a lot of that is actually helped by the database model and its structure itself, because we're effectively paving a roadmap for how we can deploy these things, really defining the connections between point A and point B and the strength of those connections. And overall, that cuts down on your hallucinated basis. But for us, it is a thing that needs to be managed. We've been lucky because we were able to test things, and I did some LinkedIn posts on this recently: regardless of the model that we're applying (whether it's Llama, GPT-4, Claude Opus 3, or Sonnet) or the question type that we're asking, when we engage the system through our process, we can get 99.9-ish percent results. You never want to say 100%, because these systems do still work on a statistical basis, but we do a good job of grounding them. So how does that apply to financial markets? For me, I think if we're going to do stuff like fraud analysis, we really need to be able to continuously monitor data sets and deviations from expected behavior.
[00:25:42] Guest: Jon Brewton: And if these models are interjecting false or inaccurate information, it really waters down the value proposition for something like that. Financial irregularities can be time-stamped, fingerprinted, and understood, and you can really trace back the leading factors for how you got from a point that preceded these things to the eventual outcome. And these systems are really good at understanding that stuff when given the right structure and the right grounding. I think that's really important. The other thing is early detection for preventing financial losses: how well can we predict market changes and the impacts they're going to have on the things that we're trying to do, and do that with a high level of confidence, knowing that the data and the conclusions are robust and verifiable? I see it as almost an imperative if we're going to be making predictive decisions on markets, or on what we believe markets are going to do. We need to have sound decision-making and clear transparency, and we need to know that there aren't a ton of hallucinated responses in the things that we're looking at. The other big thing, from a hallucinations perspective: regulatory compliance is obviously a piece of that puzzle, but integrating some of these solutions within secure frameworks and systems is, I believe, another area where, if we combine the right structure of information, we can get to a point where we have a lot of confidence in these systems, and we have confidence in the financial decisions that we're going to make as a byproduct of them. I just think enhanced accuracy is a big part of it, especially when you're dealing with financials, and I don't think you can get there without really ensuring that you have a low hallucinated base of returns. So I know it's a market problem, and people don't understand it very well. But when we talk about factual basis, in the work that you guys are doing and the work that the people listening to this podcast are doing, I see it as a crucial imperative for scale.
[00:27:48] Host: Glenn Hopper: Yeah, great points. And it's funny, I feel like with my first question, as soon as we had you on (and it's because when we get someone with your background and your focus, we like to dive deep), we just jumped off the high dive straight into the deep end and got down into the weeds of this. And for a lot of our listeners, this is the place they're working in; we're singing the song of their people. But for a lot of other people who are in FP&A and looking to expand and understand this, we did go right into the deep end. So I'd like to maybe bring this home at the end, now that we've thrown everybody in the deep end and made them swim on their own. If I'm running a finance department and I'm looking to integrate advanced analytics and AI, gen AI or not; if we've known that this is coming and we're not already doing work with advanced analytics, using our data, and incorporating AI into our financial processes, what advice would you have for someone in this position to start integrating and start moving in this direction, where they can be talking meaningfully about Graph RAG and all the other stuff that we've talked about on the show?
[00:29:04] Guest: Jon Brewton: Yeah. For me, it's a great question, because I think it's a scalable question and answer. Whether you're dealing in finance or healthcare or engineering, there's a certain level of efficacy that you're going to chase to make sure that you have trust in what you're doing. But I think it goes back to really looking at how you integrate advanced analytics and AI solutions into your business, and it starts with thinking about it strategically and understanding methodically how you do these things. For me, there are several clear steps, and now I'm going to talk in a very general sense, so I hope I don't sound like I'm saying boilerplate stuff, but it's really important from a first-principles perspective. You need to define your specific objectives and challenges clearly, and that's not lip service. If you're going to undertake deploying one of these solutions, or building one for deployment, whether it's improving risk management, enhancing fraud detection, or optimizing investment strategies, having clear objectives will guide the AI integration, the development, and all the things that you need to do to make sure it aligns with your business goals. I think traceability and explainability are a big part of that. These features are crucial for maintaining our understanding of these AI systems: ultimately, what they're being fed and what's coming out of them, but on a more fundamental level, how that stuff is connected. And the value of those connections really gives you an inference into how you can do some predictive modeling around these things.
[00:30:35] Guest: Jon Brewton: For me, that builds stakeholder trust and ensures you're compliant with the things you have to be compliant with. The other thing, and I know this is where people will say it sounds generic: collaborate with experts. If I'm going to take on a finance project with somebody, I'm going to call you guys just to talk about it, because you have a different perspective than I do. I'm going to talk with experts, because one of the key principles I've lived my life by, informed by the military and all the work I did in oil and gas, is that perspective cannot be given; it has to be earned. You can try to learn something on your own, but if you collaborate with experts, they have that perspective, and you can leverage it to ground your understanding of what you're trying to do, the opportunities in front of you, and the things you really need to think about. Leveraging strategic partnerships with people who know this stuff well is also important, if you have the ability to do it. We're leveraged up right now from a partnership perspective with Nvidia, Microsoft, AWS, and Neo4j, and that covers our CSPs, our compute capacity, and our database model. These partners all look at this from different lenses.
[00:31:55] Guest: Jon Brewton: And so for me, it's that whole one-plus-one-equals-three component of the system. You do need to focus on security, continuous learning, and adaptation once you actually start to deploy these things, but those are the first principles I'd gather around. There are a lot of different things you need to think about, including the efficiency gains you're chasing and what they mean for how you operate your systems and your personnel. You really need to understand that, because if you deploy a system that takes 75% of the workload off the people doing the work today, and a financial analyst is a really good person to think about in this context, what are you doing with those people? How are you repurposing them? How are you leveraging their talents and skills in different ways now that you have excess capacity? Ethical use is obviously something you need to worry about whenever you establish these systems. So think about it from a first-principles perspective. If we're going to build, develop, and deploy a new solution: what problem am I trying to solve? What value am I trying to create? Who do I need to talk to who will give me the best answers? And how does that inform how I develop and deploy the solution?
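[Editor's note: Jon's point about traceable connections — knowing not just what an AI system answers but which linked facts the answer rests on — can be illustrated with a minimal sketch. This is not code from the episode or from Data Squared's product; all entity names and relations below are hypothetical examples, and the "graph" is a plain dictionary rather than a real graph database like Neo4j.]

```python
# Illustrative sketch: a tiny "knowledge graph" over financial records,
# showing how following explicit relations lets an answer carry its full
# provenance (the traceability/explainability Jon describes).
from collections import defaultdict

# Hypothetical facts as (subject) -[relation]-> (object) triples.
edges = [
    ("Invoice_1042", "BILLED_TO", "Acme Corp"),
    ("Acme Corp", "OWNED_BY", "Acme Holdings"),
    ("Acme Holdings", "FLAGGED_FOR", "Credit Review"),
]

# Adjacency map: entity -> list of (relation, neighbor).
graph = defaultdict(list)
for subj, rel, obj in edges:
    graph[subj].append((rel, obj))

def trace(start):
    """Follow relations from a starting entity, recording every hop so the
    final answer is explainable rather than a black-box output."""
    path, node = [], start
    while graph[node]:
        rel, nxt = graph[node][0]  # single outgoing edge per node in this toy
        path.append(f"{node} -[{rel}]-> {nxt}")
        node = nxt
    return node, path

answer, provenance = trace("Invoice_1042")
print(answer)
for hop in provenance:
    print(hop)
```

A production graph-RAG system would store these triples in a graph database and have a language model query them, but the core value is the same: each conclusion arrives with the chain of connections that produced it.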
[00:33:07] Host: Glenn Hopper: That's great.
[00:33:19] Host: Paul Barnhurst: Let's go into the personal questions at this point.
[00:33:31] Host: Paul Barnhurst: So we're going to move on from that. Thank you so much; that's helpful and practical. We got out of the deep end for a minute and let some people wade in the pool. Now we're going to get to our personal section. How this works is each week we use a different tool to come up with a list of 25 questions; this week it was Claude Sonnet, so we used it to generate the questions. I have 25, and I'm going to give you two options: one, you can pick a number between 1 and 25, and that's the question I ask you; or two, I can use a random number generator and see what it comes up with.
[00:34:13] Guest: Jon Brewton: I like random numbers. Let's go.
[00:34:16] Host: Paul Barnhurst: All right, give me one second here and we'll see what it gives us. Number 18. So let's go to number 18: what's the most interesting or unusual job you had before what you're currently doing?
[00:34:40] Guest: Jon Brewton: I'd say being an honor guardsman in the United States military was probably the most interesting and gratifying job I've ever had. I wouldn't say it was particularly comforting on a day-to-day basis; I think I presided over 47 military funerals for brothers in arms. But that was definitely the most interesting and rewarding job I've ever had.
[00:35:10] Host: Paul Barnhurst: Got it. I can see where that could be very rewarding, but also very emotional; presiding over some of those services can obviously be difficult. So thank you for sharing that one. Over to you, Glenn.
[00:35:24] Speaker1: All right.
[00:35:24] Host: Glenn Hopper: Let's go. I'm a I'm going to let, um, Claude pick from these. And, uh, we're just going to completely turn it over to the AI here. So, um.
[00:35:33] Host: Paul Barnhurst: He's more daring than I am. I'm not quite ready to give it all over.
[00:35:38] Host: Glenn Hopper: Oh, this is a great one. I feel like this may actually be a good follow-up to what your most rewarding job was. What's a charity or cause that you're passionate about?
[00:35:56] Guest: Jon Brewton: As a family, we really support the Ronald McDonald House, and have for years. Both of my kids, when they were young, had serious medical issues and were in the hospital. That organization is just amazing. They do a lot to help families, and anything we can do to help them is really meaningful.
[00:36:29] Speaker1: So that's great.
[00:36:31] Guest: Jon Brewton: Yeah. No, you nailed it.
[00:36:33] Host: Paul Barnhurst: Yeah, I can understand being emotional on that one. In grad school we'd go every month and prepare dinner at the Ronald McDonald House, and I loved doing that. At the end we'd say, it was great seeing you, and I hope not to see you next month. We liked it, but we wanted your kids to be home with you. So I can imagine that's emotional.
[00:36:54] Speaker1: For sure.
[00:36:55] Guest: Jon Brewton: But they're an amazing organization. They do incredible work for families. That was a big part of at least our experience, and anything we can do to help them, we're for it.
[00:37:08] Host: Glenn Hopper: That's great. Well, Jon, we really appreciate it. I know we went deep fast on this, but I love having you on, with your insights and everything you were able to provide, and then learning a little more about you, what you're doing at Data Squared, and the personal side. Thank you so much for coming on the show.
[00:37:28] Guest: Jon Brewton: Yeah. Thank you. Really appreciate the opportunity, guys. Have a great day.
Host: Glenn Hopper: ...today's data-driven financial landscape. Jon, welcome to the show.
[00:01:48] Guest: Jon Brewton: Glenn, thank you so much. I appreciate the opportunity. I look forward to getting into a discussion today about how the things we're doing apply to the finance sector and a couple of other interesting places we're playing in at the moment.
[00:01:57] Host: Glenn Hopper: Yeah, that's great. One of the things I love about this show is I get to talk to people who are even dorkier than I am. Is that the right word? Nerdier? Geekier? I don't know. But looking at your background, I think that's probably a moniker you carry with pride. No doubt about it.
[00:02:17] Host: Paul Barnhurst: I guess the three of us do.
[00:02:21] Host: Glenn Hopper: That's a nerd fest. So, Jon, to get started, tell me a little bit about Data Squared: what you're doing, and your journey to founding this company.
[00:02:34] Guest: Jon Brewton: Yeah, thanks. I think the Data Squared story is compelling and interesting, and not just because we're a part of it. In general, we're a service-disabled, veteran-owned small business in the government sector specializing in gen AI product development and deployment. Additionally, we have some interface with the commercial markets, really paying attention to what's happening in the energy space. Our company started in early 2023, but it didn't really kick off in the way we're positioned now, with the things we're working on now from a product development perspective, until August of 2023. So we're really young. We're working in a couple of different places: we have active work going on in the United States government space, and we also have active work in the energy sector, specifically