Generative AI in Auditing for Accountants to Master Compliance and Efficiency with Jason Pikoos

In this episode of the Future Finance Show, hosts Paul Barnhurst and Glenn Hopper explore the transformative potential of AI and its implications for businesses, especially in finance and compliance. With guest Jason Pikoos, the discussion unpacks the opportunities and risks of adopting generative AI. Jason shares his experience developing frameworks that guide companies in responsibly leveraging AI while maintaining agility and compliance.

Jason Pikoos is a seasoned expert in accounting and operational transformation and has over 18 years of experience working with high-growth and tech-driven companies. Currently at The Connor Group, Jason has supported over 40 businesses in IPO preparations and implementing transformative governance systems. He has co-authored a framework addressing the challenges and risks of AI adoption. Jason’s extensive background includes roles at McAfee, KPMG, and collaborations with industry giants like Google and Sony.

In this episode, you will discover:

  • Why AI governance can accelerate adoption instead of slowing it down.

  • The risks and opportunities in using generative AI for finance and accounting.

  • How companies can prepare for audits and compliance in the AI era.

  • The role of AI in revolutionizing business processes through advanced automation.

  • Practical advice for businesses to integrate AI thoughtfully without jeopardizing compliance.

The intersection of AI, governance, and business transformation is both exciting and complex. By embracing thoughtful governance, companies can not only mitigate risks but also unlock AI’s full potential to transform finance and operations. The key lies in understanding the balance between control and innovation, ensuring AI adds measurable value to your organization.

Follow Jason:
LinkedIn - https://www.linkedin.com/in/jason-pikoos/
Connor Group - https://p.connorgp.com/l/
Governance Framework - https://uploads-ssl.webflow.com/

Join hosts Glenn and Paul as they unravel the complexities of AI in finance:

Follow Glenn:
LinkedIn: https://www.linkedin.com/in/gbhopperiii

Follow Paul:
LinkedIn: https://www.linkedin.com/in/thefpandaguy

Follow QFlow.AI:
Website - https://bit.ly/4fYK9vY

Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.

Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.

In Today’s Episode:

[01:31] Introduction and Opening Thoughts
[02:37] Meet Jason Pikoos
[04:09] Why AI Governance Matters
[06:10] Driving Innovation with Governance
[09:30] Challenges of Generative AI in Public Companies
[14:19] Reperformance and Ongoing Validation
[17:59] AI Adoption in Finance
[25:02] The Future of Auditing with AI
[32:17] What's Next in Advanced Automation
[41:46] Fun Insights and Closing Thoughts



Full Show Transcript

[00:01:31] Host 1: Paul Barnhurst: We've been doing Future Finance now for about seven months or so. We're going to try something a little new. So I just want to let you know, before we jump into our interview with our guest, Jason Pikoos, today, that we're not going to have our "Is your job safe, for now?" monologue. We want to try to mix it up and just do an interview. So for today's interview, we're really excited. It's going to be about some compliance, some regulations, and frameworks to help you with AI, because it's something every business is trying to figure out, and there's real risk if you don't do it right. There is risk in using AI and getting a wrong answer. We had Jason come on and share some of his thoughts from the Connor Group, and we're going to continue for the next little while trying an interview format. But we'd love to know if there's something you want us to do. Please reach out and contact Glenn and me. We mean that sincerely. Our goal is to make this the best show in the world for AI and technology for finance professionals. So help us do that by giving us your feedback. I hope you enjoy the show. Thank you.


[00:02:37] Host 1: Paul Barnhurst: Welcome to another episode of the Future Finance Show. This week we're thrilled to be joined by Jason. Jason, welcome to the show.


[00:02:47] Guest: Jason Pikoos: Great to be here. Thanks.


[00:02:48] Host 1: Paul Barnhurst: Yeah, excited to have you. So just a little bit about him: Jason works for the Connor Group. He has 18 years of accounting and operational experience working with high-growth and technology companies. He has helped lead and guide over 40 companies preparing to go public, planning and managing their IPO and the many other things that come with that. He is an expert in helping clients drive business transformation through improved processes, systems, controls, and organizational changes. He recently worked on governance for AI, which we're going to talk about today, and was one of the authors of a framework issued by the Connor Group. He was formerly a director in the project management office at McAfee and a director at KPMG in its advisory practice. He's also served as an auditor, and he's worked with many different clients, from Google to Johnson & Johnson, Sony, and others. He graduated from the University of Cape Town, and today he lives in the Bay Area, so we're thrilled to have Jason here with us.


[00:03:50] Guest: Jason Pikoos: Great. Thank you. Sounds like a really smart person. You should get him on this show.


[00:03:54] Host 1: Paul Barnhurst: We'll see what we can do. That's how I feel when somebody reads my introduction. Like, who are you talking about?


[00:03:59] Host 2: Glenn Hopper: That's reason enough to go on a podcast, right? Just to have somebody say nice things about you. Just.


[00:04:04] Host 1: Paul Barnhurst: That's why. That's why I do so many podcasts. People say nice things about me. Helps my self-esteem. I always start here because, you know, you talk a lot in your framework around AI about governance. I think when most people hear the term governance, they think, I've really got to deal with a bunch of bureaucracy, right? You know, it's often synonymous with compliance, which can often be seen as slowing things down. Like, okay, how can we avoid the auditor? Who's ever thought that?


[00:04:31] Host 2: Glenn Hopper: Don't say in front of the auditors that I do that, you know.


[00:04:35] Host 1: Paul Barnhurst: Oh, sorry. If there's an auditor out there, we love you. Don't be offended. So, getting back to the question: how do you see AI governance impacting the ability of companies to be flexible, to move fast, to adopt things quickly? Like, how do they manage that? They hear the term governance and think, okay, it's going to slow things down.


[00:04:54] Guest: Jason Pikoos: Yeah. It's actually a great question. I think maybe let me start with the contrast of that, which is really interesting. It's what we've actually seen: often what's slowing things down is the lack of governance altogether. You know, AI is a very new technology. There's a lot of news and stories about it and concerns about it. And some companies have done nothing in terms of adopting AI governance thoughtfully. What we've actually seen happen is these broad-brush approaches of, hey, we are doing nothing because we don't understand it. We haven't invested any time in it because we don't know how to control it. We are going to do nothing. So the irony is, sometimes having zero governance whatsoever actually might mean you do nothing whatsoever. On the flip side of that, where do we see governance helping? Yes, there is a component of governance which does require some process and some structure, which in all honesty is a good thing. It's not necessarily about stopping things, but it may force us to at least slow down and consider some things. But where I see it really helping drive rapid adoption: one is making sure that generative AI is aligned with strategy and goals. When you have alignment of strategy and goals, it actually creates organizational alignment behind something you're trying to do, and therefore you typically move faster and might even have the resources to do so more effectively and in a more innovative fashion.


[00:06:09] Guest: Jason Pikoos: The governance frameworks also encourage people to drive and measure value. A lot of people are adopting AI for the sake of adopting AI, just so that we can say we did it and we can give ourselves all these lovely high fives and pats on the back. The governance framework says: you have a goal, you have an objective. Measure the results; make sure that you're getting value. And as you get value from stuff, as you can show value, you can move much more quickly, once again getting the resources, getting the engagement from the team. The other place it helps a lot and can really drive innovation is coordination. When you have every function doing their own thing in an organization, in an entirely uncoordinated fashion, with resources spread around like that, you've got a total lack of coordination. Frankly, you have a technology being adopted that can be used by multiple teams. When you do things in a more coordinated fashion, you're pooling your resources, you're pooling your brainpower. You actually bring those innovation juices together. And oftentimes one function, such as marketing, may have done something that actually could be quite relevant to finance and accounting. So just those few things, such as alignment with strategy, making sure we actually get value from it, and doing these things in a coordinated fashion, can really accelerate adoption rather than get in the way of it.


[00:07:19] Host 2: Glenn Hopper: Yeah, honestly, we have our scripted questions. And just as you were talking, I came up with like 19 different questions I wanted to ask, but we can go on.


[00:07:28] Host 1: Paul Barnhurst: Go with one.


[00:07:29] Host 2: Glenn Hopper: So. Well, I guess before I bring up mine — did you have a follow-up you wanted to do, Paul, before I jump into the next one?


[00:07:35] Host 1: Paul Barnhurst: As a follow-up, I was gonna say two things that really stuck with me when you said that. First, you mentioned adopting AI for the sake of AI. How many of us have seen tools where they've had something out there for ten years, and now it's got AI written all over the label, and you're just like, come on, guys. Like, everything doesn't have to say AI. You know, that's the first one that came to mind. And the second one — you said everybody does their own thing if you don't have that governance at a corporate level. How many of us saw that when SaaS came about? Why do we have 15 versions of, you know, Zoom, a webinar platform, or whatever it might be? Because nobody coordinated, and we've all put it on a credit card and it's $50 a month. And then you start digging into it and — wait, we're spending half a million for basically the same thing when we could get it for 50,000 for the whole company. So I think there are some really good points there: sometimes just having that company-level governance allows you to recognize things that you wouldn't otherwise. All right, Glenn, you've got not 19 questions, but I'll let you go.


[00:08:35] Host 2: Glenn Hopper: Yeah. So I mean, I guess this is interesting to me because in my day job, when I'm not yammering about this stuff, I'm helping companies — mostly private companies, but a couple of public companies — implement or come up with AI usage policies and have a controlled way to implement. And, you know, everybody's talking about generative AI, but even classical AI, machine learning — you know, helping them do that for data and analytics. But I think the barrier to entry for classical AI was that you had to be a coder; you had to write Python and all that. Now, though, with generative AI, anyone who can type on a computer can use it or has access to it. But there's a lot of danger if you don't have a policy and a framework and governance in place. And I know there have been governance policies and controls around IT for years. But I guess the simplest way to ask the question is: why do we need different governance and IT controls for gen AI? And then I'll add one follow-up question to that: what do you advise people — and let's focus on public companies — as they're setting up their gen AI usage policy? What do they need to keep in mind with their controls and the framework and everything?


[00:09:56] Guest: Jason Pikoos: It's a great, great question, you know, and it's something that came up when we developed this governance framework. First and foremost, the idea wasn't to look at this thing as an independent, standalone framework, but as a supplement to existing frameworks. So yes, IT control frameworks have been around for some time. Complex systems are not new; IT has been around for decades and decades. So this should be a supplement. The reason, on the other hand, why we built all this is that as we started learning more and more about AI, it became very apparent that there are additional risks associated with this technology that are just not effectively addressed by current frameworks. The one other thing is, this technology natively and naturally changes and evolves, in some respects of its own accord. Most technology that you buy today, traditional technology, is static. I buy it, I implement it, and it stays exactly the way I built it until I make a deliberate change. And at that point I will test it again. So you have these very clear points. But the nature of these models is that they keep changing, they keep evolving, they keep training them. Sometimes, if you're using a third-party application that's using an AI model, they may change the models. You may not even know what model they're using. One minute it's Anthropic, then it's GPT-4o, then it's o1.


[00:11:06] Guest: Jason Pikoos: Then maybe soon it'll be o3. So you've got this regular evolution, which is something that does not exist in traditional technology. Traditional technology also doesn't, theoretically, make mistakes. Yes, sometimes it produces errors, but those are typically because of one of two things: either you set it up incorrectly or there's some sort of bug in the system. Otherwise it will operate exactly the same way all the time. AI not only makes mistakes, but it's inconsistent. So that's a really important component: now you have a technology, especially in the finance and accounting space, which may produce irrelevant, incorrect, or partially correct responses. And that's not something that we typically deal with today. There are new types of risks that we typically haven't tested. When was the last time you deployed an ERP system and asked yourself, oh, what are the bias and ethical considerations embedded in that system? It's just not something that we've done. So we need to be asking ourselves these questions. And they challenge our minds, because it's not something we think about, especially as finance and accounting people. We think about dollars and cents and analytics and whatever it might be. Ethics and bias, although we probably care about those things, are not something we think about in our technology.


[00:12:14] Guest: Jason Pikoos: It has access to huge amounts of data. Oftentimes, you have no idea what data it's referencing. And that could be a concern: is it making mistakes because of that data? Can I even use that data? A lot of people are concerned that they're using information that doesn't belong to them, or that they're going to find themselves in legal jeopardy because they've just used an AI model. And then the last thing is, what does it do with your data? It's a really, really interesting question. With traditional systems, you put the data in there and it's either in the cloud but dedicated to you, or it's on your on-premise system, but you control it. Here, a lot of data goes to the model, and then what happens to it? Right? And it's not just my data directly, for systems that I use. Where we think about this a lot is: hey, I'm using a consultant. That consultant arrives at the meeting and is using Google Meet. That Google Meet is switching on Gemini, which is listening to my conversation and saving all my information. Where is my data? Just gone, right? Who has this information? So those are some of the new considerations that we just never had to think about before. Sorry — what was the second question there, Glenn?


[00:13:18] Host 2: Glenn Hopper: Well, thinking about the difficulties that — well, that's a whole other question. So let me follow up this way. I'm thinking about you, with your background as an auditor. I love thinking about, you know, whether it's SOC 2 compliance or going in for an audit or whatever — it's the explainability problem. You can't tell an auditor — I'd love to see a business try to explain to the auditor, well, I plugged the data into this magic box and it gave the response here. I mean, that's gotta be a problem, especially with generative AI, which is non-deterministic. Like you said, it's going to have its own set of challenges when one time it'll give you one answer, and then you could ask the exact same question, the exact same prompt, and it gives you a different answer because it's generating the response anew. And that's very hard for finance and accounting people, where numbers are pretty black and white. There's not a lot of gray area between a credit and a debit. So I guess, how are you advising companies to be able to use generative AI and capitalize on it, while being aware and being audit-ready and having this explainability with how they're using it?


[00:14:31] Guest: Jason Pikoos: It's actually a really excellent question, because what's interesting right now is there's zero guidance out there. The Big Four firms have not given guidance on how to put effective controls in place, whether it's SOX compliance or just traditional — hey, I need to do these controls for the purposes of getting through an audit. Nothing has been published by the PCAOB. I expect stuff will get published; maybe we'll start seeing more happening in 2025. But it is a real challenge, right? Because you get down to the question — it's as simple as, how do you know? That's always the one that people struggle with, because it is a true black box. Oftentimes people refer to old technologies as black boxes — a revenue recognition system, say. That's actually not one. The reality is you can follow the configuration; it's difficult and sometimes painful, but it's actually not a black box. With AI, you give it info, something magical occurs, and you get this output that is oftentimes magical. Hopefully it's also correct. So what we are recommending right now — you've probably heard the term human in the loop, all that type of stuff — but the reality is what you probably need to be thinking about, and I think the only meaningful method to say that we can rely on the system, is a form of audit reperformance.


[00:15:41] Guest: Jason Pikoos: Right. So what you would have to do — and it's not as simple as people say, well, I'm just gonna check a few things. For one, you need to understand, especially if you're buying a third-party, call it a built-for-accounting technology that embeds AI into it: it's not 100% AI. Some parts are AI; some parts are traditional technology. You need to understand, first of all, what part of the system are you actually relying on AI for? If you're not relying on AI, it's just workflow — you can check the configuration. Where am I relying on AI? How am I relying on it? Is that ultimately financial? And then you have to say, hey, I need to do a degree of reperformance. I don't think you need to reperform 100%. I think there's a real logical explanation to say, hey, we're going to use some statistical methodology — maybe we check 5-10% of the population. But not only do you need to do reperformance, it also needs to be very well documented. You know, sometimes when I review — hey, Glenn, you did some sort of reconciliation for me. You sent it over. You're a human.


[00:16:43] Guest: Jason Pikoos: I'll probably do some sort of review. And the auditors oftentimes give us some leeway around it because they can, by the way, always go speak to Glenn and ask what you did. With AI, I need to very clearly say: this is my reliance; these are the steps I reperformed to get comfortable that the output is correct. And if I find no errors by testing 5% of the population, I can then infer that it's working correctly. But that reperformance probably also has to occur on a regular basis. It cannot be something that I do only at the beginning of the year or the end of the year, because this technology is evolving regularly and can make mistakes at any point in time. I need to regularly do this form of reperformance — hey, January, February, March, I'm regularly validating. It's effectively a form of ongoing validation. I think that's what companies are going to have to do until such time as we actually have some sort of framework, accepted by the audit firms, against which you can test these systems and say the system works correctly. And I think we're a couple of years away from having a framework like that. You're going to have to do some form of reperformance. Does that help? Hopefully that answers it.
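To make the sampling idea Jason describes concrete, here is a minimal sketch of what a documented reperformance check could look like in code. This is illustrative only, not something from the episode: the record structure, function names, and the fixed seed are assumptions, and the independent recomputation (`reperform`) stands in for whatever deterministic check an accounting team would actually use.

```python
import random

def reperformance_check(records, reperform, ai_output_key="ai_result",
                        sample_rate=0.05, seed=42):
    """Sample a fraction of AI-processed records, independently recompute
    the expected result, and return a documented comparison for the audit file."""
    rng = random.Random(seed)  # fixed seed so the sample selection is reproducible
    k = max(1, round(len(records) * sample_rate))
    sample = rng.sample(records, k)

    results = []
    for rec in sample:
        expected = reperform(rec)      # deterministic, independent recomputation
        actual = rec[ai_output_key]    # what the AI-enabled system produced
        results.append({
            "record_id": rec["id"],
            "expected": expected,
            "actual": actual,
            "match": expected == actual,
        })
    exceptions = [r for r in results if not r["match"]]
    # The returned dict is the documentation: what was tested, against what
    # population, and every exception found.
    return {"tested": k, "population": len(records),
            "exceptions": exceptions, "results": results}
```

Run monthly (January, February, March, as Jason suggests), the returned record gives the "these are the steps I reperformed" evidence an auditor can follow, rather than a verbal assurance that the magic box was checked.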


[00:17:45] Host 2: Glenn Hopper: Yes, yes. And I'm going to be quiet now and I'm going to let Paul go, even though I now have 19 more questions.


[00:17:50] Host 1: Paul Barnhurst: But I'm going to ask a little bit of a follow-up here, and we'll see where we get on our actual script of questions, which is fine. You know, as you talk about reperformance, you mentioned all that additional work that needs to be done. I mean, it's work you have to do for other audit stuff, just in a different way. Is that scaring people in accounting and finance away from using AI? Is that concern about getting the auditors comfortable something that comes up a lot for you? I mean, where are companies at on that side? Because that feels like — especially with public companies, but, you know, any company big enough that it has to go through an audit — a huge risk, right? Nobody wants a material deficiency in their financial statements; those are words no company wants to hear. So, you know, what are you hearing from companies around that?


[00:18:41] Guest: Jason Pikoos: Absolutely. If you're a public company, the internal audit leads are stepping in really hard and putting a hold on it: don't do anything until we know exactly how we're going to get through this. So for public companies, it's absolutely the case; for private companies, less so. I mean, it is a concern. You know, we work with a lot of companies that go through the IPO process, so that is where it's popping up. We had one client recently — they were planning on rolling out some sort of AI in the last month of the year, about to do their first public-company audit. And we're like, you are crazy. You don't want to do that, because it's going to force the auditors to understand how this technology works. So push it out: continue with your implementation, but make your go-live the first day of next year. Make this AI next year's problem, not this year's problem. Right? So it absolutely is a concern, because in all honesty, it's a very complex technology. It's funny — we have some public companies that we know are using AI. My sense is that the reason they haven't implemented all these kinds of structured controls is not that the auditors get comfort another way; it's a lack of understanding. The auditors haven't asked the right questions, and as a result, they're letting things through because they don't actually understand that they're exposing themselves to risk.


[00:19:56] Host 1: Paul Barnhurst: The auditors are trying to understand it as well. None of the Big Four have come out with their guidance on all this, and so it's a little bit wild, wild West if you don't have somebody telling you to pump the brakes.


[00:20:09] Guest: Jason Pikoos: Correct. And that's why I think the internal audit teams are very concerned, because this could be something that results in a material weakness, as you noted — or, frankly, a mistake; I mean, that's even worse. But also, you get to the point where you try to get through a quarterly or annual audit and you suddenly find yourself not able to get through it, because the auditors now say, sorry, we can't rely on any of this; we have to redo a whole bunch of stuff. How do you know? So what we're encouraging people to do is be very careful and very thoughtful. Engage your auditors early. But what we also suggest is: get your internal ducks in order. Make sure you understand where you're relying on AI and what the AI is doing for you, and then come with a proposal: this is what we do to get comfortable. And then validate it. I think if you walk in to the auditors with, hey, we got this new AI system, what should we do — then you're just going to find yourself on a merry-go-round, never getting to a solution. So it's really important to get your ducks in a row before you engage with the auditors, because, to be honest, at this stage they probably don't have good answers. It's important that you come with a proposal of how you will address it.


[00:21:08] Host 1: Paul Barnhurst: I'm going to ask one more follow-up question here, and then I'm going to let you go to your question, Glenn. How long should a company plan on taking to get comfortable with this, given the wild, wild West you mentioned? Some public companies are saying, look, anything related to accounting that's going to be audited — let's just wait on AI. But for companies that say, we're going to push forward, we really want to implement something, we think it's helpful — how long are you telling them it's going to take to, you know, have all that ready for an auditor? What's the, you know, lift here?


[00:21:41] Guest: Jason Pikoos: Yeah. And, you know, I'm all for using AI. I think it's just a matter of being very thoughtful about how you do it, especially as a public company, and especially if it's used in an area where you can expose yourself on public financial reporting.


[00:21:52] Host 1: Paul Barnhurst: Yeah, I agree, there's lots of places you can use it. There's some areas you just have to be a little extra careful. Right.


[00:21:58] Guest: Jason Pikoos: So my suggestion is, putting the implementation point aside, you probably want to run this thing for two or three months and get some degree of comfort. For one, you probably should never deploy a major piece of technology in the last quarter of your fiscal year anyway. Most public companies just don't do that; you're basically asking for a material weakness as you're racing toward the finish. But in general, give yourself two to three months so that you understand it, because I think that's one of those things people don't fully appreciate about AI: it can be a little inconsistent. You haven't really seen all the scenarios until it's run for a window of time. Now, most of the AI technologies that people are rolling out — other than the chatbots — tend to be transactional in nature, so you'll probably see a lot of the variation within two or three months. So give yourself two or three months, and maybe continue with your existing processes alongside. Make sure that you've really got the kinks out. Make sure that you've figured out the exact control methodology so you have that nailed down. And then feel free to flip over. Any quicker than that, at least at this stage, and you're probably exposing yourself significantly.
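The "run it for two or three months alongside your existing process" advice is essentially a parallel run: compute every record both ways and log the disagreements before cutting over. A minimal sketch of that comparison (illustrative only; the record shape, tolerance, and process callables are assumptions, not anything described in the episode):

```python
from dataclasses import dataclass

@dataclass
class Mismatch:
    record_id: str
    legacy: float
    candidate: float

def parallel_run(records, legacy_process, ai_process, tolerance=0.01):
    """Run the existing (legacy) process and the new AI-enabled process
    side by side over the same records, collecting any disagreements
    beyond a small tolerance for review before cut-over."""
    mismatches = []
    for rec in records:
        legacy = legacy_process(rec)
        candidate = ai_process(rec)
        if abs(legacy - candidate) > tolerance:
            mismatches.append(Mismatch(rec["id"], legacy, candidate))
    # Agreement rate over the window is the evidence that the kinks are out.
    agreement = 1 - len(mismatches) / len(records) if records else 1.0
    return agreement, mismatches
```

Accumulated over the parallel-run window, the mismatch log is exactly the kind of documented evidence that supports a go/no-go decision at the start of the next fiscal year rather than mid-Q4.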


[00:23:06] Host 1: Paul Barnhurst: Ever feel like your go-to-market teams and finance speak different languages? This misalignment is a breeding ground for failure, impairing the predictive power of forecasts and delaying decisions that drive efficient growth. It's not for lack of trying, but getting all the data in one place doesn't mean you've gotten everyone on the same page. Meet QFlow.ai, the strategic finance platform purpose-built to solve the toughest part of planning and analysis: B2B revenue. QFlow quickly integrates key data from your go-to-market stack and accounting platform, then handles all the data prep and normalization under the hood. It automatically assembles your go-to-market stats, makes segmented scenario planning a breeze, and closes the planning loop. Create airtight alignment, reduce decision latency, and ensure accountability across teams.


[00:24:14] Host 1: Paul Barnhurst: Yeah, and I'll let Glenn go in a second. I want to tell one story first. You kind of laughed and said, don't implement technology in Q4. I worked for a company where they decided to make some changes to one of the software systems in Q4, and we ended up booking a bunch of entries that shouldn't have been booked, and they didn't get corrected before the end of the year. Well, those entries were 90% of the threshold for having to restate the entire quarter's public earnings.


[00:24:40] Guest: Jason Pikoos: Oh that's great.


[00:24:41] Host 1: Paul Barnhurst: Fortunately, when we combine everything else, I think they ended up being 1% short of that threshold number. But let's just say my boss, it was not fun for him. We'll leave it at that.


[00:24:51] Guest: Jason Pikoos: It is not fun. People get fired for those types of things, unfortunately. And the CFO's head is probably often not the first to roll when there's a restatement.


[00:24:59] Host 1: Paul Barnhurst: Yeah, never a good sign.


[00:25:01] Host 2: Glenn Hopper: No. So I'm really hung up on the auditor thing, and part of it is — I lecture, I write about, and I travel around the country talking about, you know, gen AI in finance and accounting, and early on, for some reason, auditors were the angriest. I'd just start talking about generative AI and they'd be mad at me just for talking about it. And I've had several of them tell me it could never do my job. And I think audit is the most rule-based job I can think of. If you think a sufficiently capable software system couldn't handle the rule-based stuff that you have to memorize — you know, couldn't memorize PCAOB and GAAP and all the accounting rules — I mean, that's a very short-sighted way to think of it. But, you know, auditors are a special breed.


[00:25:51] Host 1: Paul Barnhurst: Be careful what you say, Glenn. We have an auditor here. We're covering. All right, then, go ahead.


[00:25:58] Host 2: Glenn Hopper: So I guess, because you have that audit background but are obviously very knowledgeable about gen AI and all that, I'd love to get your opinion. And I'm thinking of everything from, you know, internal controls testing to maybe risk scoring, any other kind of substantive testing, or inventory in particular — there are probably some great things. Where do you see some areas — and this is maybe a little off topic — where, if it's not there today, it will be in the near future, where the auditor's job could be made easier by using generative AI?


[00:26:35] Guest: Jason Pikoos: We're going to see an enormous transformation and revolution in the audit. And I say revolution, not evolution. I think as soon as the PCAOB gets its head around how AI can work and how auditors can provide the same level of comfort with AI, we'll see rapid adoption. Every one of the Big Four is investing in this space. We are actually aware of technologies that are available and being built to perform audit procedures already. So it's not widely used, but it's being deployed. And some of these technologies are already being acquired by your smaller audit firms, where there's obviously less risk. But documentation: there are some great tools out there where you can just sit down, record yourself on a video, do screenshots and everything, and it'll translate that into a narrative, and eventually we'll have flowcharts or business requirements documents. So documentation is going to be one, and we do a lot of it in audits, right? Process documentation, narratives, policies, whatever it is you need to do. Tests of details, which are very similar to tests of controls, absolutely are going to be another, especially when there's a relatively high volume of them. Sometimes you have entity-level types of controls, which are discussion-based.


[00:27:50] Guest: Jason Pikoos: I'm talking about the transaction ones; those absolutely will be tested. But I think we will also see an evolution in how we test these things. If you have AI that's able to plug into systems and monitor them on a real-time basis, why do audits have to be periodic, whether it's an internal audit or an external audit? I think you will get to a point where things can be monitored in real time. You submit an expense report, it immediately goes, and the AI reads all your receipts, checks and double-checks, then validates that it's appropriate, and then immediately escalates things, but also sends emails out, sends follow-ups. All of those types of things, I think, will be occurring in real time. And then I would expect that maybe the auditors will have systems that effectively plug into those and allow them to monitor in real time. So I think we're going to have an enormous amount of transformation in this space. It is absolutely ripe for AI. The biggest hurdle is going to be how quickly the audit firms can start righting the ship, and how quickly the PCAOB gets on board to provide guidance, because without that, none of the audit firms will stick their necks out.
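The real-time expense check Jason describes can be sketched as a simple policy-screening function. Everything below is hypothetical (the categories, limits, and flag wording are invented for illustration), and the receipt-reading step that a production system would hand off to an OCR or vision model is reduced here to a boolean:

```python
# Hypothetical sketch of real-time expense monitoring: each submission is
# screened the moment it arrives, and only flagged items escalate to a human.
from dataclasses import dataclass

@dataclass
class Expense:
    employee: str
    category: str
    amount: float
    has_receipt: bool  # stand-in for an AI/OCR receipt-validation step

# Illustrative policy limits; a real system would load these from policy data.
POLICY_LIMITS = {"meals": 75.0, "lodging": 300.0, "travel": 1000.0}

def review_expense(exp: Expense) -> list[str]:
    """Return a list of flags; an empty list means auto-approve."""
    flags = []
    if not exp.has_receipt:
        flags.append("missing receipt - request follow-up")
    limit = POLICY_LIMITS.get(exp.category)
    if limit is not None and exp.amount > limit:
        flags.append(f"over {exp.category} limit (${limit:.0f}) - escalate to manager")
    return flags
```

In the continuous-monitoring model described above, this kind of check runs on every transaction as it posts, rather than on a sample pulled months later during a periodic audit.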


[00:28:55] Guest: Jason Pikoos: So this is an exciting area. Any auditor who thinks AI is not going to change their job is dreaming, in all honesty. And I think that's why we've moved forward very quickly with AI, because we're cognizant that this is happening, and it's going to happen very quickly. You either get ahead of it and you are part of this, and frankly, you have the opportunity to even define the direction, or it gets done to you. And I don't want to be on the "get done to me" side of it. I mean, we're even evaluating methodologies to deal with complex technical accounting questions. AI can help with that. Maybe it doesn't do the final review, but it absolutely can help with research, evaluations, even some degree of recommendations on how to account for complex transactions. These are the kinds of amazing things AI can do. If that's not going to change how auditors and accountants work, then, to be honest, I'll be shocked.


[00:29:45] Host 1: Paul Barnhurst: I'm going to have to leave. He just used the word technical accounting.


[00:29:48] Host 2: Glenn Hopper: So on that note, I will say one of the most successful bots that we've built and are using internally in my day job is a technical accounting memo creator. We took our company's, you know, maybe about 75 technical accounting memos that we'd written, on anything you could think of, ASC 606 or whatever, put them in, and said, this is the way we do technical accounting memos. So we did some fine-tuning on that, and we're using retrieval-augmented generation to be able to access those particular memos. And then we kind of set up: this is the intro, this is the problem statement, and so on. And like you said, it has to be human in the loop. But when you're using RAG and fine-tuning, when you're doing it in a closed environment, you can control it a bit and minimize those hallucinations, and you turn the temperature down so the output stays conservative. That is an area of great efficiency, when you can knock out a two-page technical accounting memo that quickly. And our employees can access it either through ChatGPT or through Teams. That Teams interface is great, because you're just talking to it like you would a coworker. And, you know, you go into accounting because you like numbers, and you don't really like to write. So the fact that you've got this generative AI tool that will do the writing for you, and you just check the numbers, makes it a great application and one of the easiest ones we have rolled out so far.
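The retrieval step of a memo bot like the one Glenn describes can be sketched in miniature. This is not his actual implementation: a bag-of-words vector stands in for a real embedding model, and the memo titles are invented for illustration:

```python
# Toy sketch of the retrieval step in a RAG workflow: rank stored memos by
# similarity to the question, then feed the top matches into the prompt.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a simple bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, memos: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(memos, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]

# Invented example memos standing in for a firm's memo library.
memos = [
    "ASC 606 revenue recognition memo: five-step model for contracts",
    "ASC 842 lease accounting memo: right-of-use assets",
    "Stock compensation memo: ASC 718 fair value",
]
context = retrieve("revenue recognition under ASC 606", memos)
```

The retrieved memos would then be placed into the prompt as grounding context, with the generation model's temperature set low, which is the combination Glenn credits with keeping hallucinations down.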


[00:31:20] Host 1: Paul Barnhurst: Yeah, that's a use case I hear a lot: helping to write documents. Because it's great at that. You give it 50 examples of what it needs to write for something, and it's going to come up with something pretty good. You still have to tweak it, like you said, human in the loop. Don't just submit it to be approved; check it. But it's generally going to do a lot better at that than if you ask it a bunch of complex math problems, that's for sure. All right, so changing subjects a little bit. We've talked a lot about governance, about auditors and recovering auditors; we've covered that gamut. What I want to ask you is: as you see all this technical capability coming, AI use and automation, what gets you excited in the morning, where you're like, wow, I can't wait for that?


[00:32:10] Guest: Jason Pikoos: After my first cup of coffee.


[00:32:12] Host 1: Paul Barnhurst: Yes, first cup of coffee first. I'll drink to that.


[00:32:17] Guest: Jason Pikoos: You know, what really excites me is actually a little bit less about AI or generative AI as a standalone capability. I think what it's exposed, and what it allows us to do, is so much more when we take the broader view. When you combine generative AI with the broader stack of technology, that's what we're calling advanced automation. I've heard other people use various other terms, but really it's this concept. And that's why I worry that some people are so laser-focused on AI, AI, AI. The value from AI will not be standalone; it's when you combine it. You have data, data platforms, and data analytical tools. You have integration and automation platforms. You have your core business applications. You bring AI into that combination and pull them all together, and now you're able to do things in ways that you have never been able to before. Because in many cases, the reason we struggled with automation was that there were these human intervention points we had to have, because there was maybe some sort of judgment that needed to occur, or a document, or a manual review. And now you have this AI that can act in those kinds of ways.


[00:33:20] Guest: Jason Pikoos: I'm not suggesting we remove humans from the process altogether, but if somebody needs to review a contract, or compare a contract to an order so that it can proceed, these are the kinds of things that AI can do. But the real magic will be orchestrating these things together. People are talking about agentic flows, right? Using AI agents. Couple them with workflows and you can now automate in ways that were just historically not possible. Imagine putting a chatbot-type capability on top of a very robust data set. We expect there'll be an enormous investment in data in the coming years: data quality, data availability, putting it in a centralized, organized fashion, not just dumping data. Right now people have Snowflake and they put a zillion duplicative data tables in it. But take an organized data warehouse, and imagine a trained chatbot that can sit on top of it and see your entire organization. I don't have to know how to write SQL. I can just ask questions that connect those dots. What was my HR expense? Where is it going? What does my additional expense look like based on my recruiting plan? These are the kinds of questions where today, whether it's in the FP&A or accounting world, the answer is, okay, fine, I'll get back to you.
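The "chatbot over an organized warehouse" idea can be sketched with a toy text-to-SQL layer. In a real system an LLM would translate the natural-language question into SQL; here a single hard-coded pattern stands in for that translation step, and the table and figures are invented:

```python
# Toy sketch: answering a natural-language question by generating SQL
# against a small in-memory "warehouse" (sqlite3 from the standard library).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expenses (dept TEXT, month TEXT, amount REAL)")
conn.executemany("INSERT INTO expenses VALUES (?, ?, ?)", [
    ("HR", "2024-01", 42000.0),
    ("HR", "2024-02", 45500.0),
    ("Eng", "2024-01", 130000.0),
])

def ask(question: str) -> float:
    # A production system would use an LLM for text-to-SQL; one hard-coded
    # pattern stands in for that step in this sketch.
    if "hr expense" in question.lower():
        sql = "SELECT SUM(amount) FROM expenses WHERE dept = 'HR'"
        return conn.execute(sql).fetchone()[0]
    raise ValueError("question not recognized in this toy sketch")

total = ask("What was my HR expense?")  # 42000 + 45500 = 87500.0
```

The point of the sketch is the shape of the interaction Jason describes: the user asks in plain language, the system does the SQL, and the three-week wait for an analyst to run the report disappears.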


[00:34:31] Guest: Jason Pikoos: Give me three weeks. And then you ask a follow-up question, and it'll be three weeks later. So that's where I get super excited: when you really look at this amazing technology stack that we have and what we can do with it together. And I really think this is a key point: people need to push the boundaries of what we think is automatable today. You have to push those boundaries out. So much more can be done; the art of the possible, that curve, just shifted. So we really need to challenge ourselves about what we believe we can do, and we need to focus on the combination of technologies. So many people are looking for that silver bullet, and I just don't think you're going to find that one thing that does everything. Look at your stack, bring AI into that stack, and then challenge yourself on how much more you can do. I mean, I'm very excited about where we can actually go with all of this.


[00:35:23] Host 2: Glenn Hopper: Yeah. And I know, Paul, we are getting a little long, but I want to get Jason's thoughts on this. So, Satya Nadella and Marc Benioff have been talking a lot lately about agents, and everybody's saying 2025 is going to be the year of agents. I was saying 2024; I missed it, I was overly optimistic. I thought we were going to have more progress on agents last year, but it didn't happen. But I wonder. Satya Nadella talked a couple of weeks ago about how SaaS as we know it is dead, because all these platforms, all these tools, are just a database with a skin over the top that lets you interact with it. But the interaction can be, where we are with technology now, it could be typing. Benioff loves the idea of doing everything through Slack, kind of like my Microsoft Teams example. It can be typed, it can be spoken, it can be an image.


[00:36:20] Host 2: Glenn Hopper: When you can interact with all this data, with generative AI, through these agents, it's very exciting for the user to think of where it's going. And I'm going to put you on the spot here. To me, as an auditor, if people are just talking and interacting with data through something that is not like an ERP system where I can pull reports, and I'm getting data in different ways, that suddenly becomes really scary, because it really does feel like black-box magic. My question here is probably as vague as that future state that Benioff and Nadella are both talking about, but I don't have any idea how the PCAOB approaches it when all these interactions are no longer "I run this report, here are the rules of the report," and instead it's "I used my natural language, I asked a question, and the software system gave me this answer." I mean, do you have any thoughts around that?


[00:37:22] Guest: Jason Pikoos: Yeah, it's a great question, and I saw that article as well. I don't know how far into the future they are looking, but I think we are some way away from getting rid of SaaS as we know it. I mean, even nowadays, you know, we've collaborated with several companies on building AI, and you find that AI, even in its current form, still has many, many limitations and weaknesses. It is not some sort of magic that you can throw anything at to get the right outputs, to get the right formats. You still need that user interface layer. And the reality is, getting users trained enough to purely chat with an AI bot to do everything is hard. For decades, we've been trained to follow a sequence that's built within the application. Just saying "you can chat and ask anything," you know what happens? I can tell you from our experience: we rolled out a chatbot, and only 76% used it, and of that 76%, 24% of those people use it less than once or twice a month. So to say that AI is the chat interface for everything, I think we're still some time away. We need that layer of software. We need people to be guided by technology. A gentleman, Wasserman I think, a professor at MIT, says technology changes quickly, organizations change slowly.


[00:38:35] Guest: Jason Pikoos: And that's super true. So, yes, will we maybe get to that point? I do think we're quite far away from it, not necessarily because of the technology, but because of the users and the people. There's an enormous amount of investment needed, and the AI tools are not that powerful yet. I think we're some time away from that. And you're right, there are just so many legal and regulatory and tax and audit questions that we have to think through before we can just say, okay, now everything can be done in a chatbot, especially given that the tools are not really anywhere near ready to do that. But on the other hand, I do think it's opening the door. I've actually spoken just recently with two companies that are rolling out brand-new ERPs. This is amazing, right? And these are not small bets; one of them got over $100 million in cash injections. So these are serious players, and there's a lot of money going behind them. So I think what we will see, on the other hand, is the emergence of a lot of new technologies and a revolution, because we are finding that some of the existing tech struggles to embed effective AI. So it's a great opportunity for new companies to come up. We'll see.


[00:39:43] Host 1: Paul Barnhurst: A lot of new ERPs cropping up over the last few years. Yeah, I have a lot of the CEOs of some of these different tools reaching out to me, whether it's Campfire or Relic or Puzzle. I know there are others out there, but those are a few that come to mind.


[00:39:58] Guest: Jason Pikoos: Which is super interesting. So I think it's an exciting space, but I think we're a little bit away from the death of SaaS. Who knows, maybe in 10 or 15 years that might be true; in the next five years, I'd be quite surprised. I also think that, you know, we'll see agent usage in 2025, but man, just judging by how slowly things moved in 2024, my gut feeling is it's going to be more of a year of learning about agents rather than adopting agents, because once again, there's a lot of embedded process and a lot of history there. You don't just say, oh, I've got a great technology, and change everything. I think it's going to take a little bit of time.


[00:40:31] Host 1: Paul Barnhurst: So the slowness is dampening our excitement, Glenn. Should we? Yeah, I'm all for it. I would love for.


[00:40:37] Guest: Jason Pikoos: Us to move quicker. I keep, like, talking to a lot of companies.


[00:40:39] Host 1: Paul Barnhurst: Let's go. Let's go. Yeah, yeah. Totally kidding.


[00:40:43] Host 2: Glenn Hopper: You know, seriously, though, there are the realists out there, and then there's Sam Altman and Dario Amodei, the CEO of Anthropic. You know, Sam Altman's blog post where he talked about everything from we might be living in a simulation to we know how we're going to get to AGI. Hearing those two hype it up, they're the biggest hype men. I mean, it's their job. They have to raise tens of billions of dollars to fund all the training of these models. But to hear them, it's like, oh, we're New World Order in six months. I think the truth is probably a lot closer to your number than what they're saying. But I love that they're out there doing it. And, so they've.


[00:41:26] Host 1: Paul Barnhurst: Taken a page out of Elon Musk's book. Cybertruck will be ready in six months. We'll all be driving self-driving cars by 2017, right? Or whatever it was. Visionaries often think it's going to be a lot quicker than reality, which we need, because they push to get there. It's just rarely as quick as they think. All right, well, we've gone a little over time, so we're going to do this relatively quick. Here's what we did: we took your profile from LinkedIn, fed it into ChatGPT, and asked it to come up with some fun and interesting questions. I'll ask the first one. Here's how it works for me; Glenn does it a little differently. You've got two options. You can pick a number between 1 and 25, and you get that question. Or the random number generator can pick a number between 1 and 25, and you get that question.


[00:42:16] Guest: Jason Pikoos: Let's go full random. Randomize them.


[00:42:18] Host 1: Paul Barnhurst: All right. No human in the loop on this one, then. You got number 22. Let's see what 22 is. Wow, this is a pretty good one. What's a particular personal achievement you're proud of?


[00:42:35] Guest: Jason Pikoos: Oh, a particular personal achievement I'm proud of. You know, this is going back a little bit. I joined a group back in 2012. The firm was about 70 people, very focused on the technical accounting and IPO space, and I was hired to build out the finance transformation arm of the business. Over the next decade, we built out a team; we ended up being like 120 people, and we achieved significant growth. That, to be honest, was probably my number one and most exciting achievement: to have built such a powerful service offering and a great team. And obviously it set the stage for continued growth. We've done some internal reorganization over the last few years, which any organization does when it gets to a certain size. But just having the opportunity to build something up like that, to achieve such growth, to frankly work with some of the most amazing people, is probably what I'm most proud of. By the same token, I have to be modest about what I contributed to that. The reality is, the success of those endeavors was the team we had. I just had an amazing team to work with and collaborate with. But that's probably the number one thing I would put at the top of my list.


[00:43:40] Host 1: Paul Barnhurst: All right. Great. All right, Glenn, your question.


[00:43:43] Host 2: Glenn Hopper: Yeah. So my approach is: Paul uses the random number generator, but we've already got the questions in ChatGPT, so I just tell it to tell me which one to ask. There's no human in the loop at all on this. And actually, what I should do in the future is use, whatever they call it, the voice mode where you can interact with it directly on your phone. I'll just have ChatGPT ask the question, and I can sit back and listen. But let's see what ChatGPT comes up with here. Okay, let's do a good one. Depending on how exciting your life outside of work is, let's see: what is a hobby or interest that you have that might surprise people?


[00:44:18] Guest: Jason Pikoos: Oh, good one. I think of myself as an amateur mixologist. I love a good cocktail. I'm quite a cocktail snob, to be totally honest. So I think of myself as a decent amateur mixologist. I like to experiment with various new cocktail mixes, and I typically do that over the weekends; I reserve my drinking for the weekends. That is probably something that people don't know much about me, and I really enjoy it. I like cooking as well, but this is a really fun way to play around with flavors and tastes and try new things. So that's probably something.


[00:44:53] Host 2: Glenn Hopper: Super exciting for me. I was on the seven-year plan for my undergrad degree, and during those seven years I worked as a bartender, so I always fancied myself something of a mixologist. But the funny thing is, I don't make cocktails for myself. Very rarely. I make them for my wife and for when people come over. Right now, I'm just drinking straight bourbon, maybe on a big ice cube. I love making cocktails, and I love serving them, but for me, it's just bourbon. But yeah, that's a whole other podcast episode. Yeah, it's not about.


[00:45:24] Guest: Jason Pikoos: Just throwing the stuff together. It's about adding a bit of smoke, a little bit of a peel from an orange, a cherry. I mean, all of those things make a difference.


[00:45:32] Host 1: Paul Barnhurst: Well, fun, I love it. We'll have to come visit sometime, and you can do some mixing. All right. Well, thank you for joining us, Jason. We really enjoyed chatting with you today. It's been a lot of fun, and we look forward to our audience getting to hear what you had to say.


[00:45:46] Guest: Jason Pikoos: Fantastic. Well, thank you guys for having me. And I look forward to watching this.


[00:45:50] Host 1: Paul Barnhurst: Thanks for listening to the Future Finance Show and thanks to our sponsor, qflow.ai. If you enjoyed this episode, please leave a rating and review on your podcast platform of choice and may your robot overlords be with you.
