Use AI to Streamline Tasks and Learn New Skills with Adam Shilton

In this episode of Future Finance, hosts Paul Barnhurst and Glenn Hopper sit down with Adam Shilton, an expert in digital transformation, automation, and AI-powered workflows. The discussion focuses on how AI is changing the way we work, the risks of synthetic data, and how finance professionals can harness technology to become more efficient. Adam also shares his personal journey from corporate FP&A to running his own business, along with an overview of his recent TEDx talk.

Adam Shilton is the founder of Shilton Digital, a TEDx speaker, and a digital systems architect specializing in automation, AI tools, and no-code solutions. With a background in finance and technology, he helps leaders streamline workflows, improve efficiency, and replace manual work with meaningful work. He is also the Head of Partnerships at GrowCFO, supporting SME finance leaders, and a course facilitator for AI and automation.

In this episode, you will discover:

  • The role of AI in finance and why understanding data science is crucial.

  • The risks of synthetic AI training data and its potential long-term impact.

  • How Adam transitioned from corporate finance to running his own digital business.

  • The best ways to integrate AI into workflows without losing human creativity.

  • Why learning foundational skills (like Python or Excel) still matters in the AI era.


In this conversation, Adam Shilton highlighted the importance of using AI as a supporting mechanism rather than a replacement for human expertise. From the risks of synthetic data to the value of foundational skills like Python and Excel, the discussion underscores the need for adaptability in the digital age.

Follow Adam:
LinkedIn - https://www.linkedin.com/in/adamshiltontech/
Website - https://www.adamshilton.com/

Join hosts Glenn and Paul as they unravel the complexities of AI in finance:

Follow Glenn:
LinkedIn: https://www.linkedin.com/in/gbhopperiii

Follow Paul:
LinkedIn: https://www.linkedin.com/in/thefpandaguy

Follow QFlow.AI:
Website - https://bit.ly/4fYK9vY

Future Finance is sponsored by QFlow.ai, the strategic finance platform solving the toughest part of planning and analysis: B2B revenue. Align sales, marketing, and finance, speed up decision-making, and lock in accountability with QFlow.ai.

Stay tuned for a deeper understanding of how AI is shaping the future of finance and what it means for businesses and individuals alike.

In Today’s Episode:

[01:46] - Introduction of guest Adam Shilton
[02:48] - The Rise of AI in Podcasting
[06:11] - The Problem with Synthetic Data
[13:44] - AI in Financial Analysis
[17:20] - AI as an Augmenting Tool
[25:01] - AI for Learning & Skill Development
[29:04] - Should Finance Pros Still Learn Excel & Python?
[38:19] - Adam’s TEDx Talk
[41:17] - From Corporate Finance to Entrepreneurship
[44:23] - If Your TED Talk Had a Theme Song?


Full Show Transcript

[00:01:46] Host 2: Glenn Hopper: Welcome to Future Finance. Today we're thrilled to introduce our guest, Adam Shilton. Adam's a trailblazer in digital transformation and automation. He's the founder of Shilton Digital. He's a TEDx speaker and a digital systems architect who helps leaders and entrepreneurs replace manual work with meaningful work through proven digital systems. With a background in finance, tech, no-code automation, and AI tools, Adam has built a six-figure business as a solo operator, improved team output with connected systems, and grown a following of over 25,000. He's also Head of Partnerships at GrowCFO, supporting a vibrant community of SME finance leaders, and a course facilitator for AI and automation. Join us as we dive into Adam's journey, insights, and practical strategies for leveraging technology to achieve more with less. Adam, welcome to the show.


[00:02:40] Host 1: Paul Barnhurst: Thanks for joining us on Future Finance, Adam. We're thrilled to have you. We're going to go ahead and just jump into the questions we have today. The first one I was going to ask is, what happens when you have three podcasters in a room? But ChatGPT's answer sucked, and I couldn't find any good jokes online, so we'll move on to a real question. If you know of an AI that can actually tell a joke, let me know, because we'll try it.


[00:03:03] Host 2: Glenn Hopper: Yeah, because God knows, since generative AI came along, we've all just given up thinking if I, if ChatGPT can't do it, I'm not going to address it.


[00:03:10] Host 1: Paul Barnhurst: Yeah. Yeah. Pretty much. Right. Our AI now thinks for us.


[00:03:15] Guest: Adam Shilton: That's the future, right? We're going to use AI for all of work, and then humans are just going to be stood around telling jokes, because it'll be the only thing that we can do.


[00:03:22] Host 1: Paul Barnhurst: Well, I know a guy that.


[00:03:24] Host 2: Glenn Hopper: Some of us.


[00:03:24] Host 1: Paul Barnhurst: Because we're all podcasters, we'll talk podcasting for a minute. I hope our audience humors us. I know a guy who started a podcast. He's done about six episodes, but he spent all day having his voice cloned, and now he just writes all the text and feeds it into this tool that uses his voice clone. It also comes up with the guest and does the whole podcast, and he's releasing it on different subjects. He's done about 6 or 7 episodes that way, where there's no actual guest on the other side and he's not actually the one talking. The whole show is AI. So I'm going to listen to one of his episodes. I talked to him yesterday about it. He's actually in your neck of the woods, over in the UK.


[00:04:01] Guest: Adam Shilton: Have you heard of Rob Lennon?


[00:04:03] Host 1: Paul Barnhurst: I have not.


[00:04:04] Guest: Adam Shilton: Check out Rob Lennon. He created an AI podcast host. I can't remember what he called it, but yeah, he did it in reverse. So he created somebody to talk to as opposed to creating something to talk for him. That's quite interesting. Have you tried Google's notebook yet?


[00:04:18] Host 1: Paul Barnhurst: Yes, I released an episode using it. Amazing. I know they've updated it now and you have more features, but I haven't tried it again recently. I need to play with it again.


[00:04:27] Host 2: Glenn Hopper: I've seen a lot of people, and this is so lazy, releasing NotebookLM podcasts on YouTube without saying NotebookLM did it, without even acknowledging it. They just have those two podcast hosts go through and do their thing and present it, I guess, as something to click on to drive views or whatever. But I mean, it's pretty amazing. Honestly, NotebookLM was maybe the most impressive thing I saw. I know we made incremental gains in the LLMs and all that, but NotebookLM was the single most impressive AI tool to come out last year.


[00:04:59] Guest: Adam Shilton: It's incredible, I did, I put it to the test and we're digressing a bit here.


[00:05:03] Host 1: Paul Barnhurst: That's fine. This is about AI and tech. This is what we want.


[00:05:06] Guest: Adam Shilton: There we go. So I was experimenting, and I was going through a bit of a project to compare some of the mid-market ERP systems. So I had SAP Business One, Sage, Dynamics, and so on and so forth. And I basically fed it my own knowledge plus a load of the promotional material from the various vendors. And I got NotebookLM on it and I said, look, have a discussion and talk through the best-suited companies and best-suited industries for these tools, just to see how accurate it was compared to my knowledge, or lack of knowledge, whichever way you want to put it. And I was very impressed. You can tell, like, there were some awkward moments where it was trying to come across as a conversation, but it just didn't really work. I found also that the two AIs it created talked to each other really quickly. I don't know whether you noticed that, but there was literally no breath between the talking.


[00:06:05] Host 1: Paul Barnhurst: Wait, you mean humans breathe?


[00:06:07] Guest: Adam Shilton: Yeah, and I thought it did a really good job. But herein lies the issue. It kind of tried to guess at some of the stuff that I'd asked. So I said, try and add some perspective, and I don't think I said don't make any assumptions or anything like that, but some of the stuff that it said was 100% wrong. So to your point, Glenn, about people just setting these AIs loose and creating podcasts, it's a risk, right? And I reeled a little bit because, I don't know whether you saw, but Meta, you know, the Facebook and WhatsApp guys, mentioned, I think, that they're going to release the ability to create an AI avatar profile. Now, that's not you. That's not an avatar of you. That's a standalone, faceless AI that you can use to create an influencer profile or whatever. So yeah, I don't really know what to think yet, but it's pretty scary to know that there's now a ton of content being produced that's unvalidated, could be wrong, and a lot of people are going to get sucked into it and take it as gospel. Right. So there we go.


[00:07:18] Host 1: Paul Barnhurst: Well, you guys have probably seen the list. I know we talked about this in an earlier episode. I posted the list of 16 of the worst things that Google's AI had said, you know, told people to do, like add glue to your pizza sauce if your cheese is not sticking, put rocks in your food, just some horrible stuff. One of them even asked about human sacrifices, and it basically came back and listed the pros and cons. There should be no pros for human sacrifice. You should have built that into the ethics of the tool. So it is a crazy world. I mean, the other thing, and I'd love to get your thoughts on this, is that we're starting to see AI ingesting its own content, right? A lot of the AI is getting fed its own content. Incestuous, in a sense. How dangerous is that? You think of all this content coming. Are we going to start to have groupthink, some of those things that we talk about? I'm just curious on your take on that, Adam, your thoughts and your concerns there, because you mentioned a little bit of, you know, wrong information and all this content.


[00:08:27] Guest: Adam Shilton: So I'd be really interested to get Glenn's thoughts on this as well, because it's something that I have thought about. So OpenAI released o1 last year, I think, back end of last year, wasn't it? Or midway through; I get stuck on the timelines because it moves too fast. But o1 was the thinking model, right? You know, it would pause and say, oh, let me have a think about this, and then tell you what it was thinking. And then I heard, and again, I can't back up the source for this, so I'm no better than an AI, right? But I saw somewhere that OpenAI were investing a huge amount of resource in training their newer models with synthetic data. So that's where they couldn't get enough data from public domains or wherever to train the model, so they were getting o1 to create synthetic data to train the next generation of models. So my perspective on that is: A, who's validating the synthetic data and proving that it is useful and relevant? And I guess, you know, there's plenty of intelligent people at these companies, right? I'm sure they've got that sussed. But B then relates to your point, Paul, about whether there is going to be any sort of validation on good versus bad data. And you take that further, because all of the models are going to start ingesting more and more of the poorly produced AI data that's online. Yeah, and I don't think anybody's got it figured out, which is, again, why I'm keen to get Glenn's perspective on that, because data is fundamentally integral to the way these models work. And it always goes back to that phrase that I'm sure many of your podcast guests have said, which is rubbish data in, rubbish data out. So yeah, it's an interesting one.


[00:10:14] Host 2: Glenn Hopper: The funny thing is, okay, so we have three podcasters on right now. We have planned questions. I feel like we may not get to any of them, but I think that's okay. Paul and I just did a year-end episode where we just yammered on for, like, Lex Fridman podcast lengths, hours and hours.


[00:10:28] Host 1: Paul Barnhurst: We've now got three podcasters talking AI, and we're screwed.


[00:10:34] Host 2: Glenn Hopper: Yeah.


[00:10:35] Host 1: Paul Barnhurst: Time's gone out the window. I hope you don't have anything planned for today, Adam.


[00:10:40] Guest: Adam Shilton: No, no, I mean, I've got to eat at some point, but, hey, I can skip that.


[00:10:45] Host 1: Paul Barnhurst: Throw it by the mic. I mean, bring it into the room. Glenn and I will keep it going while you take care of that.


[00:10:50] Host 2: Glenn Hopper: Yeah. So, my thoughts on synthetic data and training models. First off, digital assets are pretty amazing. Think of an image: you create one image, it's just ones and zeros, and you can use that image over and over. It's not like if you Xerox copy something and make another copy of a copy of a copy. I'm thinking about, what was that old movie with Michael Keaton, Multiplicity, where he made a clone, and the clone made a clone of itself, and all that. But with digital anything, it's just ones and zeros, and you can have as many copies of it as you want. So that kind of changed the nature of the value of digital items, because anybody can have it; there's no scarcity to it. And we've digitized so much now that we can train these models on, well, if you think about what these models are trained on, it is, and I don't know what I could be overlooking here, the sum of human knowledge. So anything that has been put down in words and then digitized, which, you know, we've had the internet now since '95, plus whatever universities were doing before that. So you've got to think, well, I don't know what's out there, things that maybe haven't been translated yet or whatever, but there's what's out there in human knowledge that hasn't been digitized.


[00:12:07] Host 2: Glenn Hopper: And so to train these models, once they started scaling up, they realized how much better the models got. If you think back to the early versions, like BERT that Google was using or some of the earlier ChatGPT models, they were not very good, but they found amazing improvements when they scaled up. But now, you know, the compute is a whole other thing; it's why they're having to raise all this money. But there's only so much data out there. So if you maximize what you're doing with the data, that's why we've gone from ChatGPT 3.5 to 4 to 4o, and we didn't go to five; we went to a whole new one, o1, and then o3, I guess, that they're testing now. But, you know, I'm an optimist with AI, but I do tend to agree with Yann LeCun here, that the LLMs are not going to be AI on their own. We're already at the point where there's no more data that we can use to train these. I mean, over a trillion parameters reading basically the entire internet; what more can you do? Oh, you can add synthetic data. Well, I started off talking about the digital thing, that you can repeat things and use them over and over, but you're not going to get better. I guess it'd be bootstrapping, kind of, if you're reusing data, but then you're picking the mix, and you're injecting bias into the model by what data you're selecting, or if you're using synthetic data.


[00:13:44] Host 2: Glenn Hopper: I teach a lot of courses on using generative AI, and usually I'll use public company data. But if I'm trying to do something that's not a public financial statement, like management reports or internal financial statements, I don't want to use real company data. So I'll have created a lot of synthetic data myself. You can go through and say, here's my chart of accounts, I want to create an income statement, it needs to follow roughly this trend or whatever. And it'll throw out the revenue numbers and the COGS numbers and then various expenses. But you have to engineer it so much, telling it, okay, the expenses need to stay in this range, we're spending whatever on R&D, and all that. And if you then try to use that synthetic data, and I've had this come up in a live course where I created three years of an income statement and was going to do forecasting with it, well, first off, on the first pass through, the COGS didn't match the revenue at all. So it was like, this is not a realistic financial statement. I mean, this ratio is off.


[00:14:48] Host 2: Glenn Hopper: So then I would tell it, okay, the COGS need to be in line, but then it would lock in: okay, COGS are always 40% of revenue. So you're trying to forecast, and the data is just not there; it's not smart enough yet. And this is numerical data. When you're creating words, you know, that could be wrong, and you're training it on it. It's just fraught with problems. You're either going to inject bias or you're going to inject misinformation or whatever. I just don't know how the synthetic data would make these models better. And the tendency of all these models is that their output already is so watered down and cliche. That's why I would love it if I could just use ChatGPT to write my next book, but the content is so anodyne and just sterile, because by nature it's a cliché: it's taking what people write all the time, and it's giving you the most statistically probable next word, which means the most cliche, unoriginal next word. So how are you going to make the model better by filtering it through a copy of a copy of a copy of a fake text? It's going to take any originality or uniqueness out of it. And I just don't see that being the path forward for making these models better. Good Lord, that was long.
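The synthetic-data workflow Glenn describes, generating an income statement and then fighting the model when it locks COGS to a fixed 40% of revenue, can be sketched in plain Python. This is a hypothetical illustration, not anything Glenn actually used in his courses; all account names, ratios, and ranges are made up:

```python
import random

def synthetic_income_statement(months=36, seed=42):
    """Build a toy monthly income statement; every figure here is made up."""
    random.seed(seed)
    rows = []
    revenue = 100_000.0
    for m in range(months):
        # Let revenue drift month to month instead of following a fixed trend.
        revenue *= 1 + random.uniform(-0.02, 0.05)
        # Vary the COGS ratio so it doesn't lock in at exactly 40% of revenue.
        cogs = revenue * random.uniform(0.35, 0.45)
        # Keep operating expenses in a plausible band, as Glenn describes.
        opex = random.uniform(30_000, 40_000)
        rows.append({
            "month": m + 1,
            "revenue": round(revenue, 2),
            "cogs": round(cogs, 2),
            "gross_profit": round(revenue - cogs, 2),
            "opex": round(opex, 2),
        })
    return rows

statement = synthetic_income_statement()
ratios = [row["cogs"] / row["revenue"] for row in statement]
print(f"COGS/revenue ranges from {min(ratios):.3f} to {max(ratios):.3f}")
```

Varying the ratio each month avoids the "COGS is always exactly 40%" artifact Glenn ran into, but the series is still synthetic: any forecast trained on it can only recover whatever structure the generator baked in, which is exactly his point.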


[00:16:03] Host 1: Paul Barnhurst: That took a few minutes. But what I took away from that is: don't feed humans sterile stuff, don't feed AI sterile stuff. Let's keep it creative and help it actually learn. I get what you're saying about the synthetic bias. If the models can't give great answers now, are we sure feeding them synthetic data will help? I mean, yes, they can give some good answers, but they make plenty of mistakes, so what's the risk? It sounds like, going around the horn, we all have some concerns with it being trained on synthetic data.


[00:16:37] Guest: Adam Shilton: I think there's a gap. So for me, in my day to day usage. My problem isn't the data. You know, my problem isn't necessarily the accuracy of the response. It's how it understands what I want to achieve. So maybe that is the premise behind synthetic data. That, you know, if it is trained with more stuff that doesn't exist, maybe it's going to understand us better, be able to infer more context straight away. But for me, it's the repeat process of trying to get the results that I want. That's the issue, not the data that it's throwing out. It's throwing loads of data at me. It's the quality of the output that's lacking. So I mean, maybe somebody.


[00:17:20] Host 1: Paul Barnhurst: Could use that data to accomplish something. Is it going to make you more efficient, more effective? Is it going to help you do what you need to do, or is it just throwing a bunch of stuff at you that you have to clean up?


[00:17:30] Guest: Adam Shilton: Yeah, I mean, to use Glenn's example of the book, right? I produce a lot of content. I write newsletters and that sort of stuff. And I've tried giving it endless examples of my previous content to try and emulate something in my voice, and it's just missing something. And sometimes I find it's just faster for me to do it myself than it is to fight with the AI in trying to produce something that's similar, right? But yeah, for me it's that understanding the first time, it's getting me. So I think that's the real challenge: how do you properly create an AI assistant that knows you, whereby you can have a conversation like you're having with a human, without feeling like you're always having to trick it or develop a more intelligent prompt or whatever it happens to be? So yeah, for me it's less of a data issue. It's more of a, how does it understand me better? That's my...


[00:18:26] Host 1: Paul Barnhurst: Context. It's the reasoning. It's the stuff that makes us human. If we were all Data, to use a Star Trek character reference, it wouldn't be a problem, right? But it struggles because it's just statistical probabilities, and it's really hard to put uniqueness into that. Now they're finding ways to augment it, and there's things they're doing, you know, more rules or whatever, to try to adjust for that. But I think we're a ways away from it truly getting all that nuance. It can do a good job, and it often can save us time. But like you said, does it really capture your voice? Glenn, I'm sure you used AI some with your book, but I'm going to guess you definitely didn't let it write paragraph after paragraph without a lot of editing.


[00:19:13] Host 2: Glenn Hopper: Yeah. You know what it was really good at? Actually, if we had the Fireflies breakdown of this call, it would say Glenn spoke for 16 minutes, Paul spoke for a minute and a half, Adam spoke for 30 seconds, and he's the guest. And it's like, no. So on my book, I will say "rewrite for clarity" was actually a really good prompt that I used, and I would go back and forth between Claude and ChatGPT. Claude used to be a better writer, Claude Opus 3.5; I think ChatGPT is about caught up with it there. What they're both good at is outlining and consolidating information. Paul knows I have a tendency to ramble, so if I was rambling in my writing, I could say, I've gone off the deep end here, reel this in, make this concise or whatever, and I could take edits that way. But if you tell it to write anything... like, a lot of my content is similar, and I think, oh, I'd like to publish this here, but I'd also like to publish it there. So even when I try to get it to rewrite something, it just doesn't sound right. But if I can say, hey, if you were going to write an article about this, how would you do it, it gives me an outline I can work from. It's not getting me 80% there, but if it saves me 25 to 30% of the time writing an article, that's still a huge efficiency gain.


[00:20:36] Host 1: Paul Barnhurst: Yeah, I hear you. I use it to summarize a lot of transcripts for LinkedIn posts, and I'll read a lot of it and go, hey, nobody's going to believe that's my voice; I've got to tweak this one a little bit. There have been a few times where I've released it at probably 99%, but most of the time I'm rewriting at least half of it. Ever feel like your go-to-market teams and finance speak different languages? This misalignment is a breeding ground for failure, impairing the predictive power of forecasts and delaying decisions that drive efficient growth. It's not for lack of trying, but getting all the data in one place doesn't mean you've gotten everyone on the same page. Meet QFlow.ai, the strategic finance platform purpose-built to solve the toughest part of planning and analysis: B2B revenue. QFlow quickly integrates key data from your go-to-market stack and accounting platform, then handles all the data prep and normalization under the hood. It automatically assembles your go-to-market stats, makes segmented scenario planning a breeze, and closes the planning loop. Create airtight alignment, improve decision latency, and ensure accountability across teams.


[00:22:01] Guest: Adam Shilton: And that's what I find. So a recent trick that I've been using, and I don't know why I didn't do this before, is to read anything aloud before I publish it. I went through some old posts that I did that were AI-assisted, shall we say, where there were a couple of gaps filled, or a portion that was added on because I needed an extra point in my listicle post or whatever it happened to be, you know what I mean? I read it through, and I was like, that doesn't sound like me, you know? So I think a good validation is to read aloud, because if you stumble over the words when you speak them, that says a lot. And that's another trick that I've found helpful: if I'm struggling to have a decent conversation with the AI, I might switch to voice, because you find that when you talk, you speak more clearly than you type. Yeah. And coming back to Glenn's point on rewriting for clarity, if you can be clear in your instructions, that's half the battle. So I think the voice thing is relevant. But coming back to that, you know, we were talking about humor and AI's inability to write jokes and be witty. I was having a look in the OpenAI Playground. I don't know whether you guys have ever been in there. It's OpenAI's, like, developer behind-the-scenes type thing where you can test your prompts and that sort of stuff. You can go in and test standard text prompts, but they've now got an area there for voice, so you can trial training your own voice bot, which is pretty cool.


[00:23:28] Guest: Adam Shilton: And when I went in there, I don't know whether it was deliberate or not, but it had, like, the default prompt for a voice chatbot, and it said, in your responses, try and be witty. So there's that. But yeah, I think clarity is key there. And coming back to Glenn's time savings, in terms of the 20 or 30% time saved, I think it's great, but what that stresses is that it's a supporting mechanism rather than an entity in its own right. And that's what I find: when you switch the mindset to say, right, well, instead of getting this thing to do something independently, I'm going to use it to improve or help instead, it becomes a part of you instead of a separate entity. I think that's a good way of thinking about it. So when I do training, I tend to split it into three categories: doing, thinking, and learning. So, one: how can AI help me do stuff, improve speed of workflow, that sort of thing? Two: how can AI help me think? How can it encourage me to see a different perspective? Is there a different mental model that I can use for this that is just a blind spot in my thinking that I don't know about? And then number three, how can it help with learning? And I'd say that's really underused, and I'll use the example of Python. Right, so last year, and I've done posts about this, I'd never run any Python code in my life. I'd never run any VBA code. Shock, horror. Sorry guys, I just hadn't done it, right?


[00:24:53] Host 1: Paul Barnhurst: We can only have VBA people on the show.


[00:24:57] Host 2: Glenn Hopper: Click, click.


[00:24:59] Guest: Adam Shilton: No, it's fine. But if you just ask a crappy question, like, can I do this with Python? Oh, yes, of course you can. I'm ChatGPT, I'm Claude, I can do everything. Here's the code. Here's three pages of code right now. And it does it, right? But then you look at that and think, okay, how do I deploy that? So the trick for me was getting over that barrier of admitting that I was an idiot in some areas and using the "explain it to me like I'm an idiot" approach, you know? And even then it would come back and say, oh, you just need to do this, and I'm like, no, that isn't clear enough. Tell me what step one is. I am a complete idiot. Just tell me what step one is. And as soon as you...


[00:25:49] Host 1: Paul Barnhurst: Enter my.


[00:25:50] Host 2: Glenn Hopper: World. I say, talk to me like Paul talks to Glenn. That's it.


[00:25:57] Host 1: Paul Barnhurst: Wow.


[00:26:00] Guest: Adam Shilton: I think it's best to leave that for you guys to hash out. But that was it. So every time I got a response, I thought, that's too technical, I can't do that, because I think that's one of the limitations as well, especially if individuals don't class themselves as technical. And I mean, I'm more technical now, but a year ago I wasn't. And instead of walking away, which is the default reaction, right, you've got this problem, you've asked AI to solve it for you, and it's not really giving you anything, not logical exactly, but anything implementable or practical to go by; instead of walking away, just say again, no, we need to break this down even further. And I think that's the trick. It's, what's the smallest step on that learning journey? If you can master that and keep at it, and keep ensuring that there is a next action and that you are getting something out of the AI which you can test and validate and move on from to improve your learning, that's really useful. And that's exactly what I did with Python. So I started with, right, what tools do I need? So tell me what I need to download, tell me what I need to set up, and so on. Okay, fine, I've done that. Now I've run a really simple, like, Hello World piece of Python code or whatever. Okay, now I want to do this. Okay, you need a library for that. Okay, what's a library? Talk me through a library. So that supplemented that bit of my knowledge, and then I could test it, because I could install the library and run the slightly more advanced bit of code and see it working. And I suppose that's the one thing that's nice about code: either it works or it doesn't.


[00:27:30] Guest: Adam Shilton: Either it produces the output or it doesn't. Yeah. So it's structured in that sense. It's not the same as writing a book or producing text, where there's that creative variability that you can really start going round in circles with. So yeah, that third pillar, learning, I think is really underused, because if you can crack that, and you can start getting behind the scenes and becoming expert in stuff, that's going to save you even more time. Because, yeah, AI is going to save you time, but if it can teach you how to use the tools that are really going to save that time, then you're better off investing your efforts there. And what I found is that actually, as soon as I learned Python, or whatever the skill was, my reliance on AI went down, because I didn't have the requirement for it to save me so much time. I didn't need to use AI to do the doing, because I'd already built an app or a script or a workflow or whatever. So I think that's a trick that people often miss out on. The key takeaway there, I guess, is work with it rather than trying to outsource to it. And I think outsourcing is going to become interesting. I don't know whether it'll be 2025, but obviously we now have the concept of agents coming in, with AI able to carry out stuff more autonomously, but it's still very early stages on that and the jury's still out. I'm not sure what to make of them yet. But anyway, at the risk of speaking too much, I'll pause.
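Adam's "smallest step" learning loop, run Hello World, then add exactly one library and one slightly harder step, might look like the following in practice. The revenue numbers are made up purely for illustration, and the standard-library `statistics` module stands in for whatever library a learner's first real task would need:

```python
# Step 1: the smallest runnable thing. If this prints, your Python setup works.
print("Hello, world")

# Step 2: bring in one library (statistics ships with Python, nothing to
# install) and do something slightly more advanced with it.
import statistics

monthly_revenue = [100, 104, 99, 110, 115, 108]  # hypothetical figures

avg = statistics.mean(monthly_revenue)                          # average over the period
growth = (monthly_revenue[-1] / monthly_revenue[0] - 1) * 100   # first-to-last % change

print(f"Average revenue: {avg:.1f}")   # -> Average revenue: 106.0
print(f"Growth: {growth:.1f}%")        # -> Growth: 8.0%
```

Each step is independently checkable, which is the property Adam highlights: the code either runs or it doesn't, so every answer the AI gives can be validated before moving on to the next small step.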


[00:28:53] Host 1: Paul Barnhurst: No, you've got to make up your portion. You know, it should be at least 15%, so Glenn's only at 60. I knew I'd make Glenn laugh. No. Yeah, I'm going to say one thing there. You hit on something that Glenn and I have talked quite a bit about with different guests, and that's the whole thing with AI where a lot of people are like, well, I don't need to learn Excel anymore, I don't need to learn Python, AI can do it. And I always say that people who understand whatever it is they're asking AI about will be more efficient using AI to help them, to augment; they can check it. Do you agree with that? It sounds like you do. Do you think we're missing out if we don't at least learn the basics of many of the things we're asking AI to do?


[00:29:35] Guest: Adam Shilton: I think it depends on the complexity of the task.


[00:29:38] Host 1: Paul Barnhurst: I think there's some truth to that. So elaborate a little bit.


[00:29:40] Guest: Adam Shilton: So you guys probably follow Ethan Mollick. Yeah. So it really annoys me that I haven't read it yet; Co-Intelligence is the name of his book. I can't remember exactly what he said in his post, but he referred to, in my mind, essentially a threshold. We've already seen AI take over low-level activities to the point where a human is removed from the equation. So where the complexity of the task is low, a bot or an AI workflow can tackle it, and we don't need a human doing that anymore. But of course, there is a threshold now that says, right, well, this is a complexity of task that still needs a human, or a human plus AI, to carry out that work. Yeah. So I'm trying to think back to your question now, Paul.


[00:30:28] Host 1: Paul Barnhurst: It was really around like, you know, Python, Excel or things. A lot of people say, oh, you don't need to learn it anymore.


[00:30:33] Guest: Adam Shilton: Absolutely. Yeah. AI can create a simple formula; like, you know, it might be able to produce a basic financial model. I don't spend a huge amount of time in there. Sorry. Again, don't shoot me for saying that. But there is still a level, and there will be a level for a while, whereby getting better at using the tools, getting better at the principles, will be warranted, especially at a senior level. So until then, like, you know, there's nothing wrong with the shortcut. Like, speed up your learning, you know, speed up your ability to use these tools. Until AI gets to the level of intelligence where it goes up to the next threshold, which is then circumventing the next phase of complexity of work or whatever. But that's where the jury's still out, because the timelines are so unknown, right? So I'd say there is nothing wrong with anybody ever wanting to invest in skills, because a lot of skills are transferable anyway. Like, you learn how to build a killer financial model, you learn how to dominate Excel, you learn how to use Python; that type of systems thinking and digital thinking is going to pay dividends, even if you have a role change or you decide to change your career path or whatever it happens to be. So think in terms of transferable skills and getting better. It's the only thing you can do.


[00:31:48] Host 2: Glenn Hopper: You know what I think, kind of on that line: if you're going to be using generative AI as a finance professional, you know the right kinds of questions to ask. Versus if I'm a startup founder and, you know, maybe I have an MBA, maybe I don't, and this is maybe my first business ever. I don't really know what financial questions to ask. I just know, as the owner, I'd like to know these certain things, but I certainly don't know the difference between EBITDA, operating income, net income, maybe not even, you know, gross profit. You don't understand the financial statements. So if you are a financial person, you know the right questions to ask. So that domain expertise is important. If you are an Excel expert, or let's leave Excel out of this, let's say maybe if you're a Python expert, or if you're a coder in whatever language it is.


[00:32:46] Host 1: Paul Barnhurst: R, Python, whatever.


[00:32:48] Host 2: Glenn Hopper: Yeah. If you don't know anything about coding and you ask ChatGPT or Gemini or whatever to write code for you, great. Now I have this Python code; to your point, Adam, what do I do with it? Do I put this in a Jupyter notebook? You know, where can I run this code? And you can't QC it. If it wrote you bad code, or there's an error in its assumptions, well, if you don't understand the language of that code, you're going to miss the error. You could be getting bad data because your initial prompt said something wrong, or it was interpreted incorrectly. So I think, for the time being, the human in the loop, our domain expertise that's outside of what generative AI can do, is where we're going to add value. And so one of the big things that I push for is maybe we all need to learn how to be data scientists. Not to write code, not to be machine learning engineers, but at least to understand what data scientists are looking for: to understand a covariance matrix or correlations, or to do predictive analytics, and to understand these models so that I know the right things to ask the AI to do. And that's where you're going to supercharge: when you already have a certain level of expertise and you know what questions to ask, then I think that's where you're getting real value.
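Glenn's suggestion, understanding the statistics well enough to know what to ask for, can be made concrete. Below is a minimal, self-contained Python sketch of a Pearson correlation computed by hand; the revenue and marketing figures are hypothetical, purely for illustration.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no third-party libraries."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly figures, purely for illustration.
revenue = [100, 120, 140, 160, 180]
marketing_spend = [10, 14, 13, 18, 20]

r = pearson(revenue, marketing_spend)
print(round(r, 4))  # a strong positive correlation, roughly 0.95
```

Knowing what this number does and does not mean (correlation, not causation) is exactly the kind of domain check a human in the loop provides when AI produces the analysis.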


[00:34:16] Guest: Adam Shilton: And I think coming back to your point there, Glenn, about domain expertise and knowing what questions to ask. So going back to the Python argument again. So I've fallen afoul of this recently. So I generated a really complicated bit of code to help with a data processing exercise. So it was combining a load of messy data from different sources into a cohesive output, which is a common task. I mean, some people do it with Power Query, you know, whatever it happens to be. I don't have knowledge in Power Query or any of that Excel based stuff.
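The consolidation task Adam describes, combining messy exports with inconsistent column names and number formats into one cohesive output, can be sketched in a few lines of Python. The source schemas, field names, and sample rows below are all hypothetical, purely for illustration.

```python
def clean_amount(value):
    """Accept '1,200.50'-style strings or plain numbers; return a float."""
    if isinstance(value, str):
        value = value.replace(",", "").strip()
    return float(value)

def normalise(row, name_key, amount_key):
    """Map one source's columns onto a shared schema."""
    return {
        "customer": row[name_key].strip(),
        "amount": clean_amount(row[amount_key]),
    }

# Two 'sources' with different schemas, as exports often have.
source_a = [{"Customer": " Acme Ltd ", "Amount": "1,200.50"}]
source_b = [{"customer_name": "Beta Inc", "amt": 300}]

combined = (
    [normalise(r, "Customer", "Amount") for r in source_a]
    + [normalise(r, "customer_name", "amt") for r in source_b]
)
total = sum(r["amount"] for r in combined)
print(total)  # 1500.5
```

Power Query performs the same normalise-and-append steps through a visual interface; whichever tool you use, understanding the mapping between schemas is the part the human still has to get right.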


[00:34:50] Host 1: Paul Barnhurst: You can take my course.


[00:34:51] Guest: Adam Shilton: Yeah. That's fine. I'm happy to do that. But for me, actually, it sounds silly, but it was quicker for me to run code than it was to learn Power Query. You know, because I'd dabbled a bit in code before, you know, HTML when I used to build websites and all that sort of stuff, so I understood the core principles, I guess. But where I say I fell afoul is because you ask, and continue to ask, AI to improve the code or add to it, because as soon as you build something, you spot something that you want to add, right? You know, it's the same when you're building a financial model or whatever; you think, oh, it would be cool if I could do this, and then you end up with this, like, Franken-code a developer would look at and be sick all over, you know what I mean? And that's where I got to. So I ended up generating a thousand lines of code that worked, because, again, like, I could see the output. It worked, you know. And it's like when you're at school, right? I don't care about the working out; all that matters is that it's the right answer.


[00:35:49] Guest: Adam Shilton: You see what I mean? But that's an inherent risk in using an AI and saying, I want to do this, I want to do this, I want to do this, because it is just looking to carry out the task as efficiently as possible. It's not thinking in terms of, ah, but have you got the right foundations? Will this scale? If this changes, will that break? It's not thinking in those terms. It just wants to help you with whatever your request is. Whereas if you were to speak to a developer, a developer would say, oh, no, no, you've not got your project set up correctly, or, what are you doing it in one file for? You need three files: one file for your core functions, and another file for this, or whatever. That's the way that it scales. I'd never used GitHub, but that's a really good way of, you know, taking chunks out of it, testing whether that's an improvement or not, and then syncing it back to the main code. And these are all concepts that I'd never used before, because I don't have that domain expertise. And I'm sure in the future, like, a specific dedicated AI that is programmed to think like a developer will be able to do that.
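The fix a developer would suggest for single-file "Franken-code" is the structure Adam alludes to: keep reusable logic in small, named functions that, in a real project, would sit in a separate module (say, a hypothetical core_functions.py imported by the main script). A minimal Python sketch of the principle:

```python
# Reusable 'core functions' -- in a real project these would live in
# their own module so they can be tested and changed independently.

def load_rows(raw):
    """Parse raw comma-separated lines into records (hypothetical format)."""
    return [line.split(",") for line in raw.strip().splitlines()]

def total_amount(rows):
    """Sum the numeric second column of each record."""
    return sum(float(r[1]) for r in rows)

# The 'main script' part: just wiring, no business logic.
raw = "acme,100\nbeta,250"
rows = load_rows(raw)
print(total_amount(rows))  # 350.0
```

Because each function does one thing, a change to the input format only touches load_rows, instead of rippling through a thousand lines; that separation is what lets the code survive when the data changes.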


[00:36:49] Guest: Adam Shilton: But I guess that's the difference between what we have now, which is widely available general AI (not artificial general intelligence; we're not talking about AGI, we're talking about AI that can do a load of stuff) as opposed to a niche use case. I think that's what needs to come next. And I think, if I can predict it, in 2025, whether it's agents or, you know, whatever it happens to be, with the advent of these OpenAI Pro models and that sort of stuff, we're going to see more niche AIs for relevant domain expertise, because it's impossible, at least for now, to train a general chatbot like ChatGPT to be really good at a specific thing that requires a decent amount of domain expertise. So yeah, a long way of saying: be careful of trying to shortcut to the output without building the foundations of that specific domain expertise, because it could come back to bite you in the future, either when you're trying to scale or trying to change something. So yeah, don't make my mistake, because now I've got a thousand lines of code whereby, if my data changes, I'm screwed.


[00:37:59] Host 1: Paul Barnhurst: Glenn, I think that was a really good answer. I'm going to give you a round of applause. I just wanted to use my mixer. Thank you.


[00:38:08] Host 2: Glenn Hopper: Yeah.


[00:38:09] Host 1: Paul Barnhurst: All right, well, you know, we've already gone 35 minutes, and we've only got a couple of questions left. I think there's one that we'd love to ask you about, and I'll let Glenn ask the follow-up. You gave a TEDx talk last year. Tell us about it.


[00:38:23] Guest: Adam Shilton: So last July, I was invited to give a TEDx talk, and it was last minute. I was very lucky, actually. I was connected with the event organizer, and it was a cheeky LinkedIn message that said, hey, look, I'm open to speaking if you've got a slot. And he was like, oh, I don't know, we've already got our bookings for July, he said, but hang on a second, I've had one person that isn't confirmed, so hit me up in a couple of weeks and we'll have a conversation. So, me doing the Adam thing, I was like, oh, it's been a couple of weeks, how about it? And he said, well, you're lucky, because they have dropped out. Yeah. Otherwise I wouldn't have been presenting until this year. So that was really good. I can't speak too much about it because it's still not been released yet. But the concept around TEDx is it's got to be an idea worth sharing. Yeah. So what I can say is it is a topic that's close to my heart. It does include AI; it's not all about AI, but it references technology and my belief that technology is going to enable everybody to build careers doing what they love. Yeah, I think the barriers are being broken down.


[00:39:32] Guest: Adam Shilton: I think the playing field is becoming a lot more level. So it walks through a framework for using technology to build a career from the stuff that you're passionate about. And I talk a little bit about my story in the corporate world, you know, climbing the corporate ladder and all of that sort of stuff, before deciding to go out on my own. And yeah, it includes a bit of... I say they're not really regrets. As many people know, my degree's in music production, you know, so that was always an option for me: do I go full into the music, or do I get a job and actually pay the rent? So it's kind of a thought experiment that said, right, okay, well, if I'd had the technology that I have today and I'd gone down that music route, how would life be different? So I'm hoping that people enjoy it, you know, that there is that real sort of motivation to start thinking more broadly about ensuring that you are definitely doing what you're passionate about, because life's too short at the end of the day. So yeah, a blend of tech and my own trials and tribulations through my career, I guess.


[00:40:39] Host 1: Paul Barnhurst: So when you said loving what you do, I have to give a shout-out: Love Mondays is the book that my training partner, Ron Monteiro, just wrote. He goes through and gives a framework for everybody, and shares his story about how he went from kind of hating a job to loving it now and wanting to help others. So, kind of how you mentioned, all of us finding something we're passionate about instead of, gotta go to work, this sucks.


[00:41:00] Host 2: Glenn Hopper: Yeah. And, you know, normally this would have been our first question, but actually we can come full circle here. And the follow-up question I would have to that is: so, Adam, I think you and I have known each other not quite two years yet, from when I did the podcast with you, but you've been through a pretty significant change since we first talked, and you're following your passion. I understand you left, and so you have your own full-time business now. When we first talked, you were working in corporate FP&A. Tell us a little bit about your business, what you're doing now, and what your focus is with that, because we just jumped, we dove straight into the deep end and skipped all the small talk.


[00:41:43] Guest: Adam Shilton: I love jumping into the deep end. You know, as Paul said, you get three podcasters in a room, you never know what's going to happen, right? So, yeah, I've been working for myself since July of this year, actually. So the TEDx talk was one of the first things that I did when I started working for myself. And the completely honest answer is I don't have a defined product yet, and I'm hoping that people listening to this that are thinking about going it alone can have a couple of takeaways, because the traditional thinking with building your own business is, I'll build a runway; like, you need 3 to 6 months' worth of savings, so that if it all goes to pot, at least you've got something to fall back on, right? I've got two kids, like four and two; a six-month runway was never going to be an option for me, right? Kids are expensive, right? And I was very lucky that I had some opportunities to do freelance work, which has been really handy. But the aim for the business is, I guess, part education. You know, that's where the content and the newsletter and all of that sort of stuff comes from. You know, I do want people to adopt the tech. I do want people to become more productive so that they can spend more time on what they love. And at the moment that is split between coaching, a bit of consulting work, and obviously the online content. So that's where it is at the moment.


[00:43:05] Guest: Adam Shilton: But I've not got a paid-for course or a paid-for community or anything like that, because I'm still finding my way a little bit, right? It's still very new; I've been doing this for six months, right? And that's the other recommendation I'd make: you need feedback. Like, you know, don't ever launch a business off of a maybe. Oh, I think that would be a really, really good idea; you know, all my friends tell me that's a really good idea. Good ideas are the ones that people pay you money for, right? You know, so find that problem to solve and go from there. So I'm still finding that problem, so I can productize a little bit more, and I'm feeling my way. But, you know, I've been enjoying it. And it's nothing against the companies that I used to work for; like, I did love my corporate career, and anybody who wants to stay in a corporate career, absolutely fine, there's nothing wrong with that. And businesses are becoming more open now to people building personal brands and being more vocal. I think it is the future. I think there are going to be a lot more employees that are also creators, that have their own voice. But for me, it just got to the point where I wanted to be the master of my own destiny, you know? And whether we get there or not is still to be seen, but I'm enjoying it so far, and yeah, we'll see what happens.


[00:44:15] Host 1: Paul Barnhurst: I think we've got a title for this episode, Glenn: Master Adam, he controls his own destiny. All right, well, we've got a personal question for you, and then I think we'll have to wrap up here; we're almost at 45 minutes. Instead of doing the random number generator and you picking the number, there's a question in here I really like, so I'm just going to ask it. If your TEDx talk had a theme song, what would it be?


[00:44:43] Guest: Adam Shilton: Oh my goodness.


[00:44:45] Host 1: Paul Barnhurst: And you're a music guy. So I figured this is up your alley.


[00:44:48] Guest: Adam Shilton: The difficulty that I've got with pairing with songs is I don't listen to song lyrics, so I probably pick a song that has totally the wrong words. Probably More than a Feeling by Boston. Namely dancer, because the fact that it's like one of my all time favorite tracks.


[00:45:04] Host 1: Paul Barnhurst: It is a great song, I agree.


[00:45:06] Guest: Adam Shilton: Yeah, but I think, yeah, I'll end on a cliffhanger: the ending line, which is the key call to action in the TEDx talk, could lead quite nicely into More than a Feeling. So yeah, I think that would work.


[00:45:22] Host 1: Paul Barnhurst: Any last questions you want to ask Glenn, before we wrap up?


[00:45:25] Host 2: Glenn Hopper: No, no, I'm just dying to hear the Ted talk. So I guess we're just eagerly anticipating its release.


[00:45:32] Host 1: Paul Barnhurst: I am as well. But good news, Glenn: you no longer dominated the entire conversation. We righted the ship throughout this journey today. So you're okay.


[00:45:42] Host 2: Glenn Hopper: You guys want to leave, I'll just sit here and talk to myself for another 20 minutes. We can just. Just bonus overtime. I got this Bonus episode.


[00:45:51] Host 1: Paul Barnhurst: Bonus time with Glenn. There's a new feature.


[00:45:54] Host 2: Glenn Hopper: Yeah. Normally I just go stand on a soapbox out on the corner with my "The End Is Nigh" banner and...


[00:46:02] Host 1: Paul Barnhurst: All right, well, I'm not sure who else will want to listen to this, but I had fun today. That's all I know. Did you have fun, Adam?


[00:46:08] Guest: Adam Shilton: Yes, yes I did. Thanks ever so much for having me. It's been a pleasure.


[00:46:11] Host 1: Paul Barnhurst: Yeah, it's been a real pleasure having you. Thank you for joining us. We'll go ahead and say goodbye until the next episode. Thank you everyone.


[00:46:18] Host 2: Glenn Hopper: Thanks, Adam. Great seeing you again.


[00:46:19] Guest: Adam Shilton: Cheers, guys.


[00:46:20] Host 1: Paul Barnhurst: Thanks for listening to the Future Finance show. And thanks to our sponsor, QFlow.ai. If you enjoyed this episode, please leave a rating and review on your podcast platform of choice and may your robot overlords be with you.
