Podcast

Generative AI Goes to Accounting Class

Key Takeaways

Steve and Catherine ask BYU Accounting Professor David Wood how internal auditors are experimenting with generative AI and how AI is showing up in the classroom.

Show notes:

  • Read the paper David co-authored for more details on how Uniper used ChatGPT for internal audit tasks

  • Hear more about generative AI for accounting, finance, risk, audit, and ESG professionals at Amplify

Season 4, Bonus Episode: Generative AI Goes to Accounting Class | Transcript

Steve: Hello and welcome to Off the Books, where we surf the uncharted waters of accounting, finance, risk, and wherever else the waves take us. This episode is brought to you by Workiva, the one platform that brings together financial reporting, ESG, audit, and risk teams, which is not something you see every day, like a triple rainbow or an ostrich race.

My name is Steve Soter, accounting enthusiast and Diet Coke aficionado. I'm looking forward to debiting a great conversation in this bonus episode, and I'm so happy to have you with us. I'm also very happy, of course, to have Catherine Tsai joining us. Catherine, can you please tell everyone about yourself?

Catherine: Sure. I'm not an accountant or a Diet Coke aficionado, but I like venti soy chais and asking questions and learning new things. So I'm here to do more of that, especially when it comes to generative artificial intelligence.

Steve: Well, it is a great day for learning because we are talking to David Wood, who is the Glenn D. Ardis Professor of Accounting and Data Analytics in the Marriott School of Business at Brigham Young University. David, welcome.

David: Thanks so much for having me. This is a fun topic. I can't wait to explore it together.

Catherine: Great and, David, you don't just have an accounting background, but you've done research on generative AI. Before we dig into that, what is generative AI and how is it different from other artificial intelligence applications that accounting, finance, and audit folks might be using already?

David: So that's a great question. AI is obviously in the news and everybody's talking about it, but it's been around for quite a long time. And at the very basic core, AI is a computer trying to mimic what humans do. And there are very different techniques to do that, from very simple to more complex. Generative AI is the latest and really kind of the greatest thing that we've seen so far. And at its simple core, what it's trying to do is predict the next word in a series of words. So very, very basic. If I start a sentence, "The cat is," it's going to say brown or yellow or running, depending on the context of everything else around it. So at the most basic level, generative AI is generating predictions of what we should see in text.
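
As a toy illustration of the "predict the next word" idea David describes, here's a tiny bigram model in Python. This is purely a sketch: real generative AI uses large transformer networks over subword tokens, not word-pair counts, but the core task of predicting the next token from context is the same. The corpus here is made up for the example.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus echoing the "The cat is ..." example.
corpus = (
    "the cat is brown . the cat is running . "
    "the cat is brown . the dog is yellow ."
).split()

# Count which word follows which (a bigram model).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("is"))  # "brown" appears most often after "is"
```

The prediction depends entirely on the context the model has seen, which is why the same prompt can yield "brown" with one training corpus and "running" with another.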

Catherine: OK.

David: Now how does it differ from what we've seen before? So AI has been in the background of a lot of accounting applications, you know, trying to match numbers or interpret text in legal contracts. But the real difference we're starting to see with gen AI in accounting is that it's so easy to use that I can just talk to it in natural language—so just like we're talking now—and it will understand what I'm saying and then produce output based on what I ask it to do.

Steve: David, how do you see reporting and audit teams using generative AI today? I know you've done research on internal auditors, and of course, I know just in our conversations you've been experimenting with it a lot. But how do you see that going in a practice like today, right now?

David: So with gen AI, the most common one we talk about is ChatGPT, though there's also Google Bard, and I'll probably use ChatGPT interchangeably with gen AI. Mostly what I'm seeing right now is experimentation. It came out last November, and everybody's playing with it and trying to probe the boundaries of: is this going to work or not? Can we get it to do different things? At its most basic level, simple tasks would be "Help me generate an email." So, here's kind of what I'm trying to do: generate this email, or reword something. For example, some internal audit functions we've worked with over in Europe are saying, "Here are the findings from our audit testing. Help me draft this into a report." So that's the most basic level. The other testing is starting to say, hey, can we actually get this to perform tests? So I was just at an accounting meeting with a bunch of professors, and on a panel we dropped in some cybersecurity data on passwords and said, "Generate for me the audit program and then perform all the tests for that audit program on cybersecurity risks around passwords." And sure enough, in 30 seconds to a minute it had generated here are the audit steps, here's the actual testing, here are the results, and then drafted a report. So that's where people are getting really excited about what's possible. It's not just, you know, can I reword an email or fix some vocabulary, but can it actually start to do tasks that an auditor would do on their own?

Catherine: So that test, it sounded like it was pretty fast. How good were the results?

David: So on this one, I mean, it was a demonstration scenario. We only had 30 passwords, and it got every single thing right. It said these are the passwords that are too short. These are the ones that don't have numbers and capital letters. These are the ones that haven't been changed recently enough. So it nailed that task. Now, here's the real challenge, and I think where most accounting functions are: how do we scale this, and will it consistently be good across multiple tests? It's one thing to do a one-off and show it, but when you're starting to put it into production for audit, where consistency and accuracy are so important, there still needs to be quite a bit of testing to make sure we're comfortable with it.
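
The password tests David describes (length, character mix, age) are simple enough to express directly in code. Here's a hypothetical sketch in Python; the field names, thresholds, and reference date are assumptions for illustration, not what ChatGPT or Uniper actually produced.

```python
from datetime import date, timedelta

MIN_LENGTH = 12               # assumed policy: minimum password length
MAX_AGE = timedelta(days=90)  # assumed policy: rotate every 90 days

def audit_password(record, today=date(2023, 8, 1)):
    """Return the list of policy findings for one password record."""
    findings = []
    pwd = record["password"]
    if len(pwd) < MIN_LENGTH:
        findings.append("too short")
    if not (any(c.isdigit() for c in pwd) and any(c.isupper() for c in pwd)):
        findings.append("missing digit or capital letter")
    if today - record["last_changed"] > MAX_AGE:
        findings.append("not changed recently")
    return findings

sample = {"password": "winter2022", "last_changed": date(2023, 1, 15)}
print(audit_password(sample))  # flags all three policy violations
```

The scaling concern David raises is exactly why a deterministic check like this is easy to trust: run it on 30 passwords or 30,000 and it behaves identically, whereas a generative model's output has to be re-validated for consistency.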

Steve: One of the things that occurs to me, and I love the description that you provided of generative AI. Hey, what's the next word? You know, what word comes next? And of course, I think about narrative descriptions in financial reporting. Any thoughts about how companies might be using—either today or in the near future—generative AI to draft some of those disclosures?

David: You know, that's a great idea. And I think kind of the Holy Grail is: could we take just the audit findings, click the button, and generate the reports? So that's for an audit report, and they're pretty boilerplate. But start to think about if you're a banker. What if you could have access and, in plain language, query the financial transactions of a company and then get the results back? Or if you're an investor: "The company's put out all of these different disclosures. Summarize it for me very, very quickly." So I think there's going to be a lot of applications. There have been some people who've tried to use it for stock picking and seen some pretty good results. I don't know if they've actually put it into practice to make real money or if it's just, you know, hypothetical portfolios. But even beyond that: interpret what the CEO is actually saying from all of this conference call data and the other disclosures, and tell me what it really means. So those are a few of the potentials that we could see. Now, could we actually get to something like a Workiva, where you put everything in, you click the button, and the financial statements are fully generated? I think a lot of people would like that, but I don't know how soon that will come.

Steve: When I start to connect those dots, I begin to think about MD&A in a 10-K or a 10-Q. I begin to think about press releases and those types of narrative disclosures. And not that you would just, you know, push a button, pull my data, write a narrative, but hey, get me started, so that all of that manual work that I would've normally done, that may take days or weeks, now only takes, I don't know, minutes or hours. And that gets me so much further ahead as a starting point. Do you see that being maybe the entry point of where these types of applications actually become used at scale?

David: Yeah. So I think of our students. You know, a student might type an essay question into ChatGPT and copy and paste the answer in, and that's so easy to see. That doesn't make any sense. The smarter student—and I think this will apply to the professional—will use it to get started. So 80% of the work by the AI, and then add their expertise for the 20%. Generate that first report, but then go back in and add the nuance or the additional understanding that the AI missed to really polish it and bring it up to speed. And you start to think about that: right now they have to draft the full report. If you could take out a big chunk of that time and put it toward polishing or refining, you suddenly could do a lot more refinement in reports or, in the case of the example I gave, student essays. So I really do think that, right now, AI is going to be a wonderful copilot. You know, it will do the first 50 to 80% of the work, and then the human can come back and do other things on top of that. So, yeah, I agree with you. That's the near future. Now, a few years down the road, I think it will probably automate the full thing. "Copilot" is Microsoft's word, but we see the same thing in the research community. We're using it in our research papers. Can you get rid of that blank page and just get started? So many people are so scared by the blank page. And if you can get some text down, ok, now I can get going. It kind of prompts me to get started.

Catherine: I did want to ask about the case study that you and your colleagues worked on, where you looked at a European company. What sort of time savings did that company see when they used ChatGPT in their internal audit processes?

David: So Uniper reached out to us last March and said, hey, we're starting to experiment with this. Would you be interested in learning what we're doing? And we said absolutely. They took the entire internal audit process and laid it out, and then asked where they could use ChatGPT in each step. They started with risk assessment. So if we're going to do an audit of, say, physical security, what are the important things we should look at? And it generates these ideas. And they said, well, that's really good, let's just add one or two more to it. And then they said, well, let's actually do some audit testing. And sure enough, there again: drafting audit reports, kind of the things we've talked about. They were seeing 50 to 80% efficiency gains in doing the same tasks. If you think about internal audit, which is resource constrained, suddenly if you can get a 50% improvement, you can double your testing and really start digging in and finding either problems or improvements that you could make in the organization. So they were all on board, and after they finished their pilot tests, you know, they immediately put up job postings to get people to come help them figure out, hey, how do we scale this, and what else can we do with this cool new technology?

Steve: It reminds me of a comment that Grant Ostler made in a recent episode. We were talking about the IIA Conference, but he was mentioning, David, as you just described it, that companies are using that technology today to augment their current workforce to help kind of close, you know, a labor shortage. Again, not to replace necessarily those professionals, but to get them started to do, you know, 50% or whatever of the work. And it's having a real impact just in terms of, you know, them being able to have the resources to complete the vast amount of work that they have to get done.

David: Yeah. Well, and it's critical right now. You've probably seen the articles coming out about how big of a shortage we have of accountants, and this is going to help as a stopgap measure until we can get more people to come into accounting. We need help, or otherwise we're just going to burn everybody out. So I think AI came at just the right time for our profession, given how many people we need to get all this work done.

Catherine: So we've heard about the potential upside of generative AI for the profession. What about the risks that we need to keep in mind?

David: Yeah, absolutely. We haven't found a perfect technology. They all have drawbacks and potential concerns. The biggest one people are talking about is the HR, human side: is it going to take all of our jobs? We'll set that aside and can come back to it, if you like. One of the other big ones is hallucinations. A hallucination is when you ask ChatGPT to generate something and, because it's, you know, trying to make these predictions, it sometimes gets it wrong. It will just make something up. So, for example, I could say, "Tell me an important audit step that I need to look for in physical security again." And it might make something up for a completely different industry and put that in, and that's a hallucination, something that doesn't exist. So you have to be really careful with the output. You can't just trust it and say, oh yeah, this is the truth and the whole truth and nothing but the truth. You have to go back and question it. There are also a lot of questions around how we train these models. You know, is there copyright infringement? That's all being sorted out. But then, how do we use it ethically? Is it ok to use ChatGPT? You know, could I write this podcast with ChatGPT and not attribute it and just say it was all my own thinking? There's a lot of discussion around what's the right thing to do when you use AI to help you, or to fully do your task for you.

Catherine: That gets back to the idea of needing a copilot, in this case gen AI needing a human copilot to kind of steer it in the right direction.

David: Yeah, and I think most people right now are saying that if you use it, you should probably disclose it. But you need to take responsibility. You can't say, oh, I wrote that wrong, that's ChatGPT's fault. No. If you wrote it, or if you got ChatGPT to write it, you're responsible for whatever comes out. So you really need to check the sources and what it says, and take responsibility even if you didn't produce it.

Steve: David, do you have any thoughts on the privacy and security elements of this? I ask because we've kind of gone through this just recently at Workiva: being sure that we could deploy technology that was not going to learn on any of our company's data. I won't say the name of the company, but there's a reasonably large tech company whose terms of use had been updated so that they could actually use their customers' data, interactions, and conversations to train its language models. They subsequently said that they had never done so. But if what you're saying is that the best use of generative AI is the tool being so smart and so intelligent that it knows what you're about to say next before you say it, you can't do that without a vast amount of data, and that data has got to come from somewhere. Yet it seems like if it's not using current data to do that learning, then it's always going to be just two steps behind. It's always going to be a little bit off, not exactly what you were looking for. I hope I'm describing my question correctly, but do you have any thoughts on that kind of odd paradox?

David: Yeah, you just raised about 50 questions in one comment. So we'll try to unpack a little bit of what's going on here. There's the privacy concern of when you go to, say, ChatGPT or Bard and type something in, it's getting sent to that company, and then they're doing something with it and sending back a result. And so if I copy and paste in, you know, the salary data of my company, trying to do some type of analysis, is that safe? Can everybody now see that? That's the first one, the simple one. Companies are working very hard on building a full silo so that the data is safe, that it's encrypted, that nobody else can see it. So, for example, if you go with Microsoft, it'll just be their entire Azure stack. And just like all of the other data that's stored on the cloud, it'll be secure and private. So that part is being dealt with.

But then you bring up the second part, which is the harder one: once you put that information in, do you allow the model to learn based on what you told it? For example, I might say I need to figure out a tax question. Can I deduct this expense? And if it comes back with a result, it may be right or it may be wrong. If it's wrong and I correct it, now the model can actually get smarter. And so if I keep feeding all of this information and data in and I make these models smarter, who owns that information I put in there? Is it ok that ChatGPT or Google Bard uses that to make their models smarter, or not? And one of the things you're starting to see in the industry is people building models that are completely inside the organization. One of the Big Four accounting firms, for example, is trying to build a tax model that is based only on everything they put in (not their clients' data, but the questions their professionals ask) so that they can keep training it to answer that question correctly next time, and the next time, and the next time. But it's not being shared with everybody else. Once you start getting into inputting customer or client data, that's a very tricky one. And as you mentioned, contracts are starting to reflect that. And I think there's going to be a lot of negotiation around whether we allow this or not, or how much we pay to allow somebody to let me use their data. So nothing has been solved in these issues. It is the big discussion around how you train these models.

Steve: Well, reminder to the Off the Books team to get all of my statements approved with Workiva legal before we drop this episode. Now, I'm a little bit nervous about that.

David: Hey, it's no worry. Just say ChatGPT said it and you'll be all right.

Steve: Hey, there you go. There you go. I know that we've got some other questions for you about the future of AI. We know that you're doing work with your students, which creates a whole other set of issues. But I do want to ask: I think about the traditional view of digital transformation, where you're taking what was a manual process—from my perspective, in accounting and finance—and you're now digitizing that using technology and other tools. Does that play a role in this? Because I'm thinking that a lot of organizations who are now experimenting are starting to see some potential uses for this, but they may have to look back and say, well, shoot, my data sources, or the way I capture a process or handle my data, might not be prepared to take full advantage of AI. So should companies also be thinking about their digital transformation efforts with respect to being able to maximize the potential of gen AI in the future?

David: Absolutely. So think about what brought us to this point, that this is even possible. We've increased computer speed, we've increased computer storage, and in that storage we've been, you know, storing away massive amounts of data. And it's largely dark data. Nobody has known what to do with it all this time. Now this technology comes in and sits on top of that. Well, you can't train a model, and you can't use the model, if you can't get the data into shape to train on. So your data warehousing needs to be set up in a way that you can apply that data to gather insights. It's the same thing on a small scale. You know, if your data is all messed up in Excel and all in the wrong cells, you can't do a formula on it. You've got to organize it and understand it. Now, the thing about gen AI is that it does a better job of even helping sort and do some of that data cleaning to help you get up to speed. But you're exactly right. The better your data is structured and formed and understood and tagged, the easier it's going to be to feed it into models to then learn from what's in there and gather insights.
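
The Excel point generalizes: before any model can learn from records, the records have to be consistent. Here's a minimal sketch of that kind of cleanup in Python, using made-up column names and values purely for illustration.

```python
import csv
import io

# Messy spreadsheet-style export: stray whitespace, formatted
# numbers, and an empty row. (Made-up data for illustration.)
raw = io.StringIO(
    "account,amount\n"
    '  Travel ,"1,200.50"\n'
    "Meals,300\n"
    ",\n"
)

clean = []
for row in csv.DictReader(raw):
    account = row["account"].strip()
    amount = row["amount"].replace(",", "").strip()
    if not account or not amount:
        continue  # drop rows too broken to use
    clean.append({"account": account, "amount": float(amount)})

print(clean)
```

Trivial as each step is, this is the "organize it and understand it" work David describes: until it's done, neither a formula nor a model can make sense of the raw export.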

Catherine: Well, where do you see generative AI going for accounting and finance?

David: You know, this is the million-dollar question, and if I knew the answer, I would be a millionaire. It is a really tough question. I'll say a couple of the principles that guide my thinking on this. One: most people view accounting as a cost center, so they're not trying to maximize how many accountants they hire. They're trying to minimize that number and get more out of them. So I think we're going to see as much automation as possible to streamline that process. Now, when I say accounting, we really should have two words. Here I mean the back-office compliance function, those types of activities. And by streamlining and automating that, I think the accountant of the future is going to be focused more on how to add value. So rather than just recording the past, what does this help me know about the future and the predictions, what data analysts do? So the role of accountant, I think, will shift much more into data analyst: providing guidance on, hey, here's the direction, here's what we need to do to have new opportunities. So anything compliance oriented, if I were the CEO, I would say, hey, how can we automate this with either AI or other tools? And then free up all that time to start saying, hey, now how can you help me see the future, and what should we be doing, and making business decisions that will help the organization?

Catherine: Yeah, if that's the case, what do you think accounting and finance professionals or even students need to do to get ready for that kind of a future?

 

David: Yeah. First off, you need to be brave. There's a lot of change coming, and it's going to come at a very rapid pace, is my expectation. So first, don't be scared of it. Jump in and start playing with it. You know, day two of our data analytics class is now going to be on prompt engineering, talking about these models and helping students see, hey, how can you use these to learn differently and improve how fast you learn and what you learn? And I'd say it's the exact same thing for a professional: what can you do now that you couldn't do before? It's like a calculator. Rather than doing the math longhand, you can use a calculator to answer those questions. A CEO of one of the Big Four accounting firms recently said to their people, anything that ChatGPT can do, we want you to have it do. And so they're trying to push all of those tasks onto it. Once you start doing that, you'll see the power and the limitations. And once you find the limitations, that's where the human adds the value. Interacting with ChatGPT is not the same as building a relationship where you trust a person, and then you're willing to say, ok, I don't know what the answer is, but I know you're going to help me get there. Through those types of activities, we're going to find more and more what the human does well, and that's what we will do, and what the computer does well. We'll let the computer add up all the numbers and do the matching and all of those types of things, so we don't have to do that boring type of work anymore.

Steve: David, do you think about the risk of the dumbing down of the profession generally? I ask this question because, you know, you or I have been in the space for a while (and I'm no professor, so let's be clear on that), but I could probably tell a pretty good ChatGPT-generated 10-K disclosure, for example, from a bad one, and I would probably be able to very quickly identify, hey, you're missing some pieces here. And I feel like if I were still doing SEC reporting today, for example, that could be a really big help to me. If I fast forward 20 years, when that starting point was always done by generative AI, is there a concern that those professionals in the future might not have that skill set? And I suppose that you could go back to the calculator, or the printing press, or however many, you know, modernizations there have been in the world and point to the fallacy of that argument. But I'm wondering if you could unpack that, because that's something that I think about a lot.

David: I have an opinion on this, and I'll say there are others on the other side of the spectrum from this opinion. You bring up history, and I think that's the best way we can predict what's going to happen. So the computer comes along. It makes writing a ton easier. My grandfather earned a PhD, and he did it on a typewriter, and grandma would help him retype the pages when he had one misspelling and do all the formatting. Did the computer make him dumber as a scientist? I don't think so. He was able to do a lot more. He was able to type faster, revise quicker, and do things better because of the computer. And that extends. Start thinking about all the science where we put AI on the problem to look at new molecules and generate data and understanding in new ways. I think we're going to see the same thing. I'm not aware of many technologies that have made us dumber, aside from maybe some of the social media websites. They seem not to increase our IQ or our ability to work together as human beings. But the other technologies help us be more productive. For example, imagine auditing: instead of taking a sample, what if AI can help me look at the full population? So now it's not that I'm going to stumble across and maybe find some inefficiency or mistake. I can find all of them, and I can find all of them very quickly, so now I can fix them.

Now, on the other side, I will say: how do you train somebody to come in at what is now a manager level when they're a staff? That's hard to do. And so I think education is really going to be in the crosshairs of how we upskill people a lot faster, so that rather than producing content for a few years and kind of learning the nuance, they can see content and review it and understand what needs to be there. I don't have the answer for that piece yet. That's what I'm trying to work on: how do I help students get to where, five years ago, it would take two years to get? How do I get them to that place in six months and just speed up their learning process?

Steve: But it sounds like, as a professor, to use the calculator example in a math class, you're not in favor of, "Hey, everybody, put your calculators away. I don't ever want to see them in this classroom for the entire semester." Is that a fair way to describe your thinking?

David: That's absolutely fair. As a professor, I'm going to say, okay, right now put your calculator away because you need to learn some basics. Now get it out and build on those basics. And it's going to be the same thing in accounting. You still need to know that debit goes on the left and credit on the right. But do you need to spend as much time learning and practicing that? Or, once you get that fundamental understanding, do we then move on quickly to the next piece? And I think that's going to be the hard part: saying what stuff do we still have to teach and help people learn so that they can move to that next step and not forget all the important pieces underneath it. I don't think we have that figured out, and I think that's what we're going to be figuring out over the next couple of years as educators.

Catherine: We have more questions for you about generative AI in the classroom, but we'll take a quick break first.

Andrew: Amplify is the conference for accounting, finance, ESG, audit, and risk professionals. Join us in Nashville, September 19 through 21, for workshops, keynotes, and the entertainment Music City's known for. Register at workiva.com/amplify.

Catherine: We are back with David Wood from BYU talking about generative artificial intelligence and the accounting and finance profession. So we were talking a little bit about the use of gen AI in the classroom. How are your students using it so far?

David: You know, it's really new, and so you see a full range of things. You see students who don't even know what it is and haven't even tried it, students who are doing everything with it, and everything in between. So it really is an emerging tech. And just like any emerging tech, some see it and jump in, and others, you know, are oblivious and don't even know it exists. The real questions right now among professors are: how do we use this? Do we just ban it? Do we incorporate it? How do we incorporate it? Where is it going to be helpful, and where is it going to be hurtful?

Catherine: Has ChatGPT been taking any of your students' exams for them?

David: You know, on some of my exams, I tell them they should use it. So I've taken the position, in some settings, of trying to separate my content into two kinds of things: these are things you should be able to do, and these are things you should know. And if you need to know it as a business professional (you can't stand in front of a CEO or an audit committee and say, let me look that up), then I help students realize that, as a professional, you need to know this, and they get on board. Then it's a test of, you know, memorization, seeing if they know it. But on the other side, for the stuff they need to be able to do, you don't have to memorize that. You have Google, and you can look that up. You have ChatGPT for writing code or programs. And there I'm embracing it, because I can tell the students, my expectations of what you can do just went up. It'd be like, you know, having a race and deciding, okay, do we allow the Iron Man suit in the race or not? If you allow it, you're going to see the times be a lot faster. If you don't allow it, they're going to be a lot slower. So where do you want that extra boost, that Iron Man suit, in what you're doing? And where is it important to say, no, this is a place you can't have that Iron Man suit?

Steve: Is there anything specifically that either you or other professors are thinking about to help students' writing ability specifically, since that seems to be, at least right now, the largest efficiency or potential efficiency from gen AI?

David: So I would say it's actually coding, even more than writing. I think it's having an even bigger effect on writing and generating code: your Python, your SQL, all of those things. But to your question on writing, there are a couple of thoughts. One is, again, ban it and let the student work through it and try to figure out how to write. The other side, which I'm a little bit more of a proponent of, is to have the student use gen AI to clean up all the stuff that I don't care about as an accounting professor. I don't want to spend time telling them, don't use passive voice; you know, that's an incomplete sentence; fix that grammar. I want to see whether they can really dig into IRS standards, pull out the relevant and important code, and then communicate that in a way that's convincing. So in this type of setting you say, yes, I expect your essays to be sparkling clean in terms of grammar and all of that done right, and then I'm really going to dig into the thought process. By doing that, it saves me a ton of time on the stuff where I don't add a lot of value as an accounting professor. For an English professor, that might be a different answer. They might need to teach some of that other stuff. But for me, I can focus on the things they really are coming to me to learn, which is how to be a great accountant.

Catherine: How well does ChatGPT do on accounting exams?

David: So that's where I was first introduced to ChatGPT. I found out about it a week or so after it came out and thought, this is kind of cool, let me play with it. I put in some silly prompts, write me a poem, tell me a couple of jokes, and then I thought, wait a second, could this answer accounting questions? So I copied in a couple of accounting questions. It got the first one right. It got the second one right. Then it missed the third one. I thought, okay, this is interesting. It got enough right that it was kind of scaring me, and yet it wasn't perfect. So we got a collection of over 300 professors together and had them put in all their test questions. This was ChatGPT 3.5, and we found it got a little over half of them right. Students dominated; they were great, and all the accounting professors said, okay, it's no big deal, false alarm. The day that paper was accepted for publication, ChatGPT 4 was released, and we said, wait a second, now the game has changed. We went back, and instead of the accounting questions we put in professional exam questions, using training material for the CPA, CIA, CMA, and Enrolled Agent exams. With ChatGPT 4 and some additional tweaks, it passed all of them at about 85%. So from one iteration to the next, we went from a failing college student to a certified professional knowledge level. That caught my attention, and that's why I spent all summer playing with this and trying to figure out what we do next. Because if it can learn and improve that fast, it's going to have big ramifications for the profession.

Catherine: Wow. Sounds like we'll need to have you back sometime to see how the world of gen AI has changed.

David: The challenge is that will probably be next week, because every single week some new thing comes out. It's just moving so, so quickly. I'm trying to get fall classes ready, and I keep wondering, what do I put in this spot? Well, I'll have to wait till the night before.

Catherine: That's right. Well, we appreciate you coming by. And I think we've hit the point in the program where we get to our closing question of the day.

David: All right. Thank you for having me out. This has been a lot of fun. And I look forward to hearing from people that listen to the podcast and suggestions in what they're seeing in practice.

Steve: Well, that's a great call-out. If you're watching or listening to this on YouTube, you can certainly put comments there. But as we always say at the end of our episodes, definitely reach out to us at OffTheBooks@Workiva.com.

The question, David, we want to end with seems fairly apropos. If you could have technology make one decision for you for the rest of your life, what would that decision be?

David: You know, you mentioned this question just a little bit before the episode started, and I was drawing a blank, which I think says something about where AI is at. So I actually put it into ChatGPT, and it gave me answers: think about my health, my diet, finances, learning. But I would love it if it could come up with fun things to do. When I go home and the kids come up and say, Dad, what are we doing tonight, or what are we doing this weekend, if it just said, here's a thing everybody would enjoy and be happy doing. If it could come up with those recommendations for a vacation or an evening activity, that would make my life so much easier. How do I entertain my kids and someday grandkids? I think that would be a wonderful use of AI.

Steve: Or how to answer closing questions on podcasts. Who knows?

David: I hope to be doing one of those two activities more than the other for the rest of my life.

Steve: Well, for your sake, I hope so too. Catherine, how about you?

Catherine: I would want to ask it for gift ideas when I need them in a pinch. Like, what can I get for an anniversary? What can I get for a birthday?

Steve: Smart. Smart. Very smart.

Catherine: What about you, Steve?

Steve: You know, I went a lot of different directions on this, and actually this is the point where David's going to regret he joined the podcast. At first I was thinking it could pick what to wear every day by looking at my calendar. As it happens, this is an audio-only version, so for all our audience knows, I'm wearing a tank top right now. But I thought, okay, that would be useful. Then I started to think about a practical decision I have to make every single day, and you'll crack up here, but I introduced myself as a Diet Coke aficionado. I generally have one at lunch, but I like to mix it up a little bit. Sometimes it's Dr. Pepper or Diet Dr. Pepper, or if I'm at a Chipotle that doesn't have Dr. Pepper, it's Mr. Pibb. I give some serious thought to what kind of soda I'm going to drink at lunch. If I could have technology just handle that for me, and it knew the ratios and the flavors and everything, I'd be a very happy person.

David: It will eventually handle that, and it'll probably just inject it directly into your bloodstream in a couple of years.

Steve: Well, and I would love that, minus the fact that I love the way it tastes. If it goes straight into the bloodstream, you know, I miss out on the delicious taste. Well, David, it has been so great having you here again. We really want to thank you, David Wood, the Glenn D. Ardis Professor of Accounting and Data Analytics at the Marriott School of Business at Brigham Young University. And we also want to thank you, dear listener, for surfing along with us. I'm Steve Soter. That was Catherine Tsai, and this has been Off the Books presented by Workiva. Please subscribe or leave a review and tell your buddies if you like the show. If you're watching this on YouTube, as we mentioned, please put a note in the comments. Tell us what you're thinking about gen AI or feel free to drop us a line at OffTheBooks@Workiva.com. Surf's up and we'll see you on the next wave.

Off the Books Bonus Episode: Generative AI Goes to Accounting Class with guest David Wood

Duration

35 minutes

Hosts

Steve Soter, Catherine Tsai, David Wood
