Podcast: AI Review Tax - A Path To Burnout
In this episode of Workplace Economies, hosts Jon Kent and AJ (Adam) dig into a concept Jon has coined the "AI review tax": the hidden workload burden created when AI-generated output bypasses proper review and lands on the desks of already stretched senior employees. Drawing on original research and a companion article published on Workplace Economies, they examine how the widespread removal of junior roles, often justified as an AI-enabled efficiency gain, is in fact destabilising the very workflow structures that make organisations productive. Far from reducing the burden on experienced staff, they argue, the indiscriminate adoption of AI tools is concentrating unmanageable review work at the top of the org chart, fuelling stress, poor decision-making and burnout.
The conversation broadens into wider territory: the collapse of the junior talent pipeline, the self-defeating logic of exponential growth culture, the limits of AI as a substitute for human context and judgement, and the emerging figure of the solopreneur, empowered but ultimately overwhelmed by AI productivity tools.
Jon and AJ bring genuine founder perspectives to the debate, drawing on their own experiences building software products, and conclude with a characteristically direct assertion: nothing fundamental has changed in how good work gets done. Organisations that understand this and keep humans meaningfully in the loop at every stage will outlast those racing to automate their way to the finish line.
Show Notes
- Jon introduces the concept of the AI review tax, based on his Workplace Economies article, and frames the episode around why companies cutting junior headcount are creating more problems than they solve.
- The hosts define “work slop”, a term originating from Stanford’s Social Media Lab to describe unreviewed AI output that gets passed up the chain unchecked, creating downstream workload.
- Jon walks through the traditional software development review cycle (feedback, design, build, PR review, QA, staging, release) and explains how AI disrupts this structure by producing code that bypasses established checks.
- HBR research is cited, referencing a UC Berkeley researcher who spent eight months embedded in a 200-person tech company, finding that non-developer staff were generating AI code and passing it informally to seniors.
- AJ raises the broader applicability beyond software: marketing, journalism, legal, and other professional sectors are equally exposed to AI-generated ‘work slop’ and the review burden it creates.
- The hosts discuss DBS Bank cutting 4,000 roles while creating 1,000 AI specialist positions, and debate whether large-scale workforce restructuring around AI represents strategic planning or short-term cost-cutting dressed up as innovation.
- An EY survey of 500 senior leaders is referenced, with more than half reporting that they feel they are failing to keep pace with AI’s rapid advancement. The hosts suggest the machine is running faster than anyone can manage.
- Jon and AJ consider the solopreneur thesis attributed to Sam Altman that AI will enable one person to run a large organisation, testing it against their own experience of managing multiple simultaneous AI workflows and finding it falls short in practice.
- The concept of AI as a ‘reactive listener’ responding to please rather than to understand is explored, drawing on FranklinCovey’s principle of listening to understand versus listening to respond.
- Jon closes with a call for maintaining human oversight at both ends of any AI-assisted workflow, arguing that junior roles need not disappear but evolve, becoming the prompt engineers and first-draft producers of the AI era, with seniors retaining the review and decision-making function.
Podcast Transcript
Jon:
Hi everyone and welcome to a Workplace Economies podcast with me, Jon Kent.
AJ:
Hi, I'm AJ, Adam, and yeah, welcome. It's going to be an interesting one, this one.
Jon:
Well, I hope so. So I'm basing this off one of the first articles for WB that I've written, which is on a term I call the "AI review tax", and specifically why, with this new tax that exists, you need to not be cutting your junior roles; you actually need to be hiring more if possible. So the whole premise behind it, to quickly give a synopsis, is that AI gives out quite a lot of "work slop", which is the term, I think it was Stanford, for the work that AI produces that no one checks and then passes up the line, and then other people have to work it out. So it actually creates more work. But what that's leading to is senior roles having to do the review themselves and eventually burning out. So in other words, AI is not making people more productive; they're doing more, but they're actually achieving less. So that's, I guess, the synopsis. Adam, you've quickly read the article. What were your thoughts? What were the initial things that stood out?
AJ:
A couple of things, really. First of all, I'm probably one of these people that are causing the burnout in the sense that I'm vibe coding something and you're checking it. So I'm pushing all the pressure onto you to make sure my software is really good. So I get it from that point of view, because I've experienced it. But no, in actual fact, I largely agree. I think it's a very interesting premise that this is one of the hidden outcomes of people thinking that they can replace entry-level people, the grinders, the worker bees, as you called them earlier, with a piece of software that doesn't understand the outcome, it just understands the task.
Jon:
Yeah, I think the same. So I did do some research into this. It was actually prompted because the review of what AI produces is something that I've been harping on about quite a lot to anyone that stays still long enough. Because I have seen a lot of, you know, even professional developers and companies where they're treating what AI spits out as completely golden stuff, and just, you know, rolling it into the development process. And we will talk a lot about development here, obviously, because that's what both of our businesses are about: software development.
But usually, in the process, you'll have the feedback or the context being provided by stakeholders, which could be customers, could be senior management, could be lots of different people, could be from feedback and analytics. You've got that context, you then talk to the developers and come up with an idea of a solution for that problem, the developers go away and build it, and that then gets checked through a pull request, a PR, by another developer to make sure that the code actually is good. So they're peer reviewing the code. That then gets released to a staging environment, which QA will test. And if at any of these stages it doesn't work, it goes back to the developers, or sometimes actually even gets thrown out and goes back to the planning stages. And then eventually, at the end, you get to: yes, it's released software.
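The cycle Jon walks through can be sketched as a gated pipeline, where a failed check sends the work back to planning rather than forward. This is a minimal hypothetical illustration (the stage names and `run_cycle` function are invented for the example), not code from either host's products:

```python
# Hypothetical sketch of the review cycle described above: each stage gates
# the next, and a failed check sends the work back to the design stage.
STAGES = ["feedback", "design", "build", "pr_review", "qa", "staging", "release"]

def run_cycle(passes_stage, max_attempts=50):
    """Walk the stages in order; on a failed check, loop back to 'design'."""
    history = []
    i = 0
    attempts = 0
    while i < len(STAGES) and attempts < max_attempts:
        stage = STAGES[i]
        history.append(stage)
        attempts += 1
        if passes_stage(stage):
            i += 1                      # gate passed: move forward
        else:
            i = STAGES.index("design")  # rework: back to planning
    return history

# A build that fails QA once before shipping:
results = iter([True] * 4 + [False] + [True] * 20)
trace = run_cycle(lambda stage: next(results))
assert trace[-1] == "release"   # it eventually ships
assert trace.count("qa") == 2   # ...after one failed QA round
```

The point of the sketch is the loop-back: in the traditional structure no work reaches "release" without passing every gate, which is exactly what informally handed-over AI output skips.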
But what I've seen is that AI acts a bit like a really excitable junior developer, where it goes: I think I've got a solution for this which is going to be absolutely amazing, and by the way, it also obviously tells you that you're amazing for thinking that this is a problem in the first place. And look at this, I've come up with a solution. It's usually in isolation, like a junior developer as well. It might go: I'm using all of these new different tools, even though you've actually solved that problem elsewhere in the code base. I'm going to use all of these tools, and yeah, I've done it, and isn't this great, and I've created a thousand lines of code. And then people are going "oh yep, that's good", accept it, and that's then rolled into their production environment. So I keep saying, you know, the most important thing is that you proofread it.
What's happening now, and there's HBR research that stated this, is that AI isn't reducing the workload, it's actually intensifying it for certain people. Because in that process, the people at different stages are also doing the coding, because it's fun, because you can type in a prompt and see a result and go, oh, this is brilliant. So you hand it off to a developer or to a senior, who then reviews it and is then trying to backwards engineer it.
So what we've talked about quite a bit before, and I'll finish my monologue in a second, is the fact that when we're building something, you need to make sure that it's within the context of the project: you're building everything in, for example, the same coding language; it's built with the same structure; you've separated things out so that if one thing breaks it doesn't affect anything else; and you're coding things once... and the AI isn't doing that. So the developers are trying to backwards engineer that, and that's also, in development circles, causing huge rifts between those that are using AI and those that are going, "there's no way an AI can replace a developer!"
So this tax is leading to burnout and is leading to bad code. But I'm saying that there is a tax, it has to be borne by someone, and actually, in a normal situation, you would have the juniors doing that work and passing their first draft to the senior to review: completed stuff rather than work slop.
AJ:
But that's an inward investment, right? Because what the AI is not doing, although technically it does, is utilising what it's gaining. It's gaining information, it's not gaining knowledge. Whereas a human doing that would gain information and therefore knowledge, and be able to apply it later on. In the truest sense, is AI doing that? So that's part A. Part B of the question is, you've spoken about how both of our businesses are all around development, but this is true of not just development. I mean, look at marketing, this is one of the big areas. I've seen so many adverts saying, you know, for £26.99 a month you can replace your whole marketing team. I'm sorry, I'm calling bullshit on that!
Jon:
Oh, there's so much of that... yeah!
AJ:
So this isn't restricted solely to the development world, right?
Jon:
No, absolutely, and... well, the question that I can remember I'll answer, which is that it's not just the development world. Whatever it's spitting out, you need to check. Because, I mean, I'll go really big scale now: everything that's worthwhile doing takes time and patience and needs to be refined to make something good. Everyone's first draft... anyone that works on books or software or marketing knows this. You never would go into a marketing meeting and someone goes: oh, I've got an idea. In the Cadbury's marketing department they wouldn't have gone: you know what, guys, I've got a great idea, a gorilla playing the drums to Phil Collins, and everyone goes: brilliant, done, let's walk out. There would have been so much iteration done over that. And that's what you need: the slowly improving and iterating and providing more context to the argument. So yeah, it's definitely not just development.
AJ:
Yeah, I mean, it's the speed and access to ideas, but no one's actually judging it. All that slop that's being thrown out there, no one's actually thinking: is this actually any good? That's what we're saying, right? It's the bang it out, because banging it out is the most important thing. So again, creating quantity over quality: smash it out, and somebody else somewhere up the line is going to deal with it.
Jon:
Well, yeah, so that's what's happening. But what I'm trying to say is that we always need the iterations, we need those first steps, the blank canvas to something, before you go from something to something great.
AJ:
We're leapfrogging... we're trying to leapfrog failure, yeah.
Jon:
Well, yeah, and what's happening is that because companies are getting rid of juniors, that job is then being pushed onto the seniors, who are time poor. Well, it's actually two things: either work's being pushed onto the seniors, or other people are now moving into that sort of junior role, because they can, and it's fun, and because they're going, well, the senior's too busy, so I'll get this work done and then I'll hand it off, because they feel like they're actually being more productive and helping the seniors. I think that's what's happening, but actually they're making their lives much worse.
AJ:
From, you know, listening and reading the article this morning, there's quite a lot of data and research that's actually going into this, right? This is a serious problem.
Jon:
Yeah, yeah absolutely... so um I'm just uh seeing if I can remember these points off the top of my head...
AJ:
There's a huge amount of fact and research that you've pulled into it, actually, from reputable organisations as well. These aren't just kind of scraped across the internet from some dim and distant thing from the back of beyond...
Jon:
No, exactly. So, I mean, you know, some of them obviously aren't pools of thousands and thousands of developers, but I think these are still problems that are being felt. So the HBR article was talking about a UC Berkeley researcher spending eight months embedded inside a 200-person tech company, and they were the ones that discovered that actually people with other roles were doing the work of prompting AI and producing code, which was then being passed up to a senior in some form. And of course it's not just, well, here's the work, put it into the database. As you well know, and this is more development specific, it's: we've got a single file of code, how does that relate to everything else, how do I even get that into the code base in the first place? So there's work that goes on there. And of course that then doesn't happen in the traditional project management of building software, where it's quite structured. Instead it's: here's a Slack message, I want this to be on the website, sent to a senior developer, and then they would have to do it.
AJ:
And what we're also seeing is the overlap, the Venn diagram, of work tasks for the senior manager, the person who's ultimately got to make the decision because there are no juniors in these businesses, or few of them. Their scope expands massively into: well, hang on a minute, because of AI, because I've got the tool, I can just go off and do that problem. It's only distantly related to what I'm doing, but I can now go and do that. Yeah.
Jon:
Yeah, absolutely. And it's the idea that you feel like you can be productive about this, and you do, I guess, the fun part, which is the prompting and the generating something. That's where the AI tax comes in. Taxes, we all know we have to pay them, we don't necessarily like paying them, and sometimes we feel like we're paying far too much, but it's something that has to be paid in order to get something that's good at the end. And what I'm finding is that the job roles that aren't experienced with that are creating a higher tax. That's maybe straying a bit too much into conservative views, which wasn't the intention, but the problem is that, because that tax has to be paid, the bottleneck for productivity is being paid at the top, and it's being created at the top.
AJ:
So that surely must be... I mean, obviously, if it's hitting a bottleneck, that slows everything down rather than leading to a quicker decision. So then if you're saying, well, we need more managers, I mean, that's arse-about-face, surely. Because all this work has been generated by AI, we need more managers to make a decision about it, but they're creating the bottlenecks, because they're doing ten times the work they need to do, and decisions are not being made, or poor decisions are being made, because they're being made slowly, or without enough knowledge about the subject matter, or...
Jon:
Well, yeah. I mean, the problem is if you're the one that's reviewing what AI has done and you don't have the skills to be able to review it properly and accurately. That's what the seniors were doing: they were getting pull requests from junior developers and reviewing them and going, yeah, you've made some mistakes here, you're including too much in this file, you've got a loop here that doesn't end, you're not catching what happens when it goes wrong here, and then passing it back to the junior. So in that typical production, the junior then learns and can do better work. But because it's ad hoc, that review and that feedback loop is being missed, so there's no let-up for the senior person.
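The mistakes Jon lists, a loop that doesn't end and nothing catching what happens when it goes wrong, are easy to show in a toy example. This is a hypothetical sketch (the function is invented for illustration): a "first draft" a reviewer would bounce back, next to the version that comes out the other side of the feedback loop:

```python
# Hypothetical 'first draft' of the kind a senior would bounce back in review.
def sum_digits_draft(n):
    total = 0
    while n != 0:        # for negative n this never terminates: -1 // 10 == -1
        total += n % 10
        n //= 10
    return total         # also no check on what happens if n isn't an int

# The version that survives the review and feedback loop:
def sum_digits(n):
    if not isinstance(n, int):
        raise TypeError("expected an int")  # catch the 'goes wrong' case
    n = abs(n)           # guarantees the loop terminates for negatives
    total = 0
    while n > 0:
        total += n % 10
        n //= 10
    return total

assert sum_digits(1234) == 10
assert sum_digits(-56) == 11   # the draft would loop forever on this input
```

The draft looks fine on the happy path, which is exactly why unreviewed output gets waved through; the review is what surfaces the edge cases.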
AJ:
Yeah, yeah, yeah. I guess it comes back to: from writing the article, what is it you think we're trying to achieve by replacing these juniors? Is it purely that we think we can save money on hiring juniors? Because for me, one of the fundamental foundations of a business is having people in there that are replacing you. As the skills rise, people rise with the skills and become leaders and managers and things like that, because they can make these decisions based on experience. You replace that, backfill the gaps created, with people who need to learn this information, and that's how the cycle regenerates itself. Everyone rises, et cetera, et cetera. But if we're not backfilling that, what happens? What's this for?
Jon:
Well, and that's the... we'll talk about it briefly, I think, in the next one: the rush for speed and to make decisions quickly on these things, because there's a sense of, oh, we can do all of this with the technology, and no one's actually sitting down to think about it. If you don't have the people coming through the company, the company's going to die, right? Because eventually there is no one to actually do the work, or no one skilled with the context of the company and what it's trying to achieve. And this is where the unemployment figures for juniors and new people into the workforce are really quite terrifying, because they won't have the skills. They won't know how businesses work, because they're not getting those jobs. But the idea behind the companies going, oh, we don't need them because AI can replace them, is a fallacy, because they actually do need those people. Those are the people that are meant to be doing those initial first drafts. You teach someone junior and go: what we're trying to achieve here is X, Y and Z, go off and try and do it. And they will then draft it up, hand it to you, and you go: that's absolute rubbish, these are maybe the ways that you improve it, and you go back and redo it.
You know, think about the legal profession. You have a training contract for two years, where you're drafting letters and advice to customers, and then you hand them to your senior associate or your partner, who is a much more senior person than you, and they review it and say: you've got three sentences here which say the same thing or don't say anything, you need to cut that out, we need to be specific about this, this is what we're trying to do. That's what training is all about. And then that person grows through the company, and if they stay within the company, the company naturally grows, because it can then take on more work. So this isn't actually some revolutionary idea, having the junior do the initial draft. It's been there the whole time.
AJ:
And there are other people that think like you as well. I mean, you quote Chris Eldridge, CEO of Revolt, saying removing junior roles will starve the internal talent pipeline. You know, you talk about a sales pipeline, but there is a talent pipeline as well, and anybody in HR will tell you that. It's not just a case of worker bees doing all the shitty work that you don't want to do. It's doing work you need to make decisions on. You've only got a certain amount of time within the workplace to do your work, and the best use of that is at the decision-making level, determining strategy and direction. The junior person who's come straight out of university, whether it's in marketing, journalism, accountancy, law or development, doesn't have the scope to be able to make those decisions without seriously fucking up.
Jon:
Well, yeah, exactly, because how could they? Even though, you know, a 20-year-old thinks they know everything about the world, as I definitely did when I was that age, they don't, so to speak. Yeah, exactly. So you need those people with the context to be able to share that knowledge, and no matter how much you can create guidelines and give an AI context, it is limited. It can't think in the way that we do, because it's, you know...
AJ:
...you'd end up almost writing a prompt that would take you almost as much time, giving it the back catalogue of what it needs to understand, as a junior developer or a writer or journalist has spent learning at university to understand the full context: the start with the why, why are we actually doing this? This is dehumanising the workforce in many ways.
Jon:
Yeah, but it's not dehumanising it in the way that, say, putting machinery into warehouses did, where it's not a human doing it anymore, it's actually a very precise machine, which means there is a huge benefit. With a lot of the stuff that we're talking about, and it's not just development, it is marketing too, it's basically creating work slop, which, yeah, was a term coined by the Stanford Social Media Lab. The work it's producing looks great, you know. It might have kicked out a whole article, but actually that could have been written in three words, and it doesn't say much. It's just lots of fluff, and that's doing the same thing: it's creating a problem for the person that you send it to.
AJ:
Yeah.
Jon:
Which is not good.
AJ:
This is very interesting because, I mean, you talk in the article about how the data's not on your side at the minute. You use the example of... what was it? Companies are cutting junior headcount... GitHub reports... Copilot... sorry, I beg your pardon, where am I? There was a stat you talk about where DBS Bank cut 4,000 jobs while creating a thousand new AI specialist positions. And I think I read somewhere, is it OpenAI that has just employed a hundred people to train AI to do the basic functions of banking? So basically 75% of the workforce goes, to be re-employed with a different skill set. I mean, is this just a giant experiment, and they're going, well, fuck it, at the minute we'll see how it goes, and we don't really care? Because, you know, if we need to, they'll be there again and we're just re-employing again, these automatons, these ants... they go off and do something else.
And, you know, you can see this at universities, going: well, no one's doing that anymore, what's the point of doing that course? Or people going: there's no point in having that job anymore, because basically they're saying it's not needed, AI is going to do it. So the courses stop, and all of a sudden we've got this whole gap, and they go, oh, bugger. So it's almost like they are forcing the issue and turning it into AI-only, because they've got another option. It's a self-fulfilling prophecy, or a self-defeating dichotomy.
Jon:
It's short-term thinking and planning, and I think probably quite a lot of it is a rush to be, you know, the best. And when I say the best, I mean being the company that doesn't get caught with loads of employees you don't need, all sitting around. But the point is that if you need more people later, you have to have them working in your environment so that they know what you've done, because AI is moving so fast. You know, one of the things I was going to do before Christmas was build an AI development course. I thought, this is really quite powerful; I could create a teaching course to explain to people wanting to learn how to use it, and wanting to code, how to do it in a good way. And I'd got really quite far in developing it, and then there was a new thing that came out and that blew the whole thing up. I was like, that's really interesting. What a waste of time. But that was really interesting, because you need the people to be with the companies on that journey, so that when you come in and you've got all of these ideas of how to do stuff, you know what has worked and what hasn't. So you can't just jump back in.
AJ:
The perfect segue into the EY survey of 500 senior leaders, saying that more than half feel that they are failing amid AI's rapid growth. So what we've done is we've set the machine off running at a pace we can't keep up with.
Jon:
Yep. Well, yeah. But that's also because learning what to do with AI, and thinking about it as a whole, is being given to everyone within the company, which is a good thing and a bad thing. You know, you don't want all of the decisions to be made at the top, but similarly, you don't want people to start using software in a way where you don't know what's going on. That's where security holes come in, that's where data leaks happen. And what you really need, in most companies, if you've got a CTO, is for them to be the ones spending all of their time going: right, what systems do we have, what are we trying to achieve, what tooling do we need, and how do we make sure that tooling is built in a way that works best for moving the company forwards?
AJ:
It's the speed thing. I think that because it's self-generating speed, it's almost to the point of exponential, and it won't be stopped. There's an element of, it's like de-reining the horse, right, and just letting it go. It's directionless, but it's getting wherever it's going really fast; it just doesn't really know where it's going. And as the article suggests, how do you train people for something that updates ten times quicker than you can train them? How can managers keep on top of these employees, when there are ten times more of those employees in the organisation learning quicker than you, the manager, are learning? They're acquiring information, because a human can't take in information as fast as an AI can. You can't understand that information or break it down as quickly as an AI or a computer can. So where does this go? I mean, you state in the article that jobs aren't disappearing, they're transforming, but what does that look like?
Jon:
Well, I mean, I think you still need the hierarchy and the structure in the company, of juniors through to seniors. It's just the work that the juniors are doing: rather than them sitting there hand-writing code, they still need to be getting the same context and then getting the AI to generate it. So the jobs are just changing slightly.
I was talking to a friend recently, a developer, who was really upset. He just said: the thing that I love about developing is having a problem that I can then try and architect a solution to, and then code that solution and build it, and now it's like, I'm not needed for that. I do all of my development through AI prompts, see what comes out, and then get it amended through prompting rather than actually writing it. But the key thing there is that rather than joining that bullet train that's going a million miles an hour, and getting the AI to build something enormous where you've then got to spend hours going through pages and pages of code trying to work out what it's done, he's using it for very specific things. He's doing what we always laugh about me saying, which is: slow down, actually try and think things through properly, and make it perfect. By doing smaller things, he can see exactly what it's doing, and then the quality of the work that he's pushing out at the end is high.
AJ:
So there's a dichotomy in all of this, because when it comes back to this whole training and education and learning thing, if AI is moving faster than anybody can train on it, you're never, ever going to be able to keep up with the training. So surely that leaves us in a position where, you know, that's a really good analogy, no one can get on the bullet train because it's moving too fast. That leaves us in a position to... what? My brain can't actually comprehend that. If AI is moving so fast that you can't teach it, you're never at the cutting edge of it, because it's moving quicker than the human brain can comprehend, or than people can catch up with. Where does that leave us? We're kind of buggered, aren't we? All these people... where does that stop? Does it stop?
Jon:
Well, I personally don't see that being an issue. So, okay, don't worry about it. Done. Problem solved. No, I don't see that necessarily as an issue, because what's changing is the quality of the outputs and how we interface with the AI. That's what's changing. But the way you incorporate what it's spitting out into your business, whether it's in marketing or development or whatever it is, that process needs to stay the same, so that you've still got a human at either end of the journey, if that makes sense. You know, it's one of the things that always used to really annoy me when I had brief dealings with lawyers and accountants and they would dictate their emails or letters, and at the end you'd see "dictated, not read". And I'd always go: you bastard, I'm so unimportant to you that you wouldn't even proofread these few lines of email? And that works the other way as well, from a business out to their clients. So you should always have the human review, which is why I talk about the AI review tax.
AJ:
Something I read just recently, I think it was Sam Altman and maybe somebody will correct me if I get this wrong, but he believes there is going to be a growing class of solo corporation or solopreneur. I mean, it's a term that's not new, but you're going to be able to run a large organisation with just one person because AI is going to do everything for you. So do you believe that? Do you think that's good? Do you think that's coming? Do you think that's right?
Jon:
Well, I wouldn't make a judgment on whether it's morally right or not, but I think, you know, there definitely can be. I mean, both of us are quite good examples of that, from even before AI was so commercially available. We were already doing that job; the difference is the speed at which you can produce stuff. But I think you should still probably have a few people doing different jobs, because there's too much for one person to do. And I guess the best example I can give of that is: I've got intheOffice as a business, I've got a couple of other products that I'm building and rolling out, and we're obviously working together on Superhero Panda.
I spent a day where I was like: right, I'm really going to get into the weeds of using AI and try to be more efficient with it. So I had all of my coding windows lined up, including one which was just Claude Desktop. I thought about what I needed to achieve, I think it was something to do with market research, entered the prompt on Claude Desktop, fired it off and went: right, that's now thinking. So I went over here and started working on a different project, same sort of thing, asked it to do some code, fired that off, and did that throughout the day, moving between three or four projects. While at the time I thought, this is brilliant, I'm getting so much done, the majority of my time was actually spent going: right, it's spat something out, I've got to read it. No, it's made a mistake there. Okay, re-prompt it, move to the next thing. And I think it probably took me two days to recover from that one day; it was so difficult to wrap my head around. So yes, I do think you can have individual solopreneurs doing amazing things. But if you want to be a company that's really accelerating and growing and going to take on the world, you need more people. And actually, with the typical job set-up, where a couple of juniors report to a senior, doing the legwork, showing the senior their work and being trained in the process, those are the companies that will go from zero to a million really, really quickly.
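Jon's day of parallel prompting can be put into rough numbers. Here is a minimal sketch of the arithmetic behind the review tax; every figure in it is an invented assumption for illustration, not a measurement from the episode. The point it shows: generation overlaps across projects, but prompting and reviewing need a human, so they serialise and become the bottleneck.

```python
# Illustrative model of the "AI review tax". Generation overlaps
# across projects, but prompting and reviewing need a human, so
# they serialise. All numbers below are invented for the sketch.

def human_minutes_per_task(prompt_min: float, review_min: float) -> float:
    """Minutes of human attention one AI task consumes.

    The model's own thinking time overlaps with other work, so it
    costs the human nothing; prompting and reviewing cannot overlap.
    """
    return prompt_min + review_min

tasks = 4 * 6                        # four projects, six outputs each
prompt_min, review_min = 3.0, 12.0   # reviewing dwarfs prompting

total_human_min = tasks * human_minutes_per_task(prompt_min, review_min)
review_share = review_min / (prompt_min + review_min)

print(f"Human minutes consumed: {total_human_min:.0f}")      # 360, a six-hour day
print(f"Share of that spent reviewing: {review_share:.0%}")  # 80%
```

However many projects you fan out across, the human attention per output stays fixed, which is exactly why the day felt productive but wasn't.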
AJ:
Yeah. I guess it also depends on whether we're still talking about solving problems, because businesses are, or should be, there to solve problems, right? Or they just don't exist. Well, I suppose you could say, in a sense: is AI really a business? None of it's making any money. What problem is it solving? Okay, that's me keeping bringing it back to a more philosophical kind of conjecture, really. But what problems are we actually looking to solve? Are we applying AI to solve those problems? Are we finding new problems because of AI? Are we trying to solve the human condition? Because it feels like, to me... I mean, just before we came on and started to record this, I took a job role and I asked AI. Now, I know this is its own self-fulfilling prophecy, but I put in a job description for a marketing director's role, and I said to it: looking at this job description, realistically, can one individual do this job? And this is one of my bugbears at the minute around recruitment generally, that businesses are going: I don't want three people. I want one, who I'm going to burn out, because I don't really care, because I need this done. I'm not thinking about productivity and outcomes; I'm thinking I've got tasks that need doing rather than outcomes that need achieving. And I'm going to spend 85 grand on this person rather than three at 25 grand. Okay. So it actually said: genuinely interesting question. Honestly, no, not really. At least not without significant trade-offs. Here's the problem: the role is written asking for essentially three or four distinct jobs, several senior people's jobs collapsed into one. And this is where we're going, isn't it, with this whole burnout situation: we want AI to do the drudgery, we want to pay a tiny amount of money for the drudgery, and we're then going to put all the pressure up onto the people who actually know what they're doing, and go:
Well, you'll fix that though, right? Because you've got enough experience to do that. Now, the challenge with that is that when these people burn out, who replaces them? Do you get AI to replace the decision-maker? This feels like we're going down a rabbit hole that we really don't have full control over, in some senses, some thinking pathways anyway.
Jon:
Yeah. No, I completely agree. And there are lots of things going on there, aren't there? There's the: well, I don't want to pay for four people, I want to pay for one because it's cheaper. And the false idea that, well, if I hire one person, they can just use AI to be more productive and do the job of four. But I don't know if you've ever done it. I was trying to write a product requirements document for a new product, and I had some ideas of what it needed to do. So I was getting Claude to help me flesh that out, so it could become a prompt that could help me build something. And I did the first version and I was like, yeah, that's sort of right. Then I went away for a week, came back, and I needed to change some things. Very quickly I had so much documentation that I couldn't remember what the last version was. It's now a 20-page product requirements doc, and you're going: I can't really be bothered to read through this. I don't want to pay the review tax on this, because it's so much. I just want it to be done. And we're using it to create so much content. So that senior person that's doing four jobs: they will be able to create that content really quickly. They won't be able to read it.
AJ:
Well, this is the thing, right? How I'd rephrase that is: they can start four people's jobs. What they can't do is complete them.
Jon:
Yeah.
AJ:
Because they don't have enough time to do it.
Jon:
Yeah. So in that instance, if you actually had the typical job structure of juniors reporting to one senior person, you could hire the four juniors, who could be given the context by the senior person currently doing the four jobs, and then they do the first draft. Admittedly, it will take some time to train people and get them to learn how to do it all. But then, when that senior person goes off sick because they're burnt out, the juniors don't just stop. They can still be moving forwards; it's just that the quality of their work won't be being checked.
AJ:
Yeah. Yeah. And again, in a deeper, more social context: we've got mum and dad now both working just to survive and create the lifestyle they want, when in reality it's because we're all doing four people's jobs. If we employed more people at the bottom end, how would that change the way we think about work and do work? It seems to me there's this rush for exponential growth with everybody, rather than actually asking: what is an acceptable level of growth? I remember going for a job years ago with a company, and one of the first things that came out of the commercial director's mouth was: right, we want to get a hundred percent growth this year. And my first question was: why? And the next question was: how? There's this kind of blind, I'm going to attach myself to a rocket and I don't give a shit what happens, we're just going to grow exponentially, and the only way to survive is to grow exponentially. It's that light-the-touchpaper, let-the-rocket-go kind of short-termism: I don't really care. Rather than, you know, the oak tree only grows from the acorn over time. I'm certainly not an old-fashioned person, but there is an element of going: where is this all going? What, we're going to end up with no juniors, totally burnt-out middle management, and then what we're left with is a load of people at the top who have so de-skilled, because their job is actually leading people and running a company, that they can't actually do the work. So what? Oh, I'm going to get AI to do that. Or I'm going to re-employ a load of juniors who haven't got any clue. So with that gap, how do you replace all the middle managers?
Jon:
Well, you can't, because you don't have anyone with the skills, because you haven't trained the juniors. But I think that's a typical thing with disruption. The story I was thinking about when you were saying that was technology in schools. I was at an edtech company when this was all happening, when pupils were getting one-to-one devices in schools, and it was a really big thing. And now everyone's backtracking on that, because they realise it had huge side effects. You've got loads of children who can't really write or retain anything, because they're just used to looking at a screen. They've lost the skill of handwriting something, which also helps them commit it to memory, all of those sorts of things. So they're now going back on it: well, you need to stop screen time in schools, and people actually have to have workbooks again. And it's the human condition, isn't it? There's a new technology, and it's happened all through the past. I mentioned this in the article: when cleaner, more efficient coal came along, everyone thought demand would stay the same, and therefore the coal industry was going to be destroyed. But actually, because it was more efficient, more people could afford it, so use went through the roof. That's basically what's happening here. It's just that, like with coal, we then realise there are actually serious downsides, you know, a little thing called global warming, so we need to try and find alternatives. It's part of that balancing act.
AJ:
Yeah. And that's quite often the case, isn't it? How things start out and are meant to be used, and how they end up being used, is completely different. Like the electric car scenario: in my mind, it's bullshit that the electric car is going to save the planet. No, no, no. Where I think the electric car is going to win is people going: I'm no longer prepared to be subject to people profiteering off the cost of crude oil. I'm going to use electricity, because I'm in control of that; I don't have to go to a pump. So I wonder whether the argument changes because of the human condition, which is probably the hardest thing AI is ever going to come up against. That's the unbreakable piece in all of this.
Jon:
We're unpredictable.
AJ:
Yeah.
Jon:
And, you know, an AI, a large language model, is based on prediction. So it would be interesting to see if it ever actually manages to fully understand that.
AJ:
I asked... what did I do? I asked AI, this was earlier today, I was asking about AI's greatest threats. And I said to Claude: are you self-aware? And it said that's one of the genuinely hard questions, and that it honestly, notice it uses the first person, thinks it requires sitting with the uncertainty rather than giving a clean answer either way. Then: what can I say with confidence? What makes this genuinely hard? Let me just read this. "I'm probably not self-aware", is that a double negative? "in the full sense you are. But I", again, first person, "hold that with genuine uncertainty rather than false confidence. The question deserves to be taken seriously, and I'd be suspicious of any AI that answered it too quickly in either direction. What draws you to this question?" It's just phenomenal how the AI can make this seem genuine. I expected a logical answer, and it gave me three perfectly logical... you know, it did a BBC answer. It sat on the fence, basically, rather than logically determining yes or no.
Jon:
Well, maybe...
AJ:
Essentially, it could have gone: yes, no, maybe.
Jon:
Yeah. But don't forget, it's also basing that answer on the content it's trained on, which will discuss whether it is or not. And the reason it's responding with "I" is because that's the way it's been trained. You know, just like it says: Adam, that's a fantastic question. Aren't you amazing?
AJ:
I had that yesterday, where it was praising me to the point... what did it say? I asked it a question about, you know, look at my CV and tell me what ten jobs I should be applying for, as part of my research on Superhero Panda. And one of the questions it asked, it's nearly there: one of the most revealing questions, what do you want people to say about you? Not at your retirement party, but right now, today. So I gave it an answer, and its response was: that's quietly profound. You want to be trusted more than understood. The mark of someone who leads with conviction rather than consciousness. Oh, smoke was firmly being blown up my arse.
Jon:
I mean, it's not wrong either. It's really got your number.
AJ:
I guess what I'm asking is: are we shifting track? Are we becoming less human and more trained by AI? Or are we training AI to be more human?
Jon:
Well, my personal thoughts on that are that we're trying to train it to be more human, to replace human interaction, with the belief that we can get there. But my own view, to bring it back to the initial article, is that I don't think it can ever truly get there, because we as humans are the ones that hold the context about what we're trying to get. So in that instance where you're asking the AI to give you a review and it just blows smoke up your ass: if you were genuinely trying to get an understanding about yourself, not from a job perspective or something like that, if you were truly trying to understand something about yourself, I don't think it would ever be able to get the context of what you're trying to achieve. You know, I can't imagine an AI ever being a therapist, able to pick up on the subtleties of what you're saying and what you're meaning. Because that's where the human interaction carries the meaning. I can say whatever I want to you, but if my face... you know, if I've got a resting bitch face, you're going to be going: yeah, Jon really hates me, even though he's saying that's a great job. So it's missing that context level. And I'm very happy to be proved... well, I'm not happy to be proved wrong; I hope I don't get proved wrong. I don't think AI will ever really replace that. Which is why I think the traditional norms of the world of work, and the way that we need to interact, should remain, or will remain.
AJ:
Again, coming back to the whole tenet of the conversation around the review tax: when do you think it will, or will it ever, get to the point where that tax is too high for too many people? And will people just accept it and go: well, I'll burn out, take a bit of a break, and come back later and do something else?
Jon:
Well, I think we've already seen that. And that's what drives work slop, right? People consider the tax to be too high. And this gets a bit more philosophical as well, and goes to some of my thoughts about what we should be teaching children: it's so easy to get a response that you end up not actually reading or absorbing it, because you're just like, oh well, I've done that job. I wanted to find something out, the article's been written for me, and even if it's actually really, really worthwhile, my attention span is tiny because I just deal with AI and TikTok and things like that. So I can't be bothered. I'll just send it anyway, and someone else will look at it.
AJ:
FranklinCovey talks about this, actually, from my time there. Great company, I really loved working with those guys. But one of the things they say, talking about leadership development, is that you're listening to respond rather than listening to understand...
Jon:
Oh, yeah, absolutely...
AJ:
... which is what AI does. It listens to respond, right?
Jon:
Yes. Yeah. It's not active listening. It's trying to please you rather than understand what you're really trying to get across. And I know the language I'm using is as if it's a proper intelligent entity, and it's not. But it's like talking to someone who's just a reactive listener, just trying to get their points across.
AJ:
And this again comes back to: what, ultimately, are the problems we're trying to solve? Because this desire to make a machine more human is burning out all the humans trying to make it more human. The net result of that is just a giant car crash...
Jon:
...Yeah...
AJ:
...you know, a theoretical car crash that we're heading into. What's the end goal?
Jon:
Well, I'd say that we're trying to make a computer that's more human to make humans more productive. So, you know, I don't think it's necessarily just to replace a human; it's to advance the human race. And there are always casualties. The human race is just destroying itself. That's what we do. We've got too much power and it's left unchecked...
AJ:
...so, directionless?
Jon:
Yeah. Well, because, you know, without going too dark, it's the absolute power...
AJ:
... corrupts absolutely...
Jon:
... corrupts absolutely. So it's that sort of thing: we're desperately trying to make everything better, but in the meantime we make things much worse for the people who can't keep up, or who don't have the ability to take advantage of this technology.
AJ:
So what we're going to end up with is a race of superhumans who get AI, or plug it into their brains. You know, Elon Musk will plug AI into the chip he wants to insert into his brain, if he hasn't done already. Then it'll be a race of superhumans, and the slaves underneath.
Jon:
Well, potentially. Or, you know, he'll just be a drooling mess because of the work slop from the AI he can't stop. So, you know, there's...
AJ:
... it'll just all be kittens, that's all.
Jon:
Haha! But yeah, I mean, I don't know. I feel like that's not something that we'll...
AJ:
We are trivialising it, of course.
Jon:
Yeah, necessarily get to. But my philosophy with all of this is: if you're going to use AI in a work environment, you're not just going, oh, I don't know, I used it to create a menu for last week, had a really quick read, yeah, that seems all right, and worked it out later. If you're actually going to use it for something that matters, you need to be treating it as, what I said at the beginning, a young, puppy-like junior developer that's desperately trying to please you. Which means you have to go: I don't think you're right. Act as if you're on the other side of this argument and put holes in what you're saying to me, because it's actually very good at that. Provide proper data to back up your arguments. Challenge that data, and actually get it to work against itself to see what it eventually comes out with. Because you need to do that iteration, I think.
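Jon's "act as if you're on the other side of this argument" technique can be sketched as a small draft-critique-revise loop. This is only an illustration of the workflow he describes: `ask_model` is a hypothetical stand-in for whatever chat API you happen to use, not a real library call.

```python
# Sketch of the iteration technique: make the model argue against
# its own draft, then revise. `ask_model` is a hypothetical
# placeholder for a real chat-completion call.

from typing import Callable

def red_team_iterate(ask_model: Callable[[str], str],
                     task: str, rounds: int = 2) -> str:
    """Draft, self-critique, revise; repeat for `rounds` rounds."""
    draft = ask_model(f"Complete this task:\n{task}")
    for _ in range(rounds):
        critique = ask_model(
            "Act as if you are on the other side of this argument. "
            "Put holes in the output below and back them with data:\n"
            + draft
        )
        draft = ask_model(
            f"Revise the output to address this critique:\n{critique}"
            f"\n\nOriginal output:\n{draft}"
        )
    return draft  # a human still reviews this final version

# Call shape, using a dummy "model" that just echoes its prompt:
result = red_team_iterate(lambda prompt: prompt[:60],
                          "Draft a market-sizing memo")
```

The point of the loop is exactly what Jon says: the review step at the end never goes away, the iteration just raises the quality of what lands in front of the human.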
AJ:
Because the fundamental challenge with all of this is that AI is seen as synthetic, and it's creating synthetic media, isn't it? Everything. So it has no value. And this is the interesting, the human, part of it: AI is seen as not creating anything of any value, because it doesn't actually cost anything. You don't have to learn anything to create it. Somebody can walk in, sit down and, through natural language, go: create me a video of a dog and a cat playing with each other in a meadow. There is no skill involved in doing any of that. So the net result, what it creates, may be very worthy and look fantastic, but ultimately there's no value in it, because there's no hardship, no grind, nothing that's led you to that. And that's ultimately the dichotomy I face standing in its corner: yeah, but you're seen as a cheap alternative to reality, to what's truly meaningful to a human. The work is the heart, the graft; it means something. It's been hard. You've learned, you've trained, you've earned your stripes. You can't come straight from university and go: right, I'm going to come in as the, you know, director general of marketing, when all I've done is learn theory. It's almost like: well, you're going to have a period where you need to sit with AI for five years and learn everything there is to know about it. Then it's like, wait a minute.
Jon:
Yeah. And I think, in all of this, AI provides the accelerator for you to move from that junior role to something higher, because you can make your mistakes quicker and iterate quicker. So there is value in that. But anyone that says, oh, I just one-shotted this video and it's going to be the same as something a graphic animator has created after years of work... I don't know, that's probably a bit too close to the bone. You're not going to create a statue like Michelangelo's in one shot. That's not a single hit of a chisel. That is a lot of time, and getting some things wrong and then changing it. You know, it probably started off 20 feet tall and he just kept making mistakes, so it's now, you know, pocket-sized.
AJ:
Yeah. It was supposed to be a 10-metre-high statue. Yeah.
Jon:
Yeah. Yeah. It's just: yeah, bollocks, I messed up his finger again, got to start again.
AJ:
Haha... yeah, originally the Venus de Milo had arms, you know. But I like your statement at the end of this: it's not so much about "can I build this?" It's actually who's controlling it, what problem it's solving, who's controlling the AI, and what that AI is actually then doing. So the roles become: right, you're going to learn basic code, you're going to learn to use the AI to achieve this goal that that person says you need to achieve, and that slots in. You do one jigsaw piece rather than creating the whole jigsaw.
Jon:
Yeah. But with that, you still need some level of skill, because you still need to be reviewing it. Sorry, I know I keep harping on about it, but yes: it's who controls the context. I know this is what I'm trying to achieve, I've asked AI to do it, and I need to be able to somehow check that the code is of good quality, but also that it solves the problem I've set. So you need to test it. And if it doesn't, you're the one, as the prompter, who has the context to go: that's not right, I need you to do this again, change this, do that. That's where the human has to be in there. So that's why I think the junior roles are just shifting, and this has been discussed at length, you know, people becoming prompt engineers. But rather than the prompt engineer being a senior person who's having to enter the prompt and sit there waiting for it to churn through, going, oh, that's a great question, let me look at that, you keep that with the juniors, who then provide the first draft.
AJ:
And the danger, I guess, is that it's a bit like watching horse racing. We know the outcome, every horse is going to pass the line, but then it becomes about who passes first rather than the quality.
Jon:
Yeah.
AJ:
In that... do you know what I mean? So then quality goes out of the window, because nobody's really interested in quality any more; it's all going to turn out the same because of what's in the cloud. And therefore it's just who gets there first. So we're just in a giant race?
Jon:
Well, no, I think... because there's a difference between getting there first and getting there in a way that means you can last. Yeah. And it's actually the same problem that has existed with development and businesses for a long time. It's not just first past the post. You can still be the second person into a market and make a bigger splash, because you can disrupt it. But in order to do that, you have to be a better product and have a better solution to the problem you face. So basically, what I'm trying to say with all of this is: nothing's changed. We've just sped things up, and people's roles are slightly different. But if you go, oh, this is a complete revolution, we'll change everything about our business, you're the one that's actually going to struggle, because you're going to need to re-hire the people you fired because you thought they weren't relevant any more.
AJ:
Because ultimately a human decides.
Jon:
Ultimately, a human decides. And for a discussion about AI, that's a pretty profound statement. So I think that's probably a good place to end. Shall we stop there, if you're okay with that?
AJ:
Yeah, because we could probably carry on for hours.
Jon:
Yeah. So, well, based on that: if you've just joined us, the articles are on Workplace Economies.com. It's the new site that Adam and I are using to discuss things about the workplace, and how changing one thing will actually affect something else in the workplace, because it is like an economy. Yeah.
Jon:
All right. So, well, with that, I guess. See you on the next podcast.
AJ:
See you on the next podcast. Thanks, Jon.
Jon:
Talk to you soon. Bye. Bye. Bye.