Good stuff.
I'd add what a lot of media isn't really emphasizing: Trump.
Trump's actions since assuming office have dramatically increased uncertainty and volatility, especially in financial outlooks, across the world. There is a broad consensus that the tariffs, the bizarre saber-rattling (Canada?!?) and the inconsistency are how you start a recession...from such pinkos as Jamie Dimon, as well as accomplished liberal economists like Brad DeLong. (Whose Substack is awesome, btw.)
Given *that*, with the promise/threat/big question mark of AI, new initiatives have been put on hold across the board. As a consultant, this is all my space is talking about...repositioning to emphasize critical stuff that increases revenue. Keeping terrible clients. Not raising prices, which means not growing. Most companies seem to be defaulting to the CFO, who is always a voice, of course, but not the driver of the car, and the CFO's default is "No" even in *certain* times. Similarly, hiring new and inexperienced staff is just a no-go "until this shakes out."
It's easier and safer to blame AI. AI is going to do terrible things to the economy, I have no doubt: the problem is that the "alternative jobs" it creates tend to be too advanced for an easy transition. You could teach a farmer how to stand in front of a conveyor belt and screw a thingamajig into a whatchadoodle thirty times an hour. Teaching a mediocre marketing manager to become a data scientist is a much less viable proposition. But the real answer is everyone's afraid the world financial markets are going to blow up, and there's too much fear around discussing the obvious cause.
I've long believed that the future is small...there's just going to be a lot more of it. Look for more independent pathless path folks using the new tools to do what it used to take a firm of 10 to do.
I found your experiences watching companies go into "hunker down" mode particularly interesting. Makes me think of a favorite quote: "The future, as always, belongs to the brave."
We're in a bubble. These guys are pumping up the hype because they need the attention to raise more cash and if they don't get the cash, the companies die. AI is not viable in the longer term on the current business model. It's either going to get a lot more expensive or it's going to get a lot more scarce.
Besides, technology adoption is incredibly hard and AI is nowhere near cracking that yet. As sources you refer to have pointed out, there are some massive assumptions in the arguments of the AI boosters.
I think you make an excellent point as well that we don't really understand what it is that AI will replace. Jobs are fluid and dynamic because the people and the relationships that are the context they are in are fluid and dynamic. An org chart is not how the organisation really works. A process diagram is not how the process actually works. If companies use AI to replace what they think their people are doing, it's likely to bring about their demise. We've been here before; remember GIGO, coined in the first wave of computerisation?
Great piece and good timing. Ethan Mollick has a complementary article today that drills into the million-little-details challenges of incorporating AI in the enterprise.
https://open.substack.com/pub/oneusefulthing/p/the-bitter-lesson-versus-the-garbage?r=62w6&utm_medium=ios
The reason they say AI researcher will be the first job to be automated is that that's what they're trying to do. The first firm that figures out how to automate AI research will pull way ahead of the other firms, as it will be able to sic an army of 400,000 researchers that work 20 times faster than a normal researcher, 24/7/365, with no breaks.
The recursive self-improvement loop requires some sort of executive function capable of assessing the outcome of an action and comparing against some ideal for fitness testing.
I don’t see any way to create that ex nihilo. You can’t just throw another LLM at it, because that would require its own arbiter of goodness for those decisions.
I'd say running (A/B testing) models in production and then gathering metrics about usability, correctness, productivity, etc. can serve as a very large, accurate assessment function. For example, if you were assessing a coding agent, you could measure the error rate (the share of code changes it made after which the tests failed).
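To make that concrete, here's a minimal sketch of what such an assessment function could look like, assuming hypothetical logged records of each change attempt (the field names, the two-variant setup, and the toy data are illustrative, not anything from the original piece):

```python
# Hypothetical sketch: scoring two coding-agent variants from production A/B logs.
from dataclasses import dataclass

@dataclass
class ChangeAttempt:
    variant: str        # which agent variant handled the request, e.g. "A" or "B"
    tests_passed: bool  # whether the project's test suite passed after the change

def error_rate(log: list[ChangeAttempt], variant: str) -> float:
    """Fraction of a variant's code changes that left the tests failing."""
    own = [a for a in log if a.variant == variant]
    if not own:
        return 0.0
    return sum(1 for a in own if not a.tests_passed) / len(own)

# Example usage with a toy log:
log = [
    ChangeAttempt("A", True), ChangeAttempt("A", False), ChangeAttempt("A", True),
    ChangeAttempt("B", True), ChangeAttempt("B", True),
]
print(f"A: {error_rate(log, 'A'):.2f}")  # 0.33 -> one of three changes broke the tests
print(f"B: {error_rate(log, 'B'):.2f}")  # 0.00
```

The point is just that once real usage is logged, "goodness" stops being an abstract arbiter and becomes a measurable statistic you can compare across variants.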
Excellent and helpful, Paul. Thank you for sharing your thinking and perspectives on where this is all headed. I particularly appreciate the tasks/jobs distinction. From this framing there will always be "jobs," defined as the things that humans do, and as jobs become replicable tasks new human jobs will be created.
Very much enjoyed this, Paul. Thanks.
Lots to think about here, and I like what you added to the discussion. The analogy of the old fashioned chain letter feels apt. I frankly think it's a great wake-up call (whatever happens with AI) for more people to move toward more intentional living, exploration, creativity, detaching identity from work-for-status, that sort of thing.
"...a slow and continued reimagination and questioning of our relationship with work..."
Agree with your concluding remark, Paul, in a balanced article.
I suspect the traditional job will deconstruct at different paces in different places - but there will still be plenty of work to do.
Insightful.
I've archived it for a future review.
I loved the framing of this essay. In fact I thought it could have gone even further with the idea that having a job is necessary to keep the peace in society.
yeah i don't fully buy that claim, or perhaps it's just not interesting to me
that's mostly what most people believe, i think
As long as AI continues to hallucinate (and there's no indication right now that it will stop), that puts a significant brake on a lot of the daft talk - at the end of the day you need to be able to rely on your tools, and even the keenest proponent wouldn't trust unsupervised AI to, say, publish their annual report or drive their family around town.
idk the latest models are pretty amazing - gemini 2.5 pro / o3 are both excellent right now
hallucination seems almost completely solved
o3 is much better at showing its working (and I have to say I love watching its thought process), but I've been using it as a helper with resume writing recently and I've found it's really comfortable with just making shit up. I had to go back to 4o, as it seems to make up less, and is quicker about it.
“We have three different work realities when it comes to how much people work: Asia, the Anglosphere, and Europe.”
There are more regions of the world than that, and there is quite a bit of variation within those parts, too. I think a more helpful view is that humans’ relationship to work is not, and has never been, static. We have the power to shape our relationship with AI, too.
yeah i mean i suspect you got the point i was making, don't make me do a 5,000-word essay on leisure and labor hours globally lol
Haha certainly! And especially not more discourse about Protestants vs. Catholics :)
Love this piece since I’m also sooo over these claims.
I also think they’re widely overestimating the speed and magnitude of adoption.
It’s not like we have unlocked even half of the automation potential that basic algorithms without AI could achieve (especially in Germany lol).
Well Germany seems pretty dedicated to slowing down everything 😂
Good stuff Paul. There seems to be a lack of second- and third-order discourse in the "normie" dialogue on this topic, which is a bit frustrating because all you hear are CEOs and founders of AI research labs making claims that feel sort of self-serving and a bit in the lane of the fox guarding the henhouse. It gets a little hard to take them seriously as they continue to say these things, not offer solutions, lobby for removing any kind of regulation that might force alternatives, and plow ahead on their same course.
To be sure, 10% unemployment would still be really difficult and have a widespread impact on the United States (and other countries), but that in and of itself underscores why it's probably more worthwhile to start getting creative about how this could be used to reimagine how work gets done (or perhaps even create more "good work" for more people) versus wondering if it will destroy white collar work.
Loved this Paul! I read and enjoy most of your stuff, and this feels like your 100 mph fastball. Agreed with the tasks/jobs distinction; that sums up a lot of what I've been thinking.
Ha thanks Steve
The best refutation of the claim that AGI is imminent is the behavior shown by both AI firms and researchers. AI firms chase comparatively small-time ventures like automating programming instead of pushing toward AGI. Researchers hop from one firm to the next, optimizing salary.
If AGI represents infinite status/money, then you wouldn’t see either group behaving in such a way. Researchers would be sticky at one/several firms. AI companies wouldn’t need to chase highly applied distillations of their tech when AGI could easily subsume those.
Yeah, why isn't Sam Altman writing his great novel yet?