![](https://www.strategicagenda.com/wp-content/uploads/2024/10/Strategic-AI-Template-banner-3-768x432.png)
Written by Fola Yahaya
Let’s be frank: the modern economy has evolved to the point where most service-based jobs add little value to society. No matter how we spin it, most of us are fully replaceable by unthinking, unfeeling machines that are more efficient than we could ever be. These kinds of jobs can, and ultimately should, be replaced by AI. But what do we do when our work is no longer necessary, or is simply done by robots?
Esther Dyson writes a lovely piece about this in The Information that chimes with my instincts about where this is all headed. She argues that we shouldn’t worry about AI ‘stealing’ jobs or evading our control, but rather focus on using AI to automate ‘sub-human’ routine tasks, using the money and time saved to do more meaningful work. “That work starts with training other humans: kids learning from well-paid, engaged caregivers; patients talking with real doctors and nurses, not just bots and machines; students learning not just to remember facts but to ask provocative questions; teenagers interacting with human mentors instead of influencers ‘trained’ by algorithms.”
She makes a compelling argument that we should focus less on teaching our kids STEM (science, technology, engineering and maths), coding or how AI works, and more on how people work – and on how businesses so often make money by manipulating people into buying things they might not need. Fundamentally, we need to work on becoming better humans rather than worrying about AI making us less human.
When I worked as a management consultant for the UK government, contracts often included an ‘intelligent customer’ clause designed to ensure that buyers understood what they were buying. In reality, consultants thrive on their clients’ ignorance and wouldn’t exist if clients knew exactly what they needed and what they were buying.
But with ChatGPT, customers no longer have an excuse to remain in the dark. Clients can now find out what the service (or good) being bought should look like and how it should be delivered. For example, if your firm is considering hiring a management consultant to create an AI strategy, you can simply ask ChatGPT to “imagine you are a McKinsey consultant tasked with creating an AI strategy for [insert sector and more details]”. This means everyone needs to up their game, which is my key thought of the week. Gone are the days of winging it by trawling the web and cookie-cutting. GenAI creates ‘intelligent’ clients and we, as service providers, need to be more than just one step ahead of our clients.
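If you want to go beyond the chat window and bake this ‘intelligent client’ check into a repeatable workflow, a minimal sketch using OpenAI’s Python SDK might look like the code below. The model name, sector and prompt wording are my own illustrative assumptions, not a recommendation:

```python
# Minimal sketch of the "intelligent client" prompt, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative sector and details -- swap in your own context.
sector = "a mid-sized professional services firm"
prompt = (
    "Imagine you are a McKinsey consultant tasked with creating an AI strategy "
    f"for {sector}. Outline the deliverables, workplan and pricing you would "
    "propose, so I know what a good engagement should look like."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Run something like this before a procurement meeting and you have a baseline against which to judge any consultant’s pitch.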
One of the first things they teach you as a management consultant is how to structure whatever you’re peddling and communicate it within a compelling framework. McKinsey uses something called the ‘Pyramid Principle’; others use SCQA (situation, complication, question and answer). Both lead with the conclusion, then set out the key arguments and finally support them with detailed information.
Structured frameworks are like manna from heaven for ChatGPT. So, the next time you need to communicate something, ask ChatGPT to “structure the response like a [insert name of framework]”. You can find some of the most useful ‘thinking’ frameworks here.
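For the same idea in code (again a sketch assuming OpenAI’s Python SDK; the SCQA wording is mine, not an official template), you can pin the framework in a system message so every answer comes back in the same structure:

```python
# Sketch: forcing SCQA structure via a system message (assumes the OpenAI Python SDK).
from openai import OpenAI

client = OpenAI()

system = (
    "Structure every response using the SCQA framework: "
    "Situation, Complication, Question, Answer. "
    "Lead with the Answer as a one-paragraph conclusion, then give the "
    "supporting arguments and detail under each heading."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "Explain why our firm should automate its document review process."},
    ],
)

print(response.choices[0].message.content)
```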
A new study from Leipzig University backs up what we all know: Google is becoming unusable as a search engine, drowning in search-engine-optimised and, increasingly, AI-generated poo. I now do most of my research via an AI-powered search engine. These tools let you ask questions in natural language and return results that are relevant to what you asked, rather than to whatever someone is trying to sell you. This is the main reason why Google is panicking and has lost its way. There are many AI search engines to choose from: both Bing and Google offer a flavour of this, but if you want something ad-free, try Perplexity.
I attended an AI breakfast meeting hosted by the global financial services firm UBS on Thursday. The keynote speaker was my old friend Azeem Azhar. Azeem, a very smart cookie who used to be Head of Innovation at Reuters, recently wrote a bestselling book and hosts an AI show on Bloomberg TV. He gave us his unique insight from co-chairing the AI panel at Davos, the WEF shindig where the Illuminati meet once a year to carve up the global economy ;-). My top 3 takeaways were:
Another of my takeaways from Azeem’s talk was the pincer movement going on inside firms: clever employees are not waiting for the OK from slow-moving management, but are using GenAI on a daily basis. For me, these are the smartest people in the room: the ones who embrace AI before it overwhelms them. By embrace, I mean:
Can AI help us commune with the dead? Check out this interesting podcast from the Economist on how companies are helping keep our loved ones alive forever. In summary, companies are training AI models on deceased persons’ data (emails, videos, social media posts and personal recollections from friends and family) to create digital models of them. This is then connected to an AI-generated video avatar, which can interact with the deceased’s family in real time. Connect this to a hologram and you have a digital Lazarus.
That’s all from us this week, folks. Next week, Strategic Agenda will be reporting live from the World Artificial Intelligence Cannes Festival (WAICF) in (hopefully sunny) Cannes. PM me if you’re attending and would like to meet up.