Written by Fola Yahaya
Even though I gave Claude a link to an organisation’s “Procurement Opportunities” page, it denied there were any opportunities on the page, and only after I told it how stupid it was did it finally comply and create the table. ChatGPT, like an old friend who ‘just gets you’, got the table right first time.
So far so good – useful AI that would save me having to pay someone for a really tedious task. The only problem: none of the procurement opportunities existed. Both Claude and ChatGPT, always eager to please, just made stuff up.
AI systems can’t (yet) tackle my top 10 most tedious tasks (see below), and therein lies the frustration. Sam Altman argues quite rightly that OpenAI’s near-term goal is creating an AI that is a:
“super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.”
It seems that despite the heavy price we’re prepared to pay (lack of privacy and the Big Brother implications I discussed last week), AI is still giving us few useful, practical applications in return, apart from writing boilerplate content.
This is all likely to change by the autumn with the expected launch of OpenAI’s next, all-powerful release, GPT-5. If it lives up to the hype (and so far OpenAI have yet to disappoint), then expect AI agents that really do impact employment and force us to confront the wider implications of letting the genie out of the bottle.
For the moment, don’t trust ANY unedited content from an AI, especially one that claims to be able to browse the web.
If you think ChatGPT is more than autocomplete on steroids, you would do well to remember that AI is like a hyper-keen intern that will do everything it can, including inventing new ‘truths’, to keep you happy.
Buried in a blog post about safely developing AI, blah blah blah, OpenAI officially confirmed the rumours that it had begun training a new flagship AI model that would succeed the GPT-4 technology that currently underlies ChatGPT.
What was interesting was how capable they think Optimus Prime/Skynet/HAL/GPT-5 (or whatever they call it) will be.
“… we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI.”

The new model would be an engine for AI products, including chatbots, digital assistants akin to Apple’s Siri, search engines and image generators.
OpenAI, and Altman in particular, have had a rough time of late. The company’s co-founder and head of safety resigned last week, citing its ‘move fast and break things’ culture, and this week brought more bad PR, with revelations that OpenAI’s board only found out about ChatGPT’s release through Twitter.
My seven-point GPT-5 wish list
- Document formatting: How many hours do we waste as a species formatting office documents?!
- Converting/editing PDFs: If Adobe is so clever, why does it make changing and converting PDFs so difficult? Yes, I know PDFs are designed to be uneditable, but to err is human…
- Filling in forms: Another colossal waste of our limited lifespan. From sign-up forms to filling out a medical history, this is AI assistant fodder.
- Doing my taxes (properly) – obviously.
- Finding loopholes in said taxes (without hallucinating) – even more obviously.
- Some kind of spaced repetition system that helps me remember new information by prompting me to recall it at optimised periods.
- An AI that wards off senescence by getting me to think rather than doing my thinking for me. Imagine an AI that asks you probing questions about why you did what you did. A bot that encourages self-reflection and generally helps you be a better human. Now wouldn’t that be wonderful?
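The spaced-repetition wish, at least, is already well understood algorithmically: tools like Anki schedule reviews using variants of the SM-2 algorithm, which lengthens the gap between reviews each time you recall an item successfully. A minimal Python sketch of the idea (the class name and defaults are my own illustration, not any particular tool’s API):

```python
from dataclasses import dataclass

@dataclass
class Card:
    """One fact being memorised, with its SM-2 scheduling state."""
    interval: int = 0      # days until the next review
    repetitions: int = 0   # consecutive successful recalls
    ease: float = 2.5      # ease factor (SM-2's standard starting value)

def review(card: Card, quality: int) -> Card:
    """Update a card after a review graded 0-5, per SM-2.

    A grade of 3 or more counts as a successful recall; anything
    lower resets the card to be reviewed again the next day.
    """
    if quality < 3:
        card.repetitions = 0
        card.interval = 1
    else:
        if card.repetitions == 0:
            card.interval = 1
        elif card.repetitions == 1:
            card.interval = 6
        else:
            # Each success stretches the interval by the ease factor
            card.interval = round(card.interval * card.ease)
        card.repetitions += 1
    # Ease drifts with recall quality, floored at 1.3 so intervals keep growing
    card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

Three perfect recalls in a row push the review out to 1, 6, then roughly 16 days; one failed recall snaps it back to tomorrow. That exponential backing-off is the whole trick.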
@ the UN’s AI for Good conference
Shorter newsletter this week as we’re in Geneva for the AI for Good conference. I last attended five years ago, pre-generative AI, and for such an impactful topic it was frankly as dull as ditchwater. My full, hopefully more interesting, takeaways will follow next week.
AI silicon snake oil of the week
First posted on Tuesday, this video of the world’s first AI-powered head transplant machine is an exercise in how to go viral. The video has millions of views, more than 24,000 comments on Facebook, and a content warning on TikTok for its grisly depictions of severed heads. A further convincer was a slick website with several job postings, including one for a “Neuroscience Team Leader” and another for a “Government Relations Adviser”.
It was so convincing that the bastion of journalistic integrity 😉 the New York Post wrote that BrainBridge is “a biomedical engineering startup” and that “the company” plans to perform an actual surgery within eight years.
Of course, it was all fake. BrainBridge is not a real company, and the video was made by one Hashem Al-Ghaili, a Yemeni science communicator and film director who, in 2022, made a viral video called “EctoLife” about artificial wombs that likewise left journalists scrambling to determine whether it was real.
Visily: I’m constantly building prototypes for apps and software. Visily has some cool features like Screenshot-to-UI.
Robert: We’ve just launched the first computer-aided translation (CAT) tool designed and still owned by a translation company. Five years in the making, Robert is an easy-to-use and fairly priced CAT tool. Check it out.
Network Hub, 300 Kensal Road, London, W10 5BE, UK