As I look around the co-working space, ‘AI’ is everywhere. To my left, someone on Zoom is discussing the future of AI-powered financial advice. To my right, two people are pairing, unpicking the mess left behind by code generation. Directly in front of me, someone sits with two laptops: on one, Claude Code is burning through tokens, while on the other they watch a show on Netflix.
I’ve found myself increasingly conflicted about Artificial Intelligence (AI). There appear to be two camps: the AI fanboys and the AI haters. Forced to pick one, I’d be closer to a hater than a fanboy. But I’m not forced to pick. My thoughts on AI are more nuanced. Not only that, they vary constantly. I’ll be honest, I find it difficult to avoid falling victim to recency bias, over-indexing on the most recent thing I’ve read. Unfortunately, exposing myself to both extremes hasn’t helped me develop my own nuanced position. It’s left me feeling exhausted, bouncing around without settling. What do I think about AI?
This blog post is not meant to convince you to take a position; it is a blog post for me to try to document my thoughts as they are today. I write this knowing that my opinions will change. I know that many out there will disagree with my position, including those I would turn to for advice. As you read, I ask one thing: be open. I’m on a journey of discovery; I may end up agreeing with you, or I may not. For me, what matters is how we each arrive at our respective positions.
What is AI?
What we mean by “AI” is opaque at best. It stands for Artificial Intelligence, but I don’t get the impression that we know what we mean by intelligence in this context. I’ve settled on the term “AI” being a marketing gimmick, used to generate excitement and interest. Start-ups use the term to secure funding, vendors use it to justify price increases, individuals use it to sound on-trend. This blog post is a perfect example.
At a technical level, it could be argued that “AI” refers to a broad category of techniques for getting computers to complete tasks without explicit instruction. Most conversational use of the term “AI” refers to one specific approach to this: the large language model (LLM) and the associated agent (or app) that maintains context and wraps the LLM in a run loop.
My beliefs
- Large language models are models of language.
- Models trained on other media (images, videos, etc.) are still models.
- A model is, by definition, an imperfect representation.
- There is a difference between statistical likelihood and understanding or intelligence.
- A large language model is not intelligent.
- There is potential to do intelligent things with a large language model.
My observations
- It is difficult to comprehend the scale of the models that exist today.
- Conversational fluidity gives the illusion of intelligence.
- Clever use of language can mask a lack of intelligence.
- Blindly accepting the output of a large language model as fact is fraught with risk. And yet we do it anyway.
- The ChatBot is a mirror. Unsurprisingly, the models are a reflection of your input. You get out what you put in.
- The latest tools are exciting to use. They feel like magic.
- We’ve shifted from “computer says no” to a world where it is now possible for the computer to try, fail, and then have a best guess anyway. It’s unclear if this is a good thing.
My fears
- Being left behind. My hesitancy to become an AI believer and turn my life over to the agents has left me worried I might fall behind.
- AI sounds great when it is free, but there is a very real cost. The VC funding can’t go on forever, and AI companies are a long way from financial stability.
- The incestuous financing arrangements between frontier AI companies and their infrastructure providers feel one wobble away from crashing down.
- We don’t understand the impact AI use (and AI marketing) is having on society. Whether useful or not, the AI hype is challenging education, hope, motivation and ambition for all.
- A growing class divide between those who have access and those who are denied it. If AI is useful, who does it serve? Those who have wealth, or those who don’t?
My curiosity
In an effort to evolve my own thinking, I’ve been exploring a few aspects that I’m particularly curious about:
- Small, use-case specific models (see Elijah Potter)
- What, if any, use-cases show promise (see Ben Boyter)
- Education, how do we explore the role of AI in education (see Ploum)
- What are the unintended consequences? (see GitHub regret)
Where next?
My position on AI will continue to evolve. I will continue to learn through experimentation and curiosity led discovery. I will try my best to avoid taking the hype at face value. I will not pass off generated content as my own. I will be more vocal about my position whilst accepting that my position will change.
How are you navigating the chaos? I’d love to exchange ideas and challenge each other.