The Intelligence Illusion

Second Edition

Why generative models are bad for business

Buy the ebook for $35 USD

“I’ve been hearing a lot about ChatGPT. Sounds like it could help us a lot. Can you look into that?”

“We don’t want to fall behind.”

Your boss has questions about AI, and you need to have answers

Most of the hype is bullshit. AI is already full of grifters, cons, and snake oil salesmen, and it’s only going to get worse.

But, you can’t just say that to your boss.

All they’re seeing are the promises.

In a credibility contest between you and the CEOs who get prima donna interviews on 60 Minutes promoting the greatest invention since the steam engine, the prima donnas will win.

Generative Models: It’s All Bad Parts

  • If you want to steer your work away from generative models, you will need to understand the core risks they pose to most businesses.
  • You have to prevent them from alienating customers with random AI features nobody asked for, or scuttling customer service with untested chatbots.
  • The web and social media are full of either doom or hype. Neither looks useful. A lot of it doesn’t even make sense.
To prevent your boss from stepping into a bear trap, you need to be able to spot the trap.

Become the local expert in generative model hazards

Your workplace can’t ignore generative models forever. Somebody needs to step up and explain exactly why adopting them would harm your business, without spending a decade studying mathematics and programming.

Imagine being able to answer these questions and more, with concrete reasons why:

  • What few things do these systems actually do well?
  • How are they currently broken?
  • What risks does this pose to businesses?
  • How, specifically, might their use harm your business?
  • Why aren’t these AI chatbot things as clever as they say they are?
  • Why can’t we productively and safely use them?

Provide answers, explanations, and recommendations with references.

All without resorting to “just because” or “it feels iffy”.

You can understand generative models and why they don’t work. But where to start?

You start here, with this book


If you want philosophical musings, there are plenty of books and articles out there debating the social, cultural, or even existential risks of AI.

It’s easy to find breathless articles promising miracles or catastrophe.

This book is different

It details, in depth, the risks that come from using generative models at work, with approachable, high-level explanations of the flaws inherent in their design.

The Intelligence Illusion (Second Edition): Why generative models are bad for business is specifically written with you in mind, attempting to answer the question:

How does this affect your work?

Buy the ebook for $35 USD

How the book is structured

This book is divided into four overall parts:

  1. Introduction. Being an overview of generative models, their basic function, what they can and can’t do, as well as how they can exploit our own biases.
  2. Recommendations. Overall recommendations for what to do and not to do. These are based on the next section.
  3. Risks. This is an inventory of the risks that are specific to businesses and work. It doesn’t cover political or social issues, just those that could affect work. Together, these chapters should demonstrate exactly why generative models are bad for business.
  4. What next? This section looks at where things are and where things are heading.

You don’t have to read them in order. I tried to minimise cross-references throughout the book, so you shouldn’t have to read the chapters in strict linear order. If your biggest concern is how AI and tech are currently evolving, then you should start with the fourth section. If you just want the recommendations, start with the second. If you’re worried about the risks, start with the third. If you want to be depressed, read the whole thing from beginning to end.

The book will help you see the many ways in which this technology is flawed—even broken—how many of its defects are integral to how it works, and why they won’t be fixed any time soon.

  • Many of the promises being made by AI system vendors are unsubstantiated and unrealistic.
  • The claims are very often misleading.
  • Many of these systems and their intended use cases are irresponsible, if not outright dangerous.
  • Even where the technology is useful, it comes with risks that undermine its usefulness.

Written for non-experts, but with deeper research than most AI expert writing

The second edition is 290 pages, or forty-two thousand words: 40% longer than the first edition. The list of references alone is 38 pages. It’s written in non-technical English to make sure it’s accessible to those of us who aren’t fluent in AI techno-babble.

I went so far as to have my parents read the book, and quizzed them afterwards to make sure that it was relatively free of jargon.

I read hundreds of studies and papers, and countless articles, for the book. The only other time I’ve gone this deep into researching a single topic was for the PhD I did twenty years ago.

Every risk, recommendation, and analysis in the book is backed by references.

No hype; no bullshit

At the end of the book, you will be fully equipped to keep up with developments in generative models. You will have formed your own idea of how effective they’re going to be for you, at your job, in your business.

And you’ll be able to talk your boss out of making an “AI” mistake if you need to.

Buy the ebook for $35 USD

Praise for the first edition of The Intelligence Illusion

Amid the current AI tsunami, I knew it was time to do my share of learning this stuff by filling gaps among the pieces of my fragmented knowledge about it and organizing my thoughts around it. This book served that purpose very well.

Generative AI, ChatGPT and the like, have been released prematurely, with too many downsides for the possible benefits. That’s especially so when it comes to commercial use. This book walks you through the risks your business might encounter if you casually incorporate it.

Many of his arguments opened my eyes. I’m glad I found his book at this time. It’s hype, at least for now and the foreseeable future. Use cases will likely be very limited. And to protect ourselves from bad actors, we need solid regulations, just like in the case of crypto.

yasuhiro yoshida 吉田康浩

The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.

Jeremy Keith

Should we build xGPT into our product?

Before you answer, make sure to take advantage of all the homework Baldur Bjarnason has done for you.

Brandon Rohrer

Back when I worked in publishing, I employed Baldur Bjarnason as a consultant on complex digital publishing projects, as I appreciated his ability to grasp technical detail and translate it into terms that a general manager could understand, and act upon. He has applied that same skill to a superb new ebook on the business risks of generative AI, which I was lucky enough to read in advance of publication. It combines deep research, logical analysis and clear business recommendations.

George Walkley

I bought it, and I read most of it (skimming the middle part), and it is brilliant. Thank you!

Matthias Büchse

I just bought the book this morning and it’s exactly what I needed. I have not seen a clearer description of how generative AI works, what it might be good for, and what the risks are. The references alone are worth the price of the book.

Dave Cramer

When it comes to the current hype surrounding AGI and LLMs, whether you’re a true skeptic (like me) or a true believer, The Intelligence Illusion is a splash of lemon juice in the greasy pool of incredulous media coverage. Accessible for anyone who’s spent more than 15 minutes with a clueless executive or myopic developer (or, frankly, engaged with any of the technological “disruptions” of the past two decades), Bjarnason rigorously unpacks the many risks involved with the most popular use cases being promoted by unscrupulous executives. He brings plenty of receipts to support his observations, too, while also spotlighting areas where this technology might have legitimate potential for good. Highly recommended!

PS: The images throughout do an amazing job of subtly reinforcing the book’s title and premise and would be worthy of a print edition.

Guy LeCharles Gonzalez

Buy The Intelligence Illusion (Second Edition)

Get the four-book bundle of all my ebooks, The Intelligence Illusion, Out of the Software Crisis, Yellow, and Bad Writing, in PDF and EPUB, for $49 USD together, a 45% discount off the combined price.

Buy the four-book bundle: The Intelligence Illusion + Out of the Software Crisis + Yellow + Bad Writing and Other Essays, for $49 USD.

Or, buy The Intelligence Illusion (Second Edition): Why generative models are bad for business in PDF and EPUB for $35 USD.

Buy the ebook: The Intelligence Illusion, for $35 USD.

Excerpts from the book

From Artificial General Intelligence and the bird brains of Silicon Valley

Because text and language are the primary ways we experience other people’s reasoning, it’ll be next to impossible to dislodge the notion that these are genuine intelligences. No amount of examples, scientific research, or analysis will convince those who want to maintain a pseudo-religious belief in alien peer intelligences. After all, if you want to believe in aliens, an artificial one made out of supercomputers and wishful thinking feels much more plausible than little grey men from outer space. But that’s what it is: a belief in aliens.

(Read the full chapter online)

From Beware of AI pseudoscience and snake oil

Even the latest and greatest, the absolute best that the AI industry has to offer today, the aforementioned GPT-4, appears to suffer from this issue: its unbelievable performance in exams and benchmarks seems to be mostly down to training data contamination.

When its predecessor, ChatGPT using GPT-3.5, was compared to less advanced but more specialised language models, it performed worse on most, if not all, natural language tasks.

There’s even reason to be sceptical of much of the criticism of AI coming out from the AI industry.

Much of it consists of hand-wringing that their product might be too good to be safe—akin to a manufacturer promoting a car as so powerful it might not be safe on the streets. Many of the AI ‘doomsday’ style of critics are performing what others in the field have been calling “criti-hype”. They are assuming that the products are at least as good as vendors claim, or even better, and extrapolate science-fiction disasters from a marketing fantasy.

The harms that come from these systems don’t require any science-fiction—they don’t even require any further advancement in AI. They are risky enough as they are, with the capabilities they have today. Some of those risks come from abuse—the systems lend themselves to both legal and illegal abuses. Some of the risks come from using them in contexts that are well beyond their capabilities—where they don’t work as promised.

(Read the full chapter online)

From AI code copilots are backwards-facing tools in a novelty-seeking industry

These two biases combined mean that users of code assistants are extremely likely to accept the first suggestion the tool makes that doesn’t cause errors.

That means autocompletes need to be substantially better than an average coder to avoid having a detrimental effect on overall code quality. Obviously, if programmers are going to be picking the first suggestion that works, that suggestion needs to be at least as good as what they would have written themselves unaided. What’s less obvious is that the lack of familiarity—not having written the generated code by hand themselves—is likely to lead them to miss bugs and integration issues that would have been trivially obvious if they had typed it out themselves. To balance that out, the generated code needs to be better than average, which is a tricky thing to ask of a system that’s specifically trained on mountains of average code.

Unfortunately, that mediocrity seems to be reflected in the output. GitHub Copilot, for example, seems to regularly generate vulnerable code with security issues.

From the book’s finale

The intelligence illusion, the conviction that these are artificial minds capable of powerful reasoning, combined with anthropomorphism, supercharges our automation bias. Our first response to even the most inane pablum from a language model chatbot is awe and wonder. It sounds like a real person at your beck and call! The drive to treat it as not just a person but an expert is irresistible. For most people, the incoherence, mediocrity, hallucinations, plagiarism, and biases won’t register over their sense of wonder.

This anthropomorphism-induced delusion is the fatal flaw of all AI assistant and copilot systems. It all but guarantees that—even though the outcome you get from using them is likely to be worse than if you’d done it yourself, because of the flaws inherent in these models—you will feel more confident in it, not less.

Buy the ebook for $35 USD

Table of Contents

INTRODUCTION

  1. Generative models are bad for business
  2. Some definitions
  3. Generative models are data
  4. The bubble makes generative models too risky for business
  5. AI and the bird brains of Silicon Valley
  6. The Intelligence Illusion

THE RECOMMENDATIONS

  1. “AI” is an industry dominated by bullshit and snake oil
  2. Avoid large and general-purpose generative models
  3. Strengthen your defences against fraud and abuse
  4. Do not give customers direct access to a generative model
  5. If you have to, use generative models mostly to modify

THE RISKS

  1. Are they breaking laws or regulations?
  2. Generative model copyright issues
  3. Much of the training data is biased, harmful, or unsafe
  4. Generative models fabricate and don’t do facts
  5. Much of the output is biased, harmful, or unsafe
  6. The Mediocrity Trap
  7. Shortcut reasoning
  8. Code quality

WHAT NEXT?

  1. The Tech Industry took a wrong turn
  2. Cracks and shadows
  3. The era of centralised compliance engines has begun
  4. Further reading
  5. References
  6. End notes

Buy the ebook for $35 USD

Refund Policy

If you aren’t happy with the purchase, I’ll give you a refund, no questions asked, even beyond the EU’s 14-day statutory requirement, up to a year from purchase.

The Author

Baldur Bjarnason

My name is Baldur Bjarnason.

I’m an independent scholar and journalist who writes about technology and software. I’ve been a web developer for over twenty-five years and continue to take on projects as a software development consultant and researcher. My clients in the past have included companies small and large, startups and enterprises, and both not-for-profits and for-profits.
