Is ChatGPT Just Autocorrect On Steroids?


I often come across comments on social media such as “ChatGPT is nothing but autocorrect on steroids” or “It just predicts the next word; there’s no intelligence in it.” A widely shared social media post makes the same point: today’s AI tools are nothing but machines that generate what a typical response to a specific question would sound like.

While there is some truth to this, it is not the whole story anymore.

A Historical Review

For many, ChatGPT, which was launched at the end of 2022 and was built on GPT-3.5, was their first experience with generative AI. But the underlying technology had already been around for a few years: GPT-1 (2018), GPT-2 (2019) and GPT-3 (2020). These models were trained to predict the next word in a text sequence. In other words, if you asked them a question, they generated the statistically most likely next words. Nothing more, nothing less.
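
To make “predict the next word” concrete, here is a toy, hypothetical sketch in Python. It simply counts which word tends to follow which in a scrap of training text, then generates by always picking the most frequent follower. Real GPT models use neural networks over subword tokens and vastly more data, but the generation loop is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word tends to follow which in the
# training text. Real GPT models learn these statistics with a neural
# network over subword tokens, but the generation loop is the same idea.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def generate(prompt, max_words=8):
    """Repeatedly append the statistically most likely next word."""
    output = prompt.split()
    for _ in range(max_words):
        candidates = follow_counts.get(output[-1])
        if not candidates:
            break  # this word never appeared during "training"
        most_likely = candidates.most_common(1)[0][0]
        output.append(most_likely)
    return " ".join(output)

print(generate("the cat"))  # fluent-looking, but purely statistical
```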

However, while this is how the models work at a basic level, in practice they could still perform a number of useful tasks. GPT-2 and GPT-3 were already able, with the right input, to translate, summarise and answer questions without any special fine-tuning (so-called ‘few-shot’ or ‘zero-shot’ use).
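
As an illustration of few-shot use, the prompt below (a made-up example, not taken from any particular paper) contains a couple of translation pairs. Fed to a plain text-completion model, the statistically most likely continuation after the final “French:” is the translation itself. No fine-tuning involved, only a cleverly constructed input.

```python
# A hypothetical few-shot prompt: no fine-tuning, just examples in the
# input. A plain next-word predictor completes the pattern because the
# translation is the most likely continuation.
few_shot_prompt = """Translate English to French.

English: Good morning.
French: Bonjour.

English: Thank you very much.
French: Merci beaucoup.

English: See you tomorrow.
French:"""

print(few_shot_prompt)  # this string would be sent to the model as-is
```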

What made ChatGPT different was its post-training on instructions and human feedback, as well as a dialogue format with guidance and some filters, which significantly improved its usefulness for the general public. A more accurate description is therefore that earlier generations had ‘next word’ as their training goal, but in pursuing it they learned patterns that gave them practical abilities, something that was further refined in ChatGPT.

A few years ago, Noam Chomsky, the well-known linguist and philosopher, downplayed the hype in blunt terms. In an interview for Debunking the Great AI Lie, a video published on the WebSummit YouTube channel, he remarked that “GPT is really just autocomplete on steroids, but it gives this illusion that it’s more than that”. Chomsky further said that such systems are nothing more than parrots repeating what they have absorbed, stealing from existing knowledge without ever creating anything genuinely new.

While Chomsky was at least partly correct when the statement was made, it still somewhat misses the point: LLMs were, and are, at a relatively early stage of development. It is new technology, and no new technology is perfect at its start. If we had applied the same argument to other novel technologies, such as the Wright brothers’ first plane, the first web browser, or the earliest mobile phones, they might never have developed into what we take for granted today. On another note, it can be argued that we humans also just reuse what we have been taught, or at least that a lot of what we do can be described along those lines.

For specific tasks, the early versions were perfectly adequate: translation, summarising, standard weather or sports reporting, or even polishing text you had already drafted. In those domains, “predict the next bit” was all you needed. But the limits quickly showed. Ask those early models a scientific question, and they might happily “cite” a non-existent paper. Not intentionally, but because they were doing exactly what they were built for: sounding right, not being right.

The Shift: Why “Autocorrect on Steroids” Is an Outdated Explanation

However, things have moved on since November 2022. Yes, modern versions are still built on next-word prediction. But a lot has been added on top of that simple idea:

  • Human Feedback: Models are fine-tuned on feedback from human reviewers and users (via what is called reinforcement learning from human feedback), steering them toward answers that are useful and reliable rather than merely fluent.
  • Retrieval-Augmented Generation (RAG): Many models are now plugged into search systems, databases, or APIs. Instead of just making things up from their predictions, they can look things up and base their answers on real data, which may not even have existed when they were initially trained (a minimal sketch follows after this list).
  • Built-in Python environment: Some LLMs can now translate a question into programming code, run that code in their own Python environment, and then turn the result back into normal language. This means they can call on suitable Python libraries under the hood to handle data analysis, graphing, or complex maths, and give you a clear, text-based answer instead of a guess (see the second sketch below).
  • Reasoning Techniques: The latest models are built to “think through” problems step by step. When they appear to pause and “reason”, they are internally generating and working through intermediate logical steps before producing the final response.
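
To illustrate the RAG idea from the list above, here is a deliberately simplified sketch. The retrieval step is plain keyword overlap and the final model call is left out; real systems use embeddings, vector databases, or live web search. But the shape of the pipeline is the same: retrieve first, then answer from what was retrieved.

```python
# Minimal sketch of the RAG idea: look the answer up first, then hand the
# retrieved text to the model. Retrieval here is naive keyword overlap;
# real systems use embeddings and vector databases.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday to Friday.",
    "Premium subscribers get priority email support.",
]

def retrieve(question):
    """Return the document sharing the most words with the question."""
    question_words = set(question.lower().split())
    return max(
        documents,
        key=lambda doc: len(question_words & set(doc.lower().split())),
    )

question = "What is the refund policy?"
context = retrieve(question)

# The model is asked to answer *from the retrieved text*, not from memory;
# `prompt` would be sent to whatever LLM API you use (call not shown).
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```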
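
And to make the built-in code execution concrete, here is an equally simplified sketch. In a real system the model writes the code itself and the platform runs it in a secured sandbox; here the “model output” is hard-coded so the example stays self-contained, and a plain `exec` stands in for the sandbox.

```python
import contextlib
import io
import statistics

# Pretend this string is code the model wrote after being asked
# "What is the average of 3, 7 and 20?". In a real system the model
# generates it; it is hard-coded here to keep the sketch runnable.
model_generated_code = "print(statistics.mean([3, 7, 20]))"

def run_in_sandbox(code):
    """Execute model-written code and capture whatever it prints."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {"statistics": statistics})
    return buffer.getvalue().strip()

result = run_in_sandbox(model_generated_code)
print(f"The average of 3, 7 and 20 is {result}.")  # -> ... is 10.
```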

Today’s artificial intelligence may not be intelligent in the sense we humans usually mean by that word, but what is much more important is whether AI adds value to human lives. With these additions, AI certainly adds more value than it did in the early days.

Why This Matters for Business

Here is where AI can create real value for organisations; the common theme is that these systems extend human processes instead of replacing them:

  • Tedious, repeatable work: AI can handle low-level work such as drafting, data entry, or text tasks that we find boring but that must be done consistently.
  • Data analysis: It can go through large amounts of data, highlight anomalies, and save time by preparing material for us to review.
  • Error reduction: Used carefully, AI can reduce mistakes in repetitive work where we slip up out of tiredness or where distraction often creeps in.
  • Agentic AI: Early experiments with agentic systems suggest AI can take on small, bounded workflows end-to-end, like preparing reports or triaging support tickets.
  • Coding support: It can generate simple, common code, suggest how existing code can be improved, or catch trivial mistakes. But let’s be clear: it does not replace skilled developers for anything non-trivial. The marketing hype is miles ahead of what is practical on the ground.

This is not about machines taking over work. It is about humans being freed from the repetitive and error-prone parts of work so they can focus on what really requires judgement and creativity.

What AI Can Really Do Today

So what, then, is a better description of ChatGPT today?

It is a system that:

  • Supports us by doing the heavy lifting of routine work,
  • Helps analyse and organise information,
  • Can call other tools or access external data,
  • And increasingly reasons through problems rather than just completing sentences.

It is no longer just “autocorrect on steroids.” It is a productivity layer: not perfect, sometimes silly, but increasingly dependable when used in the right way.

The ROI Problem

Still, there is a catch. While many of us have found that ChatGPT and other AI tools add enormous value for day-to-day tasks, companies betting their entire business model on them often struggle.

A recent MIT study found that 95% of organisations investing in generative AI pilots have zero returns to show for the billions they have poured in. Researchers reviewed 300 public AI projects, interviewed 150 tech leaders, and surveyed 350 employees. Despite productivity gains at the individual level, those investments have not translated into company profits.

The issue? Because tools like ChatGPT or Microsoft Copilot are trained on all kinds of data, they are not experts in any specific organisation’s needs, and they do not adapt to an organisation’s specific workflows. They shine for flexible, one-off use, but they stall in enterprise settings. The 5% of pilots that do succeed are the ones tightly designed around the company’s own processes, not bolted on as a shiny tool for marketing or sales.

Maybe, as one commentator put it, some of those billions would have been better spent on an office pizza party.

Closing Thoughts

So yes, ChatGPT, Copilot, Gemini, Grok, Claude, Perplexity, and the rest are built on LLMs that predict the next word. But with these additions, in practice they have become something much more: systems that support humans, extend processes, and save time, while still relying on us to steer, validate, and apply judgement. And yes, they still sometimes hallucinate or say stupid things, but so do humans. Isn’t it interesting that we expect more from AIs? Just as we are more than “autocorrect on steroids”, so are today’s AI models.

For businesses like ours, the lesson is clear: the hype is one thing, but the value lies in carefully designed, human-centred applications. AI is not, at least at the moment, replacing people, apart from possibly in very dull and repetitive areas. It is extending what we can do.

If you want to speak about how AI can help your business, then contact us today!
