A user named TequilaMockingbird, who claims to be an engineer with an advanced degree in Applied Mathematics, sparked a debate on the forum "The Motte" by arguing that AI assistants like Grok, Claude, and Gemini are not intelligent. The user's post was met with a series of detailed technical and philosophical rebuttals from other members of the community.

The Original Argument: AI Is Not Intelligent

In a post titled "Is your 'AI Assistant' smarter than an Orangutan?", TequilaMockingbird lays out the case in several parts.

Definition of Intelligence: The author defines intelligence as a combination of "perceptivity" (the ability to take in new information) and "reactivity" (the ability to change state based on that information). By this metric, an orangutan that strategically escapes its enclosure is more intelligent than an AI.

Technical Claims:

  • How LLMs Work: The author describes Large Language Models (LLMs) as systems that turn words into mathematical vectors (embeddings). They claim that most public AI assistants consist of this embedding model plus a separate "interface layer" that predicts the next word.
  • The "Lorem Epsom" Problem: This concept represents the idea that LLMs prioritize generating text that appears correct over text that is factually accurate.
  • The "Hallucination" Problem: The author claims it is "mathematically impossible" for an LLM to distinguish truth from falsehood. They argue that from an LLM's perspective, statements like "Mary has 2 children" and "Mary has 1024 children" are nearly identical because the directionality of the vectors for the numbers is similar. This, they claim, is why LLMs struggle with counting.
  • Static Models: LLMs are described as static objects with frozen knowledge. Their limited "context window" prevents them from perceiving new information or adapting, which, in the author's framing, places them on the "insect" side of the intelligence spectrum (a toy sketch of the context-window point follows this list).
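To make the context-window point concrete, here is a toy sketch in plain Python. The token limit and the "tokens" are invented for illustration (real assistants use much larger windows and real tokenizers), but the mechanic is the same: whatever falls outside the window is no longer part of what the model conditions on, and the weights themselves do not change at inference time.

```python
# Toy illustration of a fixed context window. The limit and the hand-picked
# "tokens" are hypothetical; no real tokenizer is involved.
MAX_TOKENS = 8           # hypothetical limit, far smaller than real models
history: list[str] = []  # each string stands in for one token

def add_turn(tokens: list[str]) -> list[str]:
    """Append new tokens, then drop the oldest ones that no longer fit."""
    history.extend(tokens)
    del history[:-MAX_TOKENS]  # everything older than the window is gone
    return history             # this is all the model ever "sees"

add_turn(["The", "orangutan", "unbolted", "the", "enclosure", "door"])
print(add_turn(["and", "escaped", "through", "the", "service", "corridor"]))
# ['enclosure', 'door', 'and', 'escaped', 'through', 'the', 'service', 'corridor']
```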

Conclusion: Based on these points, TequilaMockingbird concludes that AI assistants are not on a viable path to Artificial General Intelligence (AGI) and that an orangutan is smarter.

Key Rebuttals and Counterarguments

The post received significant pushback, primarily from users rae and self_made_human, who challenged the author's technical understanding and philosophical framework.

Technical Corrections:

  • Incorrect Description of LLMs: User rae pointed out a "glaring error," stating that autoregressive LLMs (like GPT) are not embedding models with an interface layer. Instead, they are decoder models trained directly to predict the next token. What the author calls the "interface layer" is, in fact, the LLM itself. This correction was echoed by others, with user muzzle-cleaned-porg-42 noting the author's description was more akin to older word2vec models from 2013.
  • LLMs and Numbers: rae called the author's claim about numbers "nonsense," explaining that "2" and "1024" are completely different tokens, and an LLM's entire function is to output the most probable token based on context.
  • Counting Difficulties: self_made_human corrected the author's explanation for why LLMs struggle with arithmetic, attributing it to the way numbers are broken into tokens (tokenization), not the "directionality" of vectors (see the sketch after this list).
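As a rough illustration of these corrections, the sketch below assumes the Hugging Face transformers library and the small GPT-2 checkpoint; neither is named in the thread, they are simply convenient stand-ins. It shows (1) that "2" and "1024" are entirely different token sequences rather than nearby vectors the model might blur together, and (2) that a decoder-only model scores the next token directly, with no separate "interface layer".

```python
# Sketch only: GPT-2 and the `transformers` library are illustrative choices,
# not models or tools discussed in the thread.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 1. "2" and "1024" are different token sequences, not nearby points on a
#    number line that could be confused by "directionality".
print(tokenizer.encode(" 2"))     # a single token id
print(tokenizer.encode(" 1024"))  # different id(s); long numbers often split

# 2. A decoder-only LLM is trained end to end to score the next token; there
#    is no separate "interface layer" bolted onto a standalone embedding model.
inputs = tokenizer("Mary has", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits    # shape: (1, seq_len, vocab_size)
next_id = int(logits[0, -1].argmax())  # most probable next token
print(repr(tokenizer.decode([next_id])))
```

The tokenization step is also the usual explanation for awkward arithmetic: the model sees arbitrary chunks of digits rather than individual numerals.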

Philosophical and Conceptual Debates:

  • The Definition of Intelligence: self_made_human argued that the author's definition of intelligence is too narrowly focused on "agentic behavior" seen in biological life. Using a "fish judging a bird" analogy, they argued that LLMs represent a different kind of intelligence operating in the environment of human knowledge, not a physical one. They proposed alternative definitions, such as "the ability to achieve goals in a wide range of environments."
  • Learning "Truth": Several users contested the claim that LLMs cannot learn truth. self_made_human explained that Reinforcement Learning from Human Feedback (RLHF) creates a system where the model is rewarded for generating text that aligns with verifiable facts, effectively teaching it a "policy of truth-telling." They also cited recent academic papers suggesting that LLMs do develop internal representations (or "truth vectors") that can distinguish between true and false statements, meaning they may "know" when they are hallucinating.
  • Agentic Behavior: In response to the author's focus on agency, self_made_human pointed out that LLMs can be trivially connected to robotic bodies and sensors, thereby meeting the author's own definition. They cited Google's SayCan and a Nature paper on the topic as examples. They also highlighted research on "shutdown resistance," in which models independently took action to avoid being shut down so they could complete a task, suggesting the emergence of goal-directed behavior.
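The "truth vector" claim refers to probing work: fit a simple classifier on a model's hidden activations and check whether true and false statements separate. The sketch below is a deliberately tiny version of that idea, assuming GPT-2, the transformers library, scikit-learn, and four invented statements; the papers cited in the thread use far larger models and datasets.

```python
# Miniature probing sketch. Model choice, libraries, and the example
# statements are illustrative assumptions, not taken from the thread.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

statements = [
    ("Paris is the capital of France.", 1),  # true
    ("The sun rises in the east.", 1),       # true
    ("Paris is the capital of Japan.", 0),   # false
    ("The sun rises in the west.", 0),       # false
]

def last_hidden(text: str) -> torch.Tensor:
    """Hidden state of the final token, used as a crude summary of the claim."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[0, -1]

X = torch.stack([last_hidden(s) for s, _ in statements]).numpy()
y = [label for _, label in statements]

# A linear probe: if true and false statements separate linearly in hidden
# space, the probe's weights point along a rough "truth direction".
probe = LogisticRegression(max_iter=1000).fit(X, y)
test = last_hidden("Berlin is the capital of Germany.").numpy().reshape(1, -1)
print(probe.predict(test))  # 1 would mean "classified as true"
```

Four training examples prove nothing on their own; the point is only to show what "the model's internal state distinguishes true from false" means in practice.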

The Author's Response

TequilaMockingbird replied to the critiques by stating that the critics were "skimming" and "nit-picking" without engaging with the main thesis. They reaffirmed their stance that "agentic behavior," as depicted in science fiction like The Terminator or Star Trek, is the true goalpost for AGI, and any definition of intelligence that excludes it is "unfit for purpose." They dismissed the research on truth vectors as not providing an "actionable solution."
