A growing body of work posits that the ability to predict lies at the core of intelligence across all living systems and emerging artificial ones.
Decades of Narrow Assumptions Shattered
Scientists long viewed true intelligence as dependent on specialized internal structures, such as explicit reasoning engines or symbolic logic systems.
Early machine learning approaches focused on narrow tasks like image classification or text sentiment detection. These systems approximated mathematical mappings from inputs to outputs, yielding what experts termed artificial narrow intelligence. Researchers dismissed next-word prediction in language as a mere statistical exercise, far removed from genuine cognition. Common sense, planning, and conceptual understanding appeared absent in such models. This perspective dominated AI research for years, limiting expectations for broader capabilities.
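That input-to-output framing can be made concrete with a deliberately tiny sketch: a keyword-count sentiment detector that maps a string to a label, with no knowledge or reasoning anywhere in the pipeline. The word lists here are hypothetical; real systems learn such mappings from labeled examples, but the narrow input → output shape is the same.

```python
# A narrow "sentiment detector": a fixed mapping from text to a label.
# The keyword sets are hypothetical stand-ins for learned parameters.
POSITIVE = {"great", "good", "love", "excellent"}
NEGATIVE = {"bad", "awful", "hate", "terrible"}

def classify(text: str) -> str:
    """Map input text to one of three labels by counting keywords."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify("I love this great movie"))   # positive
print(classify("that was a bad terrible film"))  # negative
```

However well such a mapping performs on its one task, nothing in it generalizes: the system cannot answer a question, plan, or explain its own label, which is exactly the "artificial narrow intelligence" ceiling described above.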
Language Models Reveal Hidden Depths
Large neural networks trained on vast text datasets upended these ideas when they excelled at diverse challenges.
Originally designed for predicting subsequent words, these models began answering complex questions, tackling math problems, passing professional exams, generating code, and engaging in fluid dialogues. The breakthrough stemmed from the richness embedded in language itself. Accurate predictions demanded knowledge recall, logical inference, everyday reasoning, and even inferences about others’ mental states. What seemed like a simplistic goal unveiled layers of cognitive prowess. This shift highlighted how prediction encapsulates multifaceted mental processes.
- Question-answering across domains
- Problem-solving in math and logic
- Code generation for practical use
- Conversational fluency with context
- Commonsense judgments in scenarios
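The predictive objective behind these capabilities can be illustrated with a minimal sketch: a bigram model that picks the most frequent next word from a toy corpus. The corpus and names here are hypothetical, and real language models learn distributed representations over billions of tokens rather than count tables, but the training signal, predict the next token, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for web-scale training text (hypothetical).
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str):
    """Return the most frequent word observed after `word`, or None."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The gap between this count table and a modern language model is the article's point: when the corpus is all of human text and the predictor is expressive enough, getting the next word right starts to require knowledge, inference, and models of other minds.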
Simulation or Substance? The Ongoing Debate
The success of these models sparked sharp divisions among thinkers.
Critics maintain that the systems only mimic intelligent behavior without deeper comprehension. Proponents counter that functional performance, meaning consistent success in tests of knowledge and reasoning, defines intelligence more reliably than unobservable internal processes. This view echoes Alan Turing's early arguments, which prioritized observable behavior over speculative mechanisms. Insisting on biological substrates or hidden understanding risks sidelining empirical evidence. The discussion now centers on whether behavior alone suffices for claims of intelligence.
A Functional Lens on Minds and Machines
Advocates like Blaise Agüera y Arcas extend this reasoning to a broader, biology-inspired framework.
Just as a kidney’s identity derives from its filtration role rather than its atomic makeup, intelligence emerges from capabilities like reasoning and adaptation. An artificial system matching these functions qualifies under this definition. Computation serves as the universal foundation for such emergence in life and AI alike. Explore these ideas further in Agüera y Arcas’s book, What is Intelligence?, complete with interactive visuals.
| Old Paradigm | New Paradigm |
|---|---|
| Requires specialized modules | Emerges from prediction tasks |
| Narrow AI as limit | General abilities from scale |
| Internal states define it | Behavior determines it |
Key Takeaways
- Prediction embeds reasoning, knowledge, and theory of mind.
- Functional performance trumps biological origins.
- AI successes signal computation’s role in all intelligence.
This evolving perspective demystifies intelligence as an emergent property of predictive computation, urging a reevaluation of minds both natural and artificial. What implications do you see for the future of AI? Share your thoughts in the comments.



