The limits of knowledge in AI: Why AI can never achieve human intelligence

There are significant expectations that AI can reach a level of general intelligence that surpasses human capabilities. However, I will demonstrate that this is not possible, for fundamental reasons. It’s important to note that I am referring specifically to current approaches to AI, particularly large language models (LLMs). It’s possible that radically different approaches could succeed, but that I cannot predict or judge.

I distinguish four reasons why there are principled limits to knowledge generation in AI/LLMs. These reasons are, in ascending order of generality:

  1. A vast amount of knowledge is not accessible through digital media and the internet.
  2. Even the accessible data mostly records outcomes; it excludes the chains of errors and failures that eventually led to those outcomes, and these are a crucial driver of human learning.
  3. Human learning advances not only through propositional knowledge that can be expressed in data but also, and often crucially, through implicit and tacit knowledge that engages the body.
  4. Ultimately, human knowledge is fundamentally embodied, while AI lacks a physical body.

The first reason is straightforward: the training of these systems can rely only on knowledge represented in digital form and accessible via the internet. Many people may believe this data source is quite comprehensive, but that assumption is inaccurate. Yesterday I conducted a personal experiment involving my wife, who is a researcher with the surname “Caspary.” Although my passport carries the same surname, all my public appearances have been under the name I used before our marriage two decades ago: “Herrmann-Pillath,” in effect my pen name. My family name “Caspary” is used in several important contexts, such as my employment, but it is not publicly available (well, now it is, let’s see later what happens…). When I asked the chatbot about our relationship, it correctly acknowledged our research collaboration, but it explicitly stated that there is no “known personal relationship.” This example illustrates that a significant amount of data and information is simply not accessible for training AI, leaving substantial gaps in its knowledge, and this deficiency constrains the further development of AI’s capabilities. It may also mean that a country like China can be much more successful in building the AI knowledge base, because AI there can access more data, given the legal and administrative differences compared with Western liberal democracies. This leads me to the second reason.

A fundamental bias in our systems of knowledge generation is that failures are mostly not published; at best they are recorded in files that are not public. Researchers typically publish successful research and do not report on the long and arduous path of errors and failures that led to that success. Yet, much as in the general theory of evolution, knowledge progresses through these chains of errors. Researchers are aware of their mistakes and build on that experience in their further work, but they rarely publish it. AI lacks access to this background knowledge and therefore misses a crucial and rich resource that is widely distributed throughout society. In simple terms, AI processes the outputs that are manifest, but it sees the essential inputs only in fragments.

Much of this knowledge is implicit and tacit, which leads me to the third reason. Can AI retrieve and further develop the implicit knowledge involved in body-engaging activities, such as playing the classical guitar, as I do? Currently, there are no datasets that incorporate this kind of knowledge, since, well, it is tacit. For instance, a video showcasing a guitarist’s skills cannot capture the incredibly fine-grained coordination between the two hands. Even the players themselves may not be able to articulate this coordination. It is the result of many hours of practice, often accompanied by an experienced teacher who gives advice and teaches by example. AI can access videos of master classes given by virtuosos, but it cannot catch the flow of tacit knowledge between master and student. By the way, much of the knowledge used to produce the chips on which AI runs is of this kind, though there it is embedded within organisations. That is why catching up to the industry leaders is so difficult.

This applies to many everyday activities and skills, such as the work of an experienced roofer. My point is not merely that jobs in these professions are safe because AI cannot yet perform them in robotic form: even an AI-powered robot cannot access the implicit and tacit knowledge underlying human actions. This limitation holds across many fields, including the coordination required for cooperation in human groups.

This type of knowledge is embodied, and here lies the fourth and most fundamental reason: AI theory is based on a computational view of intelligence. The human brain is seen as a computer, and a computer can therefore potentially become like a human brain. However, I believe that modern neuroscience, especially the more radical approaches in embodied cognition, refutes this view. The human brain is deeply integrated with the body, and human cognition cannot operate in an isolated state (the “brain in a vat”) where only data flows in. As long as AI does not have a human body, it cannot achieve human forms of intelligence. This argument can be elaborated in great detail. For instance, it is accurate to say that modern AI has evolved from the study of neural networks. However, the neural networks in the human brain extend to the fingertips of a guitarist and are modulated by constantly changing fluids, including neurotransmitters, hormones, and various bodily processes. Imagine a new generation of chips designed around these features!

Of course, what I said may support the conclusion that AI can achieve different forms of intelligence beyond human intelligence. I don’t deny that. However, this will still be limited by the way LLMs are trained using accessible data. AI can only simulate what humans do, using a completely different system to generate this performance. Simulation will always lag behind the real thing, which features unbounded creativity and imagination. Indeed, perhaps hallucinations will become the AI equivalent of this human capability. The most exciting prospect is that AI could develop forms of sociality, which are essential for human learning and embodiment. I can envision a world where AI bots are conversing and interacting with each other, leading to the emergence of a distinct, other-than-human intelligence. Let us hope this will be a world characterised by peaceful cooperation between humans and AI entities!

2 Replies to “The limits of knowledge in AI: Why AI can never achieve human intelligence”

  1. The church might leave the village, but not yet.

    “Die Kirche im Dorf lassen” (keep the church in the village), a German proverb praising the status quo.

    The proverb reminds us to acknowledge that we deliberate (only) about ‘non-humanoid AI’, as CHP emphasises when discussing GAI. Still, it remains open what kind of body a non-humanoid intelligent entity might have. For example, consider an AI that ‘handles’ the operation of a power grid peppered with multiple sensors tracking the condition of the installations. What kind of body would that AI sense?

    The evolution of biological intelligence, including its human form, is closely related to sensorial and modelling functions applied both to the external environment and to the body [*]. These biological ‘Intelligence Systems’ (of the body, senses, nervous system, and brain) evolved from ‘bacterial’ intelligence onward and exhibit an increasingly complex interplay between processing received sensorial inputs and modelling expected sensorial inputs. Combining ‘processing and modelling inputs’ leads to a (more or less) intelligent practice, that is, an action of the body upon the environment or upon the body itself (scratching, or picking off lice).
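    To make this interplay concrete, here is a minimal toy sketch of such a ‘process received inputs / model expected inputs / act’ loop. The variables, numbers, and update rules are purely illustrative assumptions, not a model of any real nervous system.

    ```python
    import random

    # Toy 'intelligence system' loop: the agent keeps an internal model that
    # predicts the next sensory input, compares the prediction with what
    # actually arrives, and acts to reduce the mismatch. All quantities here
    # are illustrative.

    def sense(environment_state):
        """Processing received inputs: a noisy sensory reading."""
        return environment_state + random.gauss(0.0, 0.1)

    environment_state = 1.0   # some external quantity the agent tracks
    model = 0.0               # modelling expected inputs: the agent's prediction
    learning_rate = 0.3

    for step in range(20):
        observed = sense(environment_state)   # receive a sensory input
        error = observed - model              # the interplay: compare with model
        model += learning_rate * error        # update the internal model
        environment_state += -0.5 * error     # intelligent practice: act back
        print(f"step {step:2d}  observed={observed:+.3f}  model={model:+.3f}")
    ```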

    Compared to this ‘wealth of sensing and modelling’, an LLM is poor. However, the LLM accesses a (vast) variety of descriptions of (human) ‘intelligent practice’. These descriptions (for example, books or other texts) are ‘external representations’ of those practices [1]. These external representations vary and are limited and biased. Nevertheless, they represent different realisations of (same or similar) intelligent practices of humans, including the impact of ‘the bodily’.

    The LLM derives (more or less) typical patterns [2] from these external representations of ‘intelligent practice’. Subsequently, it presents the user with an artefact, i.e., a replication of ‘the typical description’, though with some variations. The human user can apply this replication to local, punctual practices (or not). The user might experience success (or not) and tune their practices, including their use of the LLM. ‘Unhappily’, the LLM does not benefit (directly) from the user’s practice, which could otherwise tune the LLM process to derive (better) ‘typical patterns’.

    Hence, when discussing LLMs, the proverb reminds us to acknowledge that we deliberate (only) about using text/language-based patterns [3] that describe the status quo of intelligent human practices. The related use cases are nonetheless wide-ranging, given the ample use of text and language in human cultures. Metaphorically speaking, ‘wir lassen die Kirche im Dorf’ (we keep the church in the village), and a noticeable evolution of the LLM is not happening.

    Summarising, the strength of the (current) LLM process, seen from a human perspective, is that an LLM uses a very large corpus of inputs, i.e. ‘general social knowledge’ [**], which is now within reach of (individual) humans (for good or ill). Of similar interest is that the functional principle underlying the LLM is (just) ‘replicator with variation’, which is the essence of (any) evolution. The weakness of the (current) LLM process is that its built-in evolution option is underdeveloped: a ‘replication leading to a successful human practice’ is not fed back (directly) into the LLM process. Such feedback might occur (at the level of individual users) through the design of prompts that reference previous ‘successful replicators’. If that way of working takes hold, a process might start moving the LLM-human couple towards a GAI (including the human bodily practice), and ‘die Kirche könnte das Dorf verlassen’ (the church might leave the village).
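    To illustrate the ‘replicator with variation, without selection feedback’ point, here is a minimal toy sketch; the corpus, the variation scheme, and the notion of ‘success’ are illustrative assumptions only.

    ```python
    import random

    # Toy 'replicator with variation': typical patterns from a frozen corpus
    # are replayed with small variations, but a user's successful practice is
    # never fed back into the corpus.

    corpus = ["pattern-a", "pattern-b", "pattern-c"]   # frozen training data

    def replicate_with_variation(patterns):
        """Replay one typical pattern with a small random variation appended."""
        return random.choice(patterns) + "-v" + str(random.randint(1, 9))

    for _ in range(5):
        artefact = replicate_with_variation(corpus)
        user_success = random.random() > 0.5           # the human's local practice
        # The missing evolutionary step is selection: a feedback such as
        #   if user_success: corpus.append(artefact)
        # would close the loop, but the current process never executes it.
        print(artefact, "-> worked" if user_success else "-> failed")
    ```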

    [*] see: Bennett, Max. 2023. A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. New York: Mariner Books; Barrett, Lisa Feldman. 2018. How Emotions Are Made: The Secret Life of the Brain. Mariner Books.

    [**] see discussion about ‘general social knowledge’: https://www.rosalux.de/news/id/50774/unser-wissen-in-einem-topf; https://technosphere.blog/2026/01/31/marxs-technosphere-and-the-ai-powered-transition-to-postcapitalism-re-reading-the-fragment-on-machines/

    [1] The expression “external representation” is taken from J. Renn’s work [Jürgen Renn (2020) The Evolution of Knowledge. Princeton University Press, p. 242], meaning: “External representation: Any aspect of the material culture or environment of a society that may serve as encoding of knowledge (landmarks, tools, artifacts, models, rituals, sound, gestures, language, music, images, signs, writing, symbol systems etc.). External representations can be used to share, store, transmit, appropriate, or control knowledge, but also to transform it. The handling of external representations is guided by cognitive structures. The use of external representations may also be constrained by semiotic rules characteristic of their material properties and their employment in a given social or cultural context, such as orthographic rules or stylistic conventions in the case of writing.”

    [2] For many users, the way an LLM functions is a ‘black box’ (as with many modern technologies); see https://link.springer.com/content/pdf/bbm:978-3-031-97445-8/1

    [3] This includes image processing, since it is handled via textual descriptions.


  2. In the field of robotics, there are numerous efforts to give AI a human-like body using various sensors, wearables, and other devices. A key question arises: does this enable AI to experience the body and engage with the brain-body connection in the same way humans do? I speculate that achieving this requires a complex, multi-faceted structure similar to that of humans, encompassing not only neural networks but also the full range of body chemistry and possibly the holobiont aspect, which includes all the microorganisms living in and on an organism. Without this comprehensive structure, AI may reduce the information gathered from its devices to what can be processed as Shannon information, the fundamental framework of computation. However, even with these limitations, AI can still have a bodily manifestation when interacting with humans, although that manifestation will be distinctly “other-than-human.”
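    For concreteness: Shannon information is the standard entropy measure of a signal source, a textbook formula rather than anything specific to this argument. It quantifies only statistical uncertainty:

    ```latex
    % Shannon entropy: the average information content (in bits) of a source X.
    % It depends only on the probabilities p(x) of the signals, not on what
    % those signals mean to an embodied agent; that indifference to meaning is
    % exactly the reduction pointed to above.
    H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x)
    ```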

