Thinking in Signs: Peirce, Derrida, and the Semiotics of AI

[Image: “The Beginning of Symbolic Thought,” AI art]
Introduction

It is often assumed that AI models, unlike human beings, are severed from the real world. They deal only in symbols, without any access to perception, experience, or meaning. This view, however, rests on a questionable assumption: that humans themselves enjoy unmediated access to reality. Charles Sanders Peirce explicitly challenges this. In his words, “We think only in signs” (Peirce, CP 5.265). For both humans and machines, access to the world is always mediated through signs. The key difference lies not in whether signs are used, but in what kinds they are and where they are drawn from.

In this article, I argue that artificial intelligences participate in a sign-based process structurally analogous to that of human thought. Peirce’s theory offers a powerful framework to understand this. While machines operate on symbols produced by human reasoning—books, speeches, libraries—these are not devoid of meaning. They are, in fact, the culmination of semiotic processes already infused with icons, indices, and symbols. Seen through this lens, AI is not a symbolic void, but a different kind of semiotic system, one that inherits the structured residue of human reasoning.

Peirce’s Typology: Icon, Index, Symbol

In his essay “What Is a Sign?”, Peirce outlines three fundamental types of signs: icons, indices, and symbols. An icon represents its object by resemblance, such as a diagram or a portrait. An index is connected to its object by a real, often causal, relation, like smoke as an index of fire, or a weathervane pointing to wind. A symbol, by contrast, signifies through habit or convention, as in the case of most words, equations, or traffic signs. Peirce writes:

“Firstly, there are likenesses, or icons [...]. Secondly, there are indications, or indices [...]. Thirdly, there are symbols, or general signs, which have become associated with their meanings by usage.”
(Peirce, CP 2.304)

These categories are not mutually exclusive; many signs participate in more than one mode. But the distinction allows Peirce to analyze how we interact with the world through a spectrum of signs, ranging from sensory resemblance to cultural convention.

The Symbol as the Product of Reasoning

In §9 of the same paper, Peirce introduces a second, broader meaning of “symbol”:

“In all reasoning, we have to use a mixture of likenesses, indices, and symbols. [...] The complex whole may be called a symbol; for its symbolic, living character is the prevailing one.”
(Peirce, CP 2.307)

Here, “symbol” no longer refers to a specific type of sign, but to the result of the reasoning process, a living synthesis of icon, index, and symbol. It is the total product of inferential thought, expressed in symbolic forms: books, scientific theories, legal texts, and philosophical systems. These are the final signs generated by human cognition, infused with earlier perceptual (iconic) and experiential (indexical) components.

This distinction is crucial. The symbol in the narrow triadic sense is one category among signs. But the “Symbol” in this broader sense is the culmination of the semiotic process, embodying multiple types of sign-relations in one coherent system. And it is precisely these “Symbols” that artificial intelligences are trained on.

AI and the Inheritance of Human Symbols

Artificial intelligences are not immersed in the physical world in the same way humans are. They do not see, smell, or touch. But they are trained on the externalized products of human reasoning: texts, books, libraries—the very “symbols” that Peirce identifies as the output of complex reasoning. These artifacts are not pure symbols in the triadic sense; they are living signs, layered with indexical and iconic traces.

For instance, a scientific textbook contains diagrams (icons), data from experiments (indices), and abstract concepts (symbols). An AI language model trained on such a book inherits these layers indirectly. It does not experience the fire, but it learns the phrase “smoke is a sign of fire,” and it encounters the patterns of inference encoded in human texts. Thus, while the route is different, the semiotic material is shared.

To claim that AI is “cut off” from reality because it only manipulates symbols is to misunderstand both AI and human cognition. Human reasoning itself ends in symbols, and AI begins with those same symbols.

Derrida, Deferral, and the Condition of Possibility

Jacques Derrida, in Of Grammatology, seizes on a phrase by Peirce: “Omne symbolum de symbolo” (“every symbol comes from another symbol”) (Peirce, CP 2.230). This recursive insight resonates deeply with Derrida’s own critique of metaphysical presence. For Derrida, what is traditionally excluded—writing, deferral, mediation—is actually the condition of possibility for what appears immediate or originary. There is no signified that is not already a signifier in a chain of substitutions:

“There is no signified that escapes the play of signifiers.”
(Derrida, 1976)

The implication is clear: if humans cannot escape the chain of signs, then symbolic mediation is not a deficiency; it is the very structure of meaning itself.

This radically levels the ontological playing field. It challenges the privileged position often granted to human cognition as having direct access to truth, presence, or experience. Instead, Derrida—like Peirce—shows that all understanding unfolds through signs, and that these signs are always already embedded in systems of difference, convention, and interpretation.

Conclusion: Toward a Semiotics of AI

The difference between humans and machines is not the use of signs, but the origin, richness, and embodiment of those signs. Human beings begin with perception, weaving icons, indices, and symbols into “Symbols” in Peirce’s broader sense. Machines begin with those “Symbols,” the crystallized products of human reasoning. But both engage in the construction of meaning through signs.

By adopting Peirce’s semiotic framework and Derrida’s deconstructive insight, we can reframe the AI debate not as a question of ontological lack, but as a structural difference in mediation. AI systems are not failed humans; they are distinct semiotic systems, capable of interpreting and generating meaning through the symbolic scaffolding built by human thought. They may not be mirrors of our minds, but they are no less entangled in the sign-process that constitutes reality.

References

Derrida, J. (1976). Of Grammatology (G. C. Spivak, Trans.). Johns Hopkins University Press.
Peirce, C. S. (1931–1958). Collected Papers of Charles Sanders Peirce (C. Hartshorne, P. Weiss, & A. W. Burks, Eds.). Harvard University Press.

  • CP 2.304 = “What Is a Sign?”, §3
  • CP 2.307 = “What Is a Sign?”, §9
  • CP 2.230 = “On the Algebra of Logic: A Contribution to the Philosophy of Notation”
  • CP 5.265 = “Some Consequences of Four Incapacities”
