From "Openness" to Image: AI, Generative Models, and the Logic of "Closure"
AI-generated images have rapidly become part of everyday visual culture. Portraits of people who never existed, landscapes that resemble photographs yet depict no place, and scenes that dissolve under scrutiny circulate widely, provoking fascination and unease. These images are often discussed in terms of imitation, realism, or deception. Do they copy the world? Do they distort it? Or do they mark a break with human meaning altogether?
Another way of approaching these questions emerges from Hilary Lawson’s Closure: A Story of Everything. Rather than asking what AI images represent, Lawson’s framework encourages us to ask how images — and meaning more generally — come into being at all. From this perspective, generative AI does not challenge human understanding so much as it makes visible a process that has always been at work: the process of closure.
Closure and the Open World
Lawson’s point of departure is a diagnosis of contemporary confusion. In a world without shared certainties, objectivity, or stable meaning, we find ourselves surrounded by competing stories, none of which can claim final authority. Rather than attempting to restore lost foundations, Lawson proposes a radical shift in how we think about the relationship between ourselves and the world.
At the centre of this shift is the distinction between openness and closure. The world, Lawson argues, is not composed of ready-made objects awaiting discovery. It is fundamentally open: undivided, indeterminate, and without inherent structure. What we experience as a world of things, facts, and identities arises only because we actively close this openness.
Closure is not representation. It does not mirror reality or correspond to how things truly are. Instead, it is a practical act of stabilization: a way of “holding still that which cannot be held at all” (Lawson, 2001). Through closure, we draw boundaries, create distinctions, and render the world usable. Language, perception, science, and even the sense of self are all products of this ongoing process.
Generative AI and the Production of Images
At first glance, AI image generation appears radically different from human sense-making. Yet when examined structurally, surprising similarities emerge. Generative models do not begin with objects, scenes, or meanings. They operate within vast spaces of possibility defined by statistical relationships. Prior to generation, there is no face, no hand, no sky — only an indeterminate field shaped by probabilities.
An image emerges through iterative constraint. Conditioning, prompts, and internal weightings progressively narrow what can appear, until a visually coherent configuration stabilizes. The resulting image looks like something, but it does not depict a pre-existing object or scene. It is neither retrieved from memory nor measured against reality. It comes into being through a process of closure.
Importantly, this process is indifferent to truth. The criterion is not accuracy but plausibility: does the image cohere, does it satisfy the conditions imposed upon it, does it “work”? In this respect, AI-generated images align closely with Lawson’s account of knowledge as effective intervention rather than correspondence.
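To make the structural point concrete, here is a minimal, purely illustrative sketch in Python (using only NumPy). It is not a description of any actual generative model, and every name and number in it is an invented stand-in: the `prior` array plays the role of the statistical regularities a trained model would encode, and the blending schedule is arbitrary. The sketch shows only the shape of the process described above: an indeterminate field of noise is progressively constrained until a stable, "plausible" configuration settles out, without reference to any pre-existing object.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "learned" structure: a smooth pattern standing in for the
# statistical regularities a real model would hold in its weights.
size = 64
xs, ys = np.meshgrid(np.linspace(0, 1, size), np.linspace(0, 1, size))
prior = np.sin(6 * np.pi * xs) * np.cos(4 * np.pi * ys)

# Start from openness: an indeterminate field of pure noise.
field = rng.normal(size=(size, size))

steps = 50
for step in range(steps):
    # Each pass constrains the field a little more: it is blended toward the
    # stand-in prior while the injected indeterminacy shrinks. No "truth" is
    # consulted at any point, only coherence with the imposed conditions.
    weight = (step + 1) / steps
    field = (1 - 0.1 * weight) * field + 0.1 * weight * prior
    field += rng.normal(scale=0.02 * (1 - weight), size=field.shape)

# What remains is a stabilized configuration: it resembles the prior, but
# nothing was retrieved or depicted; a form simply settled out of noise.
print("mean deviation from the stand-in prior:",
      round(float(np.abs(field - prior).mean()), 3))
```

Nothing in this loop "knows" what it is producing; it halts on stability rather than on truth, which is the analogy at stake in the following paragraphs.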
AI as an Instantiation of Closure
This leads to the central claim: AI instantiates the process of closure without meaning.
The choice of words matters. To say that AI embodies closure would risk attributing to it experience or intention. “Instantiation,” by contrast, names a concrete realization of a process without implying consciousness or understanding. Closure, for Lawson, is not a mental state but an operation. As such, it can be instantiated wherever indeterminacy is stabilized into form.
In generative AI, closure appears stripped of its biological and existential dimensions. There is no survival at stake, no concern, no orientation toward a world. Yet the same structural movement remains: an open field is progressively constrained until a temporary stability emerges. The image exists not because it is true, but because closure has succeeded.
Seen this way, AI does not imitate human creativity. It externalizes a logic that has always underpinned that creativity. What humans perform implicitly through perception and language, machines now perform explicitly through computation.
Failure, Drift, and the Limits of Closure
One of the most striking features of AI-generated images is their tendency to fail in subtle ways. Hands blur into extra fingers, faces slide into one another, spatial relations collapse. These errors are often described as technical shortcomings. From a Lawsonian perspective, they reveal something deeper.
Closure always fails. No closure fully captures the openness from which it emerges. There is always excess, slippage, or instability. Human closure conceals this failure through habit, embodiment, and shared practices. Machine closure exposes it. The glitches of AI images are not anomalies but expressions of closure’s inherent limits.
The difference, however, is decisive. Humans encounter the failure of closure as confusion, ambiguity, or mystery. AI does not encounter failure at all. It simply produces another stabilization. The absence of lived openness marks the boundary between human and machine closure.
What AI Makes Visible
Generative AI does not threaten meaning; it clarifies its conditions. By instantiating closure without understanding, AI reveals how much of what we take to be meaning depends not on access to truth, but on the successful stabilization of form. Lawson’s framework helps us see that the question is not whether AI understands the world, but how worlds — visual, linguistic, conceptual — are made possible in the first place.
In this sense, AI belongs less to the future of intelligence than to the archaeology of meaning. It does not replace human sense-making, but renders its underlying logic visible. Closure, once an implicit condition of experience, now appears before us — technical, indifferent, and strangely familiar.
Bibliography
Lawson, H. (2001). Closure: A Story of Everything. London: Routledge.
Manovich, L. (2018). AI Aesthetics. Moscow: Strelka Press.
Wittgenstein, L. (1953). Philosophical Investigations. Oxford: Blackwell.
