ISSN 2634-8578


03/08/2022

Worldmaking
We live in a period of unprecedented proliferation of constructed, internally coherent virtual worlds, which emerge everywhere, from politics to video games. Our mediascape is brimming with rich, immersive worlds ready to be enjoyed and experienced, or decoded and exploited. One effect of this phenomenon is that we are now asking fundamental questions, such as what “consensus reality” is and how to engage with it. Another effect is that there is a need for a special kind of expertise that can deal with designing and organising these worlds – and that is where architects possibly have a unique advantage. Architectural thinking, as a special case of visual, analogy-based synthetic reasoning, is well positioned to become a crucial expertise, able to operate on multiple scales and in multiple contexts in order to map, analyse and organise a virtual world, while at the same time being able to introduce new systems, rules and forms to it.[1]
A special case of this approach is something we can name architectural worldmaking,[2] which refers broadly to practices of architectural design that wilfully and consciously produce virtual worlds, and which understand worlds as the main project of architecture. Architects have a unique perspective and could have a say in how virtual worlds are constructed and inhabited, but there is a caveat, which revolves around questions of agency, engagement and control. Worldmaking is an approach to learning both from technically advanced visual and cultural formats such as video games and from scientific ways of imaging and sensing, in order to construct new, legitimate and serious ways of seeing and modelling.
These notions are central to the research seminar called “Games and Worldmaking”, first conducted by the author at SCI-Arc in the summer of 2021, which focused on the intersection of games and architectural design and foregrounded systems thinking as an approach to design. The seminar is part of the ongoing Views of Planet City project, in development at SCI-Arc for the Pacific Standard Time exhibition, which will be organised by the Getty Institute in 2024. In the seminar, we developed the first version of Planet Garden, a planetary simulation game, envisioned to be both an interactive model of complex environmental conditions and a new narrative structure for architectural worldmaking.
Planet Garden is loosely based on Edward O. Wilson’s “Half-Earth” idea, a scenario in which the entire human population of the world occupies a single massive city and the rest of the planet is left to plants and animals. Half-Earth is an important and very interesting thought experiment – almost a proto-design, a prompt, an idea for a massive, planetary agglomeration of urban matter which could liberate the rest of the planet to heal and rewild.
The question behind the game was: how could we actually model something like that? How do we capture all that complexity and nuance, how do we figure out stakes and variables, and how do we come up with consequences and conclusions? The game we are designing is a means to model and host hugely complex urban systems which unravel over time, while legibly presenting an enormous amount of information, both visually and through the narrative. As a format, a simulation presents different ways of imaging the world and making sense of reality through models.
The work on game design started as a wide exploration of games and precedents within architectural design and imaging operations, as well as of abstract systems that could comprise a possible planetary model. The question of models and the modelling of systems comes to the forefront and is contrasted with existing architectural strategies of representation.
Mythologising, Representing and Modelling
Among the main influences of this project were the drawings made by Alexander von Humboldt, whose work is still crucial for anyone with an interest in representing and modelling phenomena at the intersection of art and science.[3] If, in the classical sense, art makes the world sensible while science makes it intelligible, these images are a great example of combining these forms of knowledge. Scientific illustrations, Humboldt once wrote, should “speak to the senses without fatiguing the mind”.[4] His famous illustration of Chimborazo volcano in Ecuador shows plant species living at different elevations, and this approach is one of the very early examples of data visualisation, with an intent of making the world sensible and intelligible at the same time. These illustrations also had a strong pedagogical intent, a quality we wanted to preserve, and which can serve almost as a test of legibility.

The project started with a question of imaging a world of nature in the Anthropocene epoch. One of the reasons it is difficult to really comprehend a complex system such as the climate crisis is that it is difficult to model it, which also means to visually represent it in a legible way which humans can understand. This crisis of representation is a well-known problem in literature on the Anthropocene, most clearly articulated in the book Against the Anthropocene, by T.J. Demos.[5]
We do not yet have the tools and formats of visualising that can fully and legibly describe such a complex thing, and this is, in a way, also a failure of architectural imagination. The standard architectural toolkit is limited and also very dated – it is designed to describe and model objects, not “hyperobjects”. One of the project’s main interests was inventing new modalities of description and modelling of complex systems through the interactive software format, and this is one of the ideas behind the Planet Garden project.
Contemporary representational strategies for the Anthropocene broadly fall into two categories, those of mythologising or objectivising. The first approach can be observed in the work of photographers such as Edward Burtynsky and Louis Helbig, where the subject matter of environmental disaster becomes almost a new form of the aesthetic sublime. The second strategy comes out of the deployment and artistic use of contemporary geospatial imaging tools. As is well understood by critics, contemporary geospatial data visualisation tools like Google Earth are embedded in a specific political and economic framework, comprising a visual system delivered and constituted by the post–Cold War and largely Western-based military-state-corporate apparatus. These tools offer an innocent-seeming picture that is in fact a “techno-scientific, militarised, ‘objective’ image”.[6] Such an image displaces its subject and frames it within a problematic context of neutrality and distancing. Within both frameworks, the expanded spatial and temporal scales of geology and the environment exceed human and machine comprehension and thus present major challenges to representational systems.
Within this condition, the question of imaging – understood here as making sensible and intelligible the world of the Anthropocene through visual models – remains, and it is not a simple one. Within the current (broadly speaking) architectural production, this topic is mostly treated through the “design fiction” approach. For example, in the work of Design Earth, the immensity of the problem is reframed through a story-driven, narrative approach which centres on the metaphor, and where images function as story illustrations, like in a children’s book.[7] Another approach is pursued by Liam Young, in the Planet City project,[8] which focuses on video and animation as the main format. In this work, the imaging strategies of commercial science fiction films take the main stage and serve as anchors for the speculation, which serves a double function of designing a new world and educating a new audience. In both cases, it seems, the focus goes beyond design, as these constructed fictions stem from a wilful, speculative exaggeration of existing planetary conditions, to produce a heightened state which could trigger a new awareness. In this sense, these projects serve a very important educational purpose, as they frame the problem through the use of the established and accepted visual languages of storybooks and films.
The key to understanding how design fictions operate is precisely in their medium of production: all of these projects are made through formats (collage, storybook, graphic novel, film, animation) which depend on the logic of compositing. Within this logic, the work is made through a story-dependent arrangement of visual components. The arrangement is arbitrary as it depends only on the demands of the story and does not correspond to any other underlying condition – there is no model underneath. In comparison, a game such as, for example, SimCity is not a fiction precisely because it depends on the logic of a simulation: a testable, empirical mathematical model which governs its visual and narrative space. A simulation is fundamentally different from a fiction, and a story is not a model.
This is one of the reasons why it seems important to rethink the concept of design fiction through the new core idea of simulation.[9] In the book Virtual Worlds as Philosophical Tools, Stefano Gualeni traces a lineage of thinking about simulations to Espen Aarseth’s 1994 text called Hyper/Text/Theory, and specifically to the idea of cybertextuality. According to this line of reasoning, simulations contain an element not found in fiction and thus need an ontological category of their own: “Simulations are somewhere between reality and fiction: they are not obliged to represent reality, but they have an empirical logic of their own, and therefore should not be called fictions.”[10] This presents us with a fundamental insight into the use of simulations as the future of architectural design: they model internally coherent, testable worlds and go beyond mere fiction-making into worldmaking proper.
Simulations, Games and Systems
In the world of video games, there exists a genre of “serious” simulation games, which comprises Maxis’ SimCity and The Sims as well as other important games like Sid Meier’s Civilization and Paradox Development Studio’s Stellaris. These games are conceptually very ambitious and extremely complex, as they model the evolution of whole societies and civilisations, operate on very long timescales, and consist of multiple nested models that simulate histories, economies and the evolution of different species at multiple scales. One important feature and obligation of this genre is to present a coherent, legible image of the world, to give a face to the immense complexity of the model. The “user interface” elements of these kinds of games work together to tell a coherent story, while the game world, rendered in full 3D in real time, provides an immersive visual and aesthetic experience for the player. Contrary to almost any other type of software, these interfaces are more indebted to the history of scientific illustration and data visualisation than to the history of graphic design. These types of games are open-ended and not bound to one goal, and there is rarely a clear win state.

Another feature of the genre is a wealth of underlying mathematical models, each providing for the emergence of complexity and each carrying its own assumptions and biases. For example, SimCity is well known (and some would say notorious) for its rootedness in Jay Forrester’s Urban Dynamics approach to modelling urban phenomena, which means that its mathematical model delivers very specific urban conditions – and ultimately, a very specific vision of what a city is and could be.[11] One of the main questions in the seminar became how we might update this approach on two fronts: by rethinking the mathematical model, and by rethinking urban assumptions of the conceptual model.
The work of the game designer Will Wright, the main designer behind the original SimCity as well as The Sims and Spore, is considered to be at the origin of simulation games as a genre. Wright has developed a vast body of knowledge on modelling simulations, some of which he presented in his influential 2003 talk at the Game Developers Conference (GDC), titled “Dynamics for Designers”.[12] In this talk, Wright outlines a fully-fledged theory of modelling complex phenomena for interactivity, focusing on topics such as “how we can use emergence to model larger possibility spaces with simpler components”. Some of the main points are these: science is a modelling activity, and until now it has used traditional mathematics as its primary modelling method, which has limits when dealing with complex, dynamic and emergent systems. Since the advent of the computer, simulation has emerged as an alternative way of modelling. The two are very different: in Wright’s view, maths is a more linear process, built from complex equations, while simulation is a more parallel process, with simpler components interacting with one another. Wright also discusses stochastic (random probability distribution) and Monte Carlo (“brute force”) methods as examples of the simulation approach.
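Wright’s contrast can be made concrete in a few lines of code. The sketch below, ours rather than Wright’s, computes the area of a circle twice: once through the closed-form equation of traditional mathematics, and once through Monte Carlo sampling, where many simple random events together approximate the same result.

```python
import math
import random

# Traditional mathematics: one closed-form equation.
area_exact = math.pi * 0.5 ** 2  # area of a circle with radius 0.5

# Simulation: a Monte Carlo ("brute force") estimate of the same area,
# built from many simple random events instead of one complex equation.
def area_monte_carlo(samples=100_000):
    hits = sum(
        (random.random() - 0.5) ** 2 + (random.random() - 0.5) ** 2 <= 0.25
        for _ in range(samples)  # does a random dart land inside the circle?
    )
    return hits / samples        # fraction of the unit square the circle covers

print(area_exact, area_monte_carlo())  # both converge on ~0.785
```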

Wright’s work was the result of a deep interest in exploring how non-linear models are constructed and represented within the context of interactive video games, and his design approach was to invent novel game-design techniques based directly on system dynamics, a discipline that deals with the modelling of complex, unpredictable and non-linear phenomena. The field has its roots in the cybernetic theories of Norbert Wiener, but it was created and formalised in the mid-1950s by Professor Jay Forrester at MIT, and later developed by Donella H. Meadows in her seminal book Thinking in Systems.[13]
System dynamics is an approach to understanding the non-linear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.[14,15] Forrester (1918–2016) was an American computer engineer and systems scientist, credited as the “founding father” of system dynamics. He started by modelling corporate supply chains and went on to model cities, describing “the major internal forces controlling the balance of population, housing and industry within an urban area”, which he claimed could “simulate the life cycle of a city and predict the impact of proposed remedies on the system”.[16] In the book Urban Dynamics, Forrester turned the city into a formula with just 150 equations and 200 parameters.[17] The book was very controversial, as it implied extreme anti-welfare politics and, through its “objective” mathematical model, promoted neoliberal ideas of urban planning.
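A minimal stock-and-flow loop can illustrate the vocabulary. The sketch below models a single stock (population) whose inflow (births) and outflow (deaths) both feed back on the current level of the stock; the rates are invented for illustration and are not taken from Forrester’s equations.

```python
# One stock (population), two flows (births, deaths), one feedback loop:
# the outflow grows disproportionately as the stock rises, so the system
# self-regulates. All rates are illustrative, not Forrester's.
def simulate(population=1000.0, years=300,
             birth_rate=0.03, death_rate=0.02, crowding=5000.0):
    for _ in range(years):
        births = birth_rate * population                            # inflow
        deaths = death_rate * population * (population / crowding)  # outflow, crowding feedback
        population += births - deaths                               # the stock integrates net flow
    return population

print(simulate())  # levels off near 7500, where inflow equals outflow
```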
In another publication, called World Dynamics, Forrester presented “World2”, a system dynamics model of our world which was the basis of all subsequent models predicting a collapse of our socio-technological-natural system by the mid-21st century. Nine months after World Dynamics, the report The Limits to Growth was published, which used the “World3” computer model to simulate the consequences of interactions between the Earth and human systems. Commissioned by the Club of Rome, the study was first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971, and predicted societal collapse by the year 2040. Most importantly, the report put the idea of a finite planet into focus.

The main case study in the seminar was Wright’s 1990 game SimEarth, a life simulation video game in which the player controls the development of a planet. In developing SimEarth, Wright worked with the English scientist James Lovelock, who served as an advisor and whose Gaia hypothesis of planetary evolution was incorporated into the game. Continuing the systems dynamics approach developed for SimCity, SimEarth was an attempt to model a scientifically accurate approximation of the entire Earth system through the application of customised systems dynamics principles. The game modelled multiple interconnected systems and included realistic feedback between land, ocean, atmosphere, and life itself. The game’s user interface even featured a “Gaia Window”, in direct reference to the Gaia theory which states that life plays an intimate role in planetary evolution and the regulation of planetary systems.
One of the tutorial levels for SimEarth featured a playable model of Lovelock’s “Daisyworld”, a hypothesis which postulates that life itself evolves to regulate its environment, forming a feedback loop that makes it more likely for life to thrive. During the development of a life-detecting device for NASA’s Viking lander mission to Mars, Lovelock had made a profound observation: that life tends to increase the order of its surroundings, and that studying the atmospheric composition of a planet would provide sufficient evidence of life’s existence. Daisyworld is a simple planetary model designed to show the long-term effects of the coupling and interdependence between life and its environment. In its original form, it was introduced as a defence against the criticism that the Gaia theory of the Earth as a self-regulating homeostatic system would require teleological control rather than being an emergent property. Its central premise, that living organisms can have major effects on the climate system, is no longer controversial.
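Daisyworld is small enough to restate in code. The sketch below follows the structure and constants of Watson and Lovelock’s published model, though the Euler integration and the seeding floor are our own simplifications: black daisies warm their patch, white daisies cool it, and the coupled feedback holds the planet near the daisies’ optimum temperature across a range of solar luminosities.

```python
# A compact sketch of Daisyworld: two daisy species regulate planetary
# temperature through an albedo feedback loop.
SIGMA = 5.67e-8                                   # Stefan-Boltzmann constant
S = 917.0                                         # Daisyworld's solar flux (W/m^2)
ALBEDO = {"ground": 0.5, "white": 0.75, "black": 0.25}
Q = 2.06e9                                        # local heat-transfer constant
GAMMA = 0.3                                       # daisy death rate

def growth(temp_k):
    """Parabolic growth rate, peaking at 295.5 K (about 22.5 C)."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - temp_k) ** 2)

def daisyworld(luminosity, white=0.2, black=0.2, steps=1000, dt=0.05):
    for _ in range(steps):
        bare = 1.0 - white - black
        albedo = (bare * ALBEDO["ground"] + white * ALBEDO["white"]
                  + black * ALBEDO["black"])
        te4 = S * luminosity * (1.0 - albedo) / SIGMA        # planetary T^4
        t_white = (Q * (albedo - ALBEDO["white"]) + te4) ** 0.25
        t_black = (Q * (albedo - ALBEDO["black"]) + te4) ** 0.25
        white += white * (bare * growth(t_white) - GAMMA) * dt
        black += black * (bare * growth(t_black) - GAMMA) * dt
        white, black = max(white, 0.001), max(black, 0.001)  # keep a seed stock
    return te4 ** 0.25, white, black

# As the sun brightens, coverage shifts from black to white daisies,
# holding the planetary temperature near the daisies' optimum.
for lum in (0.8, 1.0, 1.2):
    temp, w, b = daisyworld(lum)
    print(f"luminosity {lum}: {temp:.1f} K, white {w:.2f}, black {b:.2f}")
```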

In SimEarth, the planet itself is alive, and the player is in charge of setting the initial conditions as well as maintaining and guiding the outcomes through the aeons. Once a civilisation emerges, the player can observe the various effects, such as the impacts of changes in atmospheric composition due to fossil fuel burning, or the temporary expansion of ice caps in the aftermath of a major nuclear war. SimEarth’s game box came with a 212-page game manual that was at once a comprehensive tutorial on how to play and an engrossing lesson in Earth sciences: ecology, geology, meteorology and environmental ethics, written in accessible language that anyone could understand.


SimEarth, and serious simulation games in general, represent a way in which games can serve a public-education function while remaining a form of popular entertainment. The genre also represents an incredible validation of claims that video games can be valuable cultural artefacts. Ian Bogost writes: “This was a radical way of thinking about video games: as non-fictions about complex systems bigger than ourselves. It changed games forever – or it could have, had players and developers not later abandoned modelling systems at all scales in favor of representing embodied, human identities.”[18]
Lessons that architectural design can learn from these games are many and varied, the most important one being that it is possible to think about big topics by employing models and systems while maintaining an ethos of exploration, play and public engagement. In this sense, one could say that a simulation game format might be a contemporary version of Humboldt’s illustration, with the added benefit of interactivity; but as we have seen, there is a more profound, crucial difference – this format goes beyond just a representation, beyond just a fiction, into worldmaking.
As a result of this research, the students in the seminar used Unreal Engine to create version one (v.1) of Planet Garden, a multi-scalar, interactive, playable model of a self-sustaining, wind- and solar-powered robotic garden, set in a desert landscape. The simulation was envisioned as a kind of reverse city builder, in which the goal is to terraform a desert landscape by deploying different kinds of energy-producing technologies until the right conditions are met for planting and the production of oxygen. The basic game loop is built on the interaction between the player and four main resources: energy, water, carbon and oxygen. In the seminar, we also created a comprehensive game manual. The aims of the project were to learn how to model dynamic systems and to explore how game workflows can be used to address urban issues.
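The basic loop can be sketched in a few lines. The following is a deliberately reduced, hypothetical version of it: the four resources follow the seminar’s model, but every rate and threshold is invented for illustration, and the Unreal Engine build is far more elaborate.

```python
# A toy version of the Planet Garden resource loop: machines harvest energy
# and water, planting consumes them, and plants exchange carbon for oxygen.
state = {"energy": 0.0, "water": 0.0, "carbon": 100.0, "oxygen": 0.0, "garden": 0.0}

def tick(state, solar_panels=1, condensers=1):
    state["energy"] += 2.0 * solar_panels        # wind/solar harvest
    state["water"] += 1.0 * condensers           # atmospheric water capture
    if state["energy"] >= 5 and state["water"] >= 3 and state["carbon"] > 0:
        state["energy"] -= 5                     # spend resources to plant a plot
        state["water"] -= 3
        state["garden"] += 1.0
    uptake = min(0.2 * state["garden"], state["carbon"])  # photosynthesis needs CO2
    state["carbon"] -= uptake
    state["oxygen"] += uptake

for _ in range(100):                             # the basic game loop
    tick(state)
print(state)  # terraforming succeeds once oxygen is high and carbon drawn down
```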
Planet Garden is projected to become a big game for the Getty exhibition: a simulation of a planetary ecosystem as well as of a city for 10 billion people. We aim to model various aspects of the planetary city, and the player will be able to operate on multiple spatial sectors and urban scales. The player can explore different ways to influence the development and growth of the city and test many scenarios, but the game will also run on its own, so that the city can exist without direct player input. The game utilises core design principles that relate to system dynamics, evolution, environmental conditions and change. A major point is the player’s input and decision-making process, which influence the outcome of the game. The game will also be able to present the conditions and consequences of this urban thought experiment, as something is always at stake for the player.
The core of the simulation-as-a-model idea is that design should have testable consequences. The premise of the project is not to construct a single, truthful, total model of an environment, but to explore ways of imaging the world through simulation and to open new avenues for holistic thinking about the interdependence of actors, scales and world systems. If the internet ushered in a new age of billions of partial, identitarian viewpoints, all aggregating into an inchoate world gestalt, is it time to rediscover a new image of the interconnected world?

References
[1] For a longer discussion on this, see O. M. Ungers, City Metaphors (Cologne: Buchhandlung Walther König, 2011). For the central place of analogies in scientific modelling, see M. Hesse, Models and Analogies in Science; and D. Hofstadter, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking (New York: Basic Books, 2013).
[2] The term “worldmaking” comes from Nelson Goodman’s book Ways of Worldmaking, and is used here to distinguish it from worldbuilding, a narrower, commercially oriented term.
[3] For a great introduction to the life and times of Alexander Von Humboldt, see A. Wulf, The Invention of Nature: Alexander von Humboldt’s New World (New York: Alfred A. Knopf, 2015).
[4] Quoted in H. G. Funkhouser, “Historical development of the graphical representation of statistical data”, Osiris 3 (1937), 269–404.
[5] T. J. Demos, Against The Anthropocene (Berlin: Sternberg Press, 2016).
[6] Ibid.
[7] Design Earth, Geostories (Barcelona: Actar, 2019); and The Planet After Geoengineering (Barcelona: Actar, 2021).
[8] L. Young, Planet City (Melbourne: Uro Publications, 2020).
[9] For an extended discussion of the simulation as a format, see D. Jovanovic, “Screen Space, Real Time”, Monumental Wastelands 01, eds. D. Lopez and H. Charbel (2022).
[10] S. Gualeni, Virtual Worlds as Philosophical Tools (Palgrave Macmillan, 2015).
[11] For an extended discussion on this, see C. Ashley, “The Ideology Hiding in SimCity’s Black Box”, Polygon (2021), https://www.polygon.com/videos/2021/4/1/22352583/simcity-hidden-politics-ideology-urban-dynamics.
[12] W. Wright, “Dynamics for Designers”, talk at the Game Developers Conference (GDC), 2003, https://www.youtube.com/watch?v=JBcfiiulw-8.
[13] D. H. Meadows, Thinking in Systems (White River Junction: Chelsea Green Publishing, 2008).
[14] Arnaud M., “World2 model, from DYNAMO to R”, Towards Data Science, 2020, https://towardsdatascience.com/world2-model-from-dynamo-to-r-2e44fdbd0975.
[15] Wikipedia, “System Dynamics”, https://en.wikipedia.org/wiki/System_dynamics.
[16] J. W. Forrester, Urban Dynamics (Pegasus Communications, 1969).
[17] K. T. Baker, “Model Metropolis”, Logic 6, 2019, https://logicmag.io/play/model-metropolis.
[18] I. Bogost, “Video Games Are Better Without Characters”, The Atlantic (2015), https://www.theatlantic.com/technology/archive/2015/03/video-games-are-better-without-characters/387556.

25/10/2020
Parts, chunks, stacks and aggregates are the bits of computational architecture today. Why do mereologies – buildings designed from part to whole – matter? All too classical, the roughness of parts seems nostalgic to a project of the digital that aims at dissolving building parts into a virtual whole. Yet if parts shrank down to computable particles and matter, and there existed a hyper-resolution of a close-to-infinite number of building parts, architecture would dissolve its boundaries and its capacity to frame social encounters. Within fluidity, and without the capacity to separate, architecture would not be an instrument of control. Ultimately, freed from matter, the virtual would transcend the real, and form would finally be dead. Therein lies the prospect of a fluid, virtual whole.
The Claustrophobia of a City that Transcends its Architecture
In the acceleration from Data to Big Data, cities have become more and more virtual. Massive databases have liquefied urban form. Virtual communication today plays freely across the material boundaries of our cities. In its most rudimentary form, virtuality resides in the digital transactions of numbers, interests and rents. Until a few years ago, financial investments in architectural form were equatable according to size and audience, e.g. as owner-occupied flats, privately rented houses or leaseholds.[1] Today, capital flows scatter freely across the city at the scale of the single luxury apartment. Beyond a certain threshold of computational access, data becomes big. By computing aggregated phone-signal patterns or geotagged posts, virtual cities can emerge from the traces of individuals. These hyper-local patterns are more representative of a city than its physical twin. Until recently, architecture staged the urban through shared physical forms: the sidewalk, the lane or the boulevard. Adjacent to cars, walkable for pedestrians or gathered as citizens, each form of being urban included an ideology of a commons, and grounded with it particular forms of encounter.

In contrast, the hyper-local urban transcends lanes and sidewalks. Detached from the architecture of the city, with no belonging left, urban speculation has withdrawn into the private sphere. Today, urban value is estimated by counting private belongings only, with claustrophobic consequences. An apartment held as a speculative investment displaces residents. The housing shortage in big cities today is not so much a problem of a lack of housing as of vacant space, accessible not to residents but to the interests held in the hyper-urban.[2] The profit from rent and the use of space itself is marginal compared to the profit that embodied urban speculation adds to the property. The possibility of mapping every single home as data not only adds interest to a home, like a pension, but literally turns a home into a pension.[3] However, this is not for its residents but for those with access to resources. Currently, computing Big Data expands and optimises stakeholders’ portfolios by identifying undervalued building assets.[4] Yet the notion of ‘undervalued’ is not an accurate representation of these assets.
Hyper-localities increase real estate’s value in terms of how inhabitants thrive in a neighbourhood through their encounters with one another and with their surrounding architecture. The residents themselves thus unknowingly produce extra value. The undervaluing of an asset is the product of its residents and, like housework, is unpaid labour. In the exchange of capital, additional revenue from a property is usually paid out as a return to the shareholders who invested in its value. Putting big-data-driven real estate into that equation would mean that such shareholders would have to pay revenues to residents. If properties create surplus value from the data generated by their residents, then a property without its residents has less worth and is indeed over-, not under-, valued.

The city has vehicles for creating public revenue: governing the width of a street’s section or the height of a building. Architecture’s role was to provide a stage for that revenue to be created. For example, the Seagram Building (Mies van der Rohe and Philip Johnson, 1958) created a “public” plaza by setting back its envelope in exchange for a little extra height. By limiting form, architecture could create space for not just one voice but many. Today, however, the city’s new parameters, hidden in the fluidity of digital traces, cannot be governed by the boundaries of architecture anymore. Already 40 years ago, when the personal computer became available, Gilles Deleuze forecast that “man is no longer man enclosed”.[5] At that time, written as a “Postscript on the Societies of Control”, the fluid modulation of space seemed a desirable proposition. By liquefying enclosures, the framework of the disciplinary societies of Foucault’s writings would disappear. In modern industrial societies, Deleuze writes, enclosures were moulds for casting distinct environments, and in these vessels individuals became the masses of mass society.[6] Inside a factory, for example, individuals were cast as workers; inside schools, as students. Man without a cast and without an enclosure seemed freed from class and struggle. The freedom of the individual was interlinked with transcendence from physical enclosures.

During the last forty years, framed by the relation between the single individual and the interior, architecture rightly aimed to dissolve the institutional forms of enclosure that represented social exclusion at their exterior. Yet in this ambition, alternative forms for the plural condition of being part of a city were not developed. Reading Deleuze further, a state without enclosures does not put an end to history either. The enclosures of control dissolve only to be replaced. Capitalism shifts to another mode of production: where industrial exchange bought raw materials and sold finished products, it now buys finished products and profits from assembling those parts. The enclosure is exchanged for codes that mark access to information. Individuals are not moulded into masses but considered as individuals: accessed as data, divided into proper parts for markets, “counted by a computer that tracks each person’s position enabling universal modulation”.[7] Forty years on, Deleuze’s postscript has become the screenplay for today’s reality.
Hyper-parts: Spatial Practices of Representations
A house is no longer just a neutral space, an enclosing interior where value is created, realised and shared. A home is the product of social labour; it is itself the object of production and, consequently, of the creation of surplus value. By shifting from enclosure to asset, the big-data-driven economy has also replaced the project behind modernism: humanism. Architecture today is post-human. As Rosi Braidotti writes, “what constitutes capital value today is the informational power of living matter itself”.[8] The human being as a whole is displaced from the centre of architecture. Only parts of it, such as its “immanent capacities to form surplus-value”, remain parts of a larger aggregation of architecture. Beyond the human, the hyper-city transcends the humane. A virtual city is freed from its institutions and constituent forms of governance. Economists such as Thomas Piketty describe in painstaking detail how data-driven financial flows undermine common processes of governance, whether urban, regional or national, in both speed and scale. Such analyses show that property transactions shelled in virtual value-creation bonds are opaque to taxation. Transcending regulatory forms of governance, one can observe the increase of inequalities on a global scale. Piketty identifies neo-proprietarian conditions today comparable to the extreme wealth accumulation of the end of the nineteenth century, and sees the economy shifting into a new state he coins “hypercapitalism”.[9] From Timothy Morton’s “hyperobjects” to hypercapitalism, the hyper- replaces the Kantian notion of transcendence. It expresses not the absorption of objects into humanism but their withdrawal from it. In contrast to transcendence, which subordinates things to man’s will, the hyper- accentuates the despair of the partial worlds of parts – in Morton’s case in a given object, in Piketty’s in a constructed ecology.
When a fully automated architecture emerged, objects oriented towards themselves, and non-human programs began to refuse the organs of the human body. Just as the proportions of a data centre are no longer walkable, the human eye can no longer look out of a plus-energy window, because the window tempers the house, not its user. These moments are hyper-parts: objects that no longer transcend into the virtual but despair in physical space. More and more, with increasing computational performance, and following the acronym O2O (online-to-offline),[10] virtual value machines articulate physical space. Hyper-parts place spatial requirements. A prominent example is Katerra, the unicorn start-up promising to take over building construction using full automation. In its first year of running factories, Katerra advertises that it will build 125,000 mid-rise units in the United States alone. If this occurred, Katerra would take around 30% of the mid-rise construction market in its local area. Yet its building platform consists of only twelve apartment types. Katerra may see this physical homogeneity as an enormous advantage, as it increases the sustainability of its projects. The choice also facilitates financial speculation, as the repetition of similar flats reduces the number of factors in the valuing of apartments and allows quicker monetary exchange, freed from many variables. Sustainability here refers not to any materiality but to the predictability of investments. Variability is still desired, but oriented towards finance, not inhabitants. Beyond the financialisation of the home, digital value machines create their own realities purely through the practice of virtual operations.

Here one encounters a new type of spatial production: the spatial practice of representations. At the beginning of what was referred to as “late capitalism”, the sociologist and philosopher Henri Lefebvre proposed three spatialities that described modes of exchange under capitalism.[11] The first mode, spatial practice, referred to a premodern condition which, by the use of analogies, interlinked objects without any form of representation. The second, representations of space, linked directly to production: the organic schemes of modernism. The third, representational spaces, expressed the conscious trade with representations, the politics of postmodernism and its interest in virtual ideas above the pure value of production. The modes are not limited to three: Lefebvre’s intention was to describe capitalism as “an indefinite multitude of spaces, each one piled upon, or perhaps contained within, the next”.[12] Lefebvre differentiated the stages in terms of their spatial abstraction. Incrementally, virtual practices transcended from real-to-real to virtual-to-real to virtual-to-virtual. Today, however, decoupled from the real, a virtual economy computes physically within spatial practices of representations. Closing the loop, the real-virtual-real – the new hyper-parts – does not subordinate the physical to a virtual representation; instead, the virtual representation itself acts in physical space.
This reverses the intention of modernism, which oriented itself towards an organic architecture by representing the organic relationships of nature in geometric thought. The organicism of today’s hypercomputation projects geometric axioms at organic resolution. What was once a representation, a geometry distant from human activity, now controls the preservation of financial predictability.
The Inequalities Between the Parts of the Virtual and the Parts of the Real
Beyond the human body, this new spatial practice of virtual parts transcends a digital project that was limited to the sensorial interaction with space. That earlier understanding of the digital project reduced human activity to organic reflexes only, depriving architecture of the possibility of higher forms of reflection, thought and criticism. Often argued through links to phenomenology and Gestalt theory, the simplification of architectural form to sensual perception has little to do with phenomenology itself. Edmund Husserl, arguably the first phenomenologist, begins his work by considering the perception of objects, not as an end but as a way to examine the modes of human thinking. In the Logical Investigations, Husserl shows that thought can build a relation to an object only after having classified it, and therefore partitioned it. By observing an object before considering its meaning, one classifies it, which means identifying it as a whole. Closer observation recursively partitions objects into further, unaffected parts, which can again be classified as different wholes.[13] Husserl places parts before both thought and meaning.

Derived from aesthetic observations, Husserl’s mereology was the basis of his ethics, and therefore extended into societal conceptions. In his later work, Husserl’s analysis became an early critique of the modern sciences.[14] For Husserl, in their effort to grasp the world objectively, the sciences had lost their role of enquiring into the meaning of life. In a double tragedy, the sciences also alienated human beings from the world. Husserl thus urged the sciences to recall that they ground their origins in the human condition, for humanism, as Husserl saw it, was ultimately trapped in distancing itself ever further from reality.
One hundred years later, Husserl’s projections resonate in “speculative realism”. Coined by Levi Bryant as “strange mereology”,[15] objects, their belongings and inclusions have become increasingly strange to us. The term “strange” stages the surprise that one is left with speculative access only. Ten years on, however, speculation is not distant anymore. That which transcends does not lurk only in the physical realm. Hyper-parts figurate ordinary scales today, namely housing, and thereby transcend human(e) occupation.
Virtual and physical space appear compositionally comparable: both seem to consist of the same parts, and yet they do not. If physical elements belong to a whole, then they are also part of that to which their whole belongs. In less abstract terms: if a room is part of an apartment, the room is also part of the building to which the apartment belongs. Materially bound part relationships are always transitive, hierarchically nested within each other. In virtual space, and in the mathematical models with which computers are structured today, elements can be included within several independent entities. A room can be part of an apartment, but it can also be part of a rental contract for an embassy. The room is then part of a house in the country in which the house is located; but as part of an embassy, it is at the same time part of a geographically different country on an entirely different continent. Thus, for example, Julian Assange, rather than boarding a plane, only needed to enter a door on a street in London to land in Ecuador. With just a little set theory, in the virtual space of law, one can override the theory of relativity with ease.
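The asymmetry is easy to state formally. In the sketch below – an illustration of the argument, not a quotation of any legal or BIM data model – physical parthood is computed as a transitive closure, while virtual inclusion is plain set membership, which lets one element sit inside several independent wholes at once.

```python
# Physical parthood is transitive and hierarchically nested:
physical_parts = {"building": {"apartment"}, "apartment": {"room"}}

def is_part_of(part, whole):
    """Transitive closure of the material part-whole relation."""
    directs = physical_parts.get(whole, set())
    return part in directs or any(is_part_of(part, d) for d in directs)

print(is_part_of("room", "building"))  # True: parthood chains upward

# Virtual inclusion is plain set membership: the same room can be included
# in several independent wholes at once, with no shared hierarchy between them.
virtual_wholes = {
    "building_in_london": {"room"},
    "ecuadorian_embassy_lease": {"room"},  # the same room, a second whole
}
print([w for w, parts in virtual_wholes.items() if "room" in parts])
# ['building_in_london', 'ecuadorian_embassy_lease']
```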
Parts are not equal. Physical parts belong to their physical wholes, whereas virtual parts can be included in physical parts without belonging to their wholes. Far more parts can be included in a virtual whole than can belong to a real whole. When the philosopher Timothy Morton says that “the whole is always less than the sum of its parts”,[16] he reflects the cultural awareness that reality breaks under the asymmetries between the virtual and the real. A science that sets out to imitate the world ends up constructing its own. The distance Husserl spoke of is not a relative distance between a strange object and its observer, but a mereological distance: two wholes distance each other when they consist of different parts. In its effort to reconstruct the world in ever higher resolution, modernism, and in its extension the digital project, has overlooked the fact that the relationship between the virtual and the real is not a dialogue. In a play of dialectics between thought and built environment, modernism understood design as a dialogue. Extending modern thought, the digital project has sought to fulfil the promise of performance: that a safe future could be calculated and pre-simulated in a parallel, parametric space. Parametricism, and more generally what is understood as digital architecture, stands not only for algorithms, bits and RAM but for the far more fundamental belief that in a virtual space one can rebuild reality. Yet with each new resolution at which science seeks to mimic the world, it adds more parts to it.

The Poiesis of a Virtual Whole
The asymmetry between physical and virtual parts is rooted in Western classicism. In the early classical sciences, Aristotle divided thinking into the trinity of practical action, observational theory and designing poiesis. Since this division in Aristotle’s Nicomachean Ethics, design has been a part of thought and not a part of objects. Design is thus a form of knowledge, literally something that must first be thought. Extending this contradiction to the real object: design is not even concerned with practice, with the actions of making or using, but with the metalogic of these actions, the in-between of the actions themselves – the art of dividing an object into a chain of steps by which it can be created. In this definition, design means neither anticipating activities through the properties of an object (function) nor observing those properties (materiality), but mastering the art of partitioning, structuring and organising an object in such a way that it can be manufactured, reproduced and traded.
To illustrate poiesis, Aristotle made use of architecture.[17] No other discipline exposes so greatly the poetic gap between theory, activity and making. Architecture first deals with the coordination of the construction of buildings. As the architectural historian Mario Carpo outlines in detail, revived interest in classicism and the humanistic discourse on architecture began in the Renaissance with Alberti’s treatise: a manual that defines built space, and the ideas about it, solely through words. Once thought was coded into words, the alphabet enabled the architect to distance himself physically from the building site and the built object.[18] Architecture as a discipline, then, does not start with buildings but with the first instructions written by architects in order to delegate building.
A building is then anticipated by a virtual whole that enables one to subordinate its parts. This is what we usually refer to as architecture: a set of ideas that pre-empt the buildings they comprehend. The role of the architect is to imagine a virtual whole, drawn as a diagram, sketch, structure, model or any other kind of representation that connotes the axes of symmetry and the transformations necessary to derive a sufficient number of parts from it. Architectural skill is then valued by the coherence between the virtual and the real, the whole and its parts, the intention and the executed building. Today’s discourse on architecture is the surplus of an idea. You might call it the autopoiesis of architecture – or merely a virtual reality. Discourse on architecture is a commentary on the real.

Partitioning Architectures
From the very outset, architecture distanced itself from the building, yet it also aimed to represent reality. Virtual codes were never autonomous from instruments of production. The alphabet and the technology of the printing press allowed Alberti to describe a whole ensemble distinct from a real building. Coded in writing, printing allowed for theoretically infinite copies of an original design. Over time, the matrices of letters became the moulds of the modern production lines. However, as Mario Carpo points out, the principle remained the same.[19] Any medium that incorporates and duplicates an original idea is more architecture than the built environment itself. As long as architecture belonged to a mould, innovation in architectural research could be valued in two ways: quantitatively, in its capacity to partition a building at ever higher resolution; qualitatively, in its capacity to represent a variety of contents with the same form. By this, architecture faced the dilemma of having to design a reproducible standard that could partition as many different forms as possible in order to build non-standard figurations.[20]
The dilemma of the non-standard standard mould can be found in Sebastiano Serlio’s transcription of Alberti’s codes into drawings. In the first book of his treatise, Serlio introduces a descriptive geometry that reproduces any contour and shape of a given object through a sequence of rectangles.[21] For Serlio, the skill of the architect is to simplify the given world of shapes further, until rectangles become squares. This reduction finally enables the representation of physical reality in architectural space through an additive assembly of either empty or full cubes. By building a parallel space of cubes, architecture can be partitioned into a reproducible code. In Serlio’s case, architecture could be coded through a set of proportional ratios. From that moment on, however, stairs no longer consist of steps only: they have to be built with invisible squares and cubes too.
Today, Serlio’s architectural cubes are rendered obsolete by 3D printed sand. By shrinking parts to the size of a particle of dust, any imaginable shape can be approximated by adding one kind of part only. 3D printing offers a non-standard standard, and with this, five hundred years of architectural development comes to an end.

Replicating: A Spatial Practice of Representations
3D printing dissolves existing partitionings of parts into particles and dust. A 3D printer can not only print any shape; it can also print at any place, at any time. The development of 3D printing was mainly driven by DIY hobbyists in the open-source community. One of the pioneering projects here is the RepRap project, initiated by Adrian Bowyer.[22] RepRap is short for replicating rapid prototyping machine: the idea behind it is that if you can print any kind of object, you can also print the parts of the machine itself. This breaks with the production methods of the modern age. Since the Renaissance, designers have crafted originals and built moulds from them in order to cast as many copies as possible. This also explains the economic valuation of the original, and why authorship is so vehemently protected in legal terms. Since Alberti’s renunciation of drawings in favour of the more accurate reproduction of his original idea through textual encoding, the value of an architectural work has consisted primarily in the coherence of a representation with a building: a play of virtual and real. Consequently, an original representation from which a building could be cast was valued more than its physical presentation. Architectural design was oriented towards reducing the amount of information needed to cast. This top-down compositional thinking of original and copy becomes obsolete with the idea of replication.
Since the invention of the printing press, the framework of how things are produced has not changed significantly. With a book press you can press a book, but with a book you cannot press a book. Yet with a 3D printer, you can print a printer. A 3D printer does not print copies of an original, not even in endless variations: it replicates objects. The produced objects are not duplicates, because they are not imprints of lower quality. Printed objects are replicas: objects with the same, similar or even additional characteristics to those of their replicator.

A 3D printer is a groundbreaking digital object because it manifests the foundational principle of the digital – replication – at the scale of architecture. The autonomy of the digital is based not only on the difference between 0 and 1 but on the differences in their sequencing. In the mathematics of the 1930s, the modernist project of a formal mimicry of reality collapsed with Gödel’s proof of the necessary incompleteness of all formal systems. Mathematicians then understood that perhaps far more precious knowledge could be gained if we could only learn to distance ourselves from its production. The circle of scientists around John von Neumann, who developed the basis of today’s computation, departed from one of the smallest capabilities in biology: to reproduce. Bits, as concatenations of simple building blocks with an integrated possibility of replication, made it possible, just by sequencing links, to build first logical operations, then programs, and, by connecting those programs, today’s artificial networks.[23] Artificial intelligence is artificial, but it is also alive intelligence.
To this day, it is computerisation, not computation, that is at work in architecture. By pursuing the modern project of reconstructing the world as completely as possible, the digital project computerised a projective cast[24] in high resolution, yet without transferring the fundamental principles of interlinking and replication to the dimensions of built space.

From Partitioning to Partaking
The printing press depends on a mould to duplicate objects. The original mould was far more expensive to manufacture than its copies, so the casting of objects had to bundle available resources. This required high investment to start production, leading to an increasing centralisation of resources to scale the mass fabrication of standard objects on assembly lines. By contrast, digital objects do not need a mould. Self-replication of the kind provided by 3D printing means that resources do not have to be centralised. In this way, digital production shifts to distributed manufacturing.[25]
Independent of any mould, digital objects as programs reproduce themselves seamlessly at zero marginal cost.[26] As computation progresses, a copy has less and less value. Books, music and films fill fewer and fewer shelves because owning a copy no longer has value when content is ubiquitously available online. And the internet does not copy; it links. Although not yet fully integrated into the current TCP/IP protocol,[27] the basic premise of hyperlinking is that linked data adds value.[28] Links refer to new content, further readings, and so on. With a close to infinite possibility of self-reproduction, the number of objects that can be delegated and repeated becomes meaningless. What counts then is the hyper-: the difference in kind between data, programs and, eventually, building parts. In his identification of the formal foundations of computation, the philosopher Nelson Goodman pointed out that beyond a specific performance of computation, difference, and thus value, can only be generated when a new part is added to the fusion of parts.[29] What is essential for machine intelligence is the dimensionality of its models, that is, the number of its parts. Big data refers less to the amount of data than to the number of its dimensions.[30]
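Goodman’s point can be restated in a few lines. In the sketch below – our encoding, not Goodman’s notation – individuals are sets of atomic parts and fusion is their union: fusing in an already-included part changes nothing, while a genuinely new part produces a different whole.

```python
# Individuals as sets of atomic parts; fusion as their union.
def fuse(*individuals):
    whole = frozenset()
    for individual in individuals:
        whole |= individual
    return whole

wall = frozenset({"brick_a", "brick_b"})
corner = frozenset({"brick_b", "brick_c"})
building = fuse(wall, corner)

print(len(building))                       # 3: fusion overlaps, it does not double-count
print(fuse(building, wall) == building)    # True: an included part adds no difference
print(fuse(building, frozenset({"window"})) == building)  # False: a new part, a new whole
```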

With increasing computation, architecture shifted from an aesthetic of smoothness, which celebrated the mastery of an infinite number of building parts, to one of roughness. Roughness demands to be thought (brute). The architectural historian Mario Carpo is right to frame this as nostalgic, as “digital brutalism”.[31] Like the brutalism that wanted to stimulate thought, digital roughness aims to extend spatial computability: the capability to extend thinking, and the architecture of a computational hyper-dimensionality. Automated intelligent machines can accomplish singular goals but are alien to common reasoning. Limited to a ratio of reality – a dimension, a filter, a perspective – machines obtain partial realities only. Taking these as the whole excludes those who are not yet included and that which cannot be divided: the absolute of being human(e).
A whole economy has evolved from the partial particularity of automated assets, ahead of the architectural discipline. It would be a mistake to understand the ‘sharing’ of the sharing economy as having something “in common”. On the contrary, computational “sharing” does not partition a common use but enables access to multiple, complementary value systems in parallel.

Cities now behave more and more like computers. Buildings are increasingly automated. They use fewer materials and can be built in a shorter time, at lower costs. More buildings are being built than ever before, but fewer people can afford to live in them. The current housing crisis has unveiled that buildings no longer necessarily need to house humans or objects. Smart homes can optimise material, airflow, temperature or profit, but they are blind to the trivial.

It is a mistake to compute buildings as though they were repositories or enclosures, no matter how fine-grained their resolution is. The value of a building is no longer derived only from the rent for a slot of space, but from its capacities to partake. By this, the core function of a building changes from inhabitation to participation. Buildings no longer frame and contain: they bind, blend, bond, brace, catch, chain, chunk, clamp, clasp, cleave, clench, clinch, clutch, cohere, combine, compose, connect, embrace, fasten, federate, fix, flap, fuse, glue, grip, gum, handle, hold, hook, hug, integrate, interlace, interlock, intermingle, interweave, involve, jam, join, keep, kink, lap, lock, mat, merge, mesh, mingle, overlay, palm, perplex, shingle, stick, stitch, tangle, tie, unite, weld, wield, and wring.
In daily practice, BIM models do not highlight resolution but linkages, integration and collaboration. With further computation, distributed manufacturing, automated design, smart contracts and distributed ledgers, building parts will literally compute the Internet of Things and eventually our built environment: peer-to-peer or, better, part-to-part, via the distributive relationships between their parts. And what else should the hubs of the Internet of Things be, besides buildings? Part-to-part habitats can shape values through an ecology of linkages, through a forest of participatory capacities. So, what if we can participate in the capacities of a house? What if we no longer have to place every brick, if we no longer have to delegate structures, but rather let parts follow their paths and take their own decisions, and let them participate amongst us, together, in architecture?


[1] S. Kostof, The City Assembled: The Elements of Urban Form Through History (Boston: Little, Brown and Company, 1992).
[2] J. Aspen, “Oslo – the Triumph of Zombie Urbanism”, in E. Robbins, ed., Shaping the City (New York: Routledge, 2004).
[3] The World Bank actively promotes housing as an investment opportunity for pension funds, see: The World Bank Group, Housing finance: Investment opportunities for pension funds (Washington: The World Bank Group, 2018).
[4] G. M. Asaftei, S. Doshi, J. Means, S. Aditya, “Getting ahead of the market: How big data is transforming real estate”, McKinsey and Company (2018).
[5] G. Deleuze, “Postscript on the Societies of Control”, October 59 (1992), 3–7, here 6.
[6] Ibid., 4.
[7] Ibid., 6.
[8] R. Braidotti, Posthuman Knowledge (Medford, Mass: Polity, 2019).
[9] T. Piketty, Capital and Ideology (Cambridge, Mass: Harvard University Press, 2020).
[10] A. McAfee, E. Brynjolfsson, Machine, platform, crowd: Harnessing our digital future (New York: W.W. Norton & Company, 2017).
[11] H. Lefebvre, The Production of Space (Oxford: Basil Blackwell, 1991), 33.
[12] Ibid, 8.
[13] E. Husserl, Logische Untersuchungen. Zweiter Teil: Untersuchungen zur Phänomenologie und Theorie der Erkenntnis, trans. “Logical Investigations, Part Two: Investigations into the Phenomenology and Theory of Knowledge” (Halle an der Saale: Max Niemeyer, 1901).
[14] E. Husserl, Cartesianische Meditationen und Pariser Vorträge, trans. “Cartesian Meditations and Parisian Lectures” (The Hague: Martinus Nijhoff, Husserliana edition, 1950).
[15] L. Bryant, The Democracy of Objects (Ann Arbor: University of Michigan Library, 2011).
[16] T. Morton, Being Ecological (London: Penguin Books Limited, 2018), 93.
[17] Aristotle, Nicomachean Ethics VI, 1139a5–10.
[18] M. Carpo, Architecture in the Age of Printing (Cambridge, Mass: MIT Press, 2001).
[19] M. Carpo, The Alphabet and the Algorithm (Cambridge, Mass: MIT Press, 2011).
[20] F. Migayrou, Architectures non standard (Paris: Éditions du Centre Pompidou, 2003).
[21] S. Serlio, Sebastiano Serlio on Architecture, trans. V. Hart and P. Hicks (New Haven and London: Yale University Press, 1996).
[22] R. Jones, P. Haufe, E. Sells, I. Pejman, O. Vik, C. Palmer, A. Bowyer, “RepRap – the Replicating Rapid Prototyper,” Robotica 29, 1 (2011), 177–91.
[23] A. W. Burks, Von Neumann's self-reproducing automata: Technical Report (Ann Arbor: The University of Michigan, 1969).
[24] R. Evans, The Projective Cast: Architecture and Its Three Geometries (Cambridge, Massachusetts: MIT Press, 1995).
[25] N. Gershenfeld, “How to make almost anything: The digital fabrication revolution,” Foreign Affairs, 91 (2012), 43–57.
[26] J. Rifkin. The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (New York: Palgrave Macmillan, 2014).
[27] B. Bratton, The Stack: On Software and Sovereignty (Cambridge, Massachusetts: MIT Press, 2016).
[28] J. Lanier, Who Owns the Future? (New York: Simon and Schuster, 2013).
[29] H. S. Leonard and N. Goodman, “The Calculus of Individuals and Its Uses”, The Journal of Symbolic Logic 5, 2 (1940), 45–55.
[30] P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (London: Penguin Books, 2015).
[31] M. Carpo, “Rise of the Machines,” Artforum, 3 (2020).