We live in a period of unprecedented proliferation of constructed, internally coherent virtual worlds, which emerge everywhere, from politics to video games. Our mediascape is brimming with rich, immersive worlds ready to be enjoyed and experienced, or decoded and exploited. One effect of this phenomenon is that we are now asking fundamental questions, such as what “consensus reality” is and how to engage with it. Another effect is that there is a need for a special kind of expertise that can deal with designing and organising these worlds – and that is where architects possibly have a unique advantage. Architectural thinking, as a special case of visual, analogy-based synthetic reasoning, is well positioned to become a crucial expertise, able to operate on multiple scales and in multiple contexts in order to map, analyse and organise a virtual world, while at the same time being able to introduce new systems, rules and forms to it.
A special case of this approach is something we can name architectural worldmaking, which refers broadly to practices of architectural design that wilfully and consciously produce virtual worlds, and which understand worlds as the main project of architecture. Architects have a unique perspective and could have a say in how virtual worlds are constructed and inhabited, but there is a caveat, which revolves around questions of agency, engagement and control. Worldmaking is an approach that learns both from technically advanced visual and cultural formats such as video games and from scientific ways of imaging and sensing, in order to construct new, legitimate and serious ways of seeing and modelling.
These notions are central to the research seminar called “Games and Worldmaking”, first conducted by the author at SCI-Arc in the summer of 2021, which focused on the intersection of games and architectural design and foregrounded systems thinking as an approach to design. The seminar is part of the ongoing Views of Planet City project, in development at SCI-Arc for the Pacific Standard Time exhibition, which will be organised by the Getty Institute in 2024. In the seminar, we developed the first version of Planet Garden, a planetary simulation game envisioned as both an interactive model of complex environmental conditions and a new narrative structure for architectural worldmaking.
Planet Garden is loosely based on Edward O. Wilson’s “Half-Earth” idea, a scenario in which the entire human population of the world occupies a single massive city and the rest of the planet is left to plants and animals. Half-Earth is an important and very interesting thought experiment, almost a proto-design: a prompt, an idea for a massive, planetary agglomeration of urban matter which could liberate the rest of the planet to heal and rewild.
The question behind the game was: how could we actually model something like that? How do we capture all that complexity and nuance, how do we figure out stakes and variables, and how do we come up with consequences and conclusions? The game we are designing is a means to model and host hugely complex urban systems which unravel over time, while legibly presenting an enormous amount of information, both visually and through the narrative. As a format, a simulation presents different ways of imaging the world and making sense of reality through models.
The work on game design started as a wide exploration of games and precedents within architectural design and imaging operations, as well as of abstract systems that could comprise a possible planetary model. The question of models and the modelling of systems comes to the forefront and stands in contrast to existing architectural strategies of representation.
Mythologising, Representing and Modelling
Among the main influences of this project were the drawings made by Alexander von Humboldt, whose work is still crucial for anyone with an interest in representing and modelling phenomena at the intersection of art and science. If, in the classical sense, art makes the world sensible while science makes it intelligible, these images are a great example of combining these forms of knowledge. Scientific illustrations, Humboldt once wrote, should “speak to the senses without fatiguing the mind”. His famous illustration of Chimborazo volcano in Ecuador shows plant species living at different elevations, and this approach is one of the very early examples of data visualisation, with the intent of making the world sensible and intelligible at the same time. These illustrations also had a strong pedagogical intent, a quality we wanted to preserve, and which can serve almost as a test of legibility.
The project started with the question of imaging a world of nature in the Anthropocene epoch. One of the reasons it is difficult to really comprehend a complex system such as the climate crisis is that it is difficult to model it – which also means to visually represent it in a legible way that humans can understand. This crisis of representation is a well-known problem in the literature on the Anthropocene, most clearly articulated in the book Against the Anthropocene by T. J. Demos.
We do not yet have the tools and formats of visualising that can fully and legibly describe such a complex thing, and this is, in a way, also a failure of architectural imagination. The standard architectural toolkit is limited and also very dated – it is designed to describe and model objects, not “hyperobjects”. One of the project’s main interests was inventing new modalities of description and modelling of complex systems through the interactive software format, and this is one of the ideas behind the Planet Garden project.
Contemporary representational strategies for the Anthropocene broadly fall into two categories, those of mythologising or objectivising. The first approach can be observed in the work of photographers such as Edward Burtynsky and Louis Helbig, where the subject matter of environmental disaster becomes almost a new form of the aesthetic sublime. The second strategy comes out of the deployment and artistic use of contemporary geospatial imaging tools. As is well understood by critics, contemporary geospatial data visualisation tools like Google Earth are embedded in a specific political and economic framework, comprising a visual system delivered and constituted by the post–Cold War and largely Western-based military-state-corporate apparatus. These tools offer an innocent-seeming picture that is in fact a “techno-scientific, militarised, ‘objective’ image”. Such an image displaces its subject and frames it within a problematic context of neutrality and distancing. Within both frameworks, the expanded spatial and temporal scales of geology and the environment exceed human and machine comprehension and thus present major challenges to representational systems.
Within this condition, the question of imaging – understood here as making sensible and intelligible the world of the Anthropocene through visual models – remains, and it is not a simple one. Within the current (broadly speaking) architectural production, this topic is mostly treated through the “design fiction” approach. For example, in the work of Design Earth, the immensity of the problem is reframed through a story-driven, narrative approach which centres on the metaphor, and where images function as story illustrations, like in a children’s book. Another approach is pursued by Liam Young, in the Planet City project, which focuses on video and animation as the main format. In this work, the imaging strategies of commercial science fiction films take the main stage and serve as anchors for the speculation, which serves a double function of designing a new world and educating a new audience. In both cases, it seems, the focus goes beyond design, as these constructed fictions stem from a wilful, speculative exaggeration of existing planetary conditions, to produce a heightened state which could trigger a new awareness. In this sense, these projects serve a very important educational purpose, as they frame the problem through the use of the established and accepted visual languages of storybooks and films.
The key to understanding how design fictions operate is precisely in their medium of production: all of these projects are made through formats (collage, storybook, graphic novel, film, animation) which depend on the logic of compositing. Within this logic, the work is made through a story-dependent arrangement of visual components. The arrangement is arbitrary as it depends only on the demands of the story and does not correspond to any other underlying condition – there is no model underneath. In comparison, a game such as, for example, SimCity is not a fiction precisely because it depends on the logic of a simulation: a testable, empirical mathematical model which governs its visual and narrative space. A simulation is fundamentally different from a fiction, and a story is not a model.
This is one of the reasons why it seems important to rethink the concept of design fiction through the new core idea of simulation. In the book Virtual Worlds as Philosophical Tools, Stefano Gualeni traces a lineage of thinking about simulations to Espen Aarseth’s 1994 text called Hyper/Text/Theory, and specifically to the idea of cybertextuality. According to this line of reasoning, simulations contain an element not found in fiction and thus need an ontological category of their own: “Simulations are somewhere between reality and fiction: they are not obliged to represent reality, but they have an empirical logic of their own, and therefore should not be called fictions.” This presents us with a fundamental insight into the use of simulations as the future of architectural design: they model internally coherent, testable worlds and go beyond mere fiction-making into worldmaking proper.
Simulations, games and systems
In the world of video games, there exists a genre of “serious” simulation games, which comprises games like Maxis’s SimCity and The Sims, as well as other important titles like Sid Meier’s Civilization and Paradox Development Studio’s Stellaris. These games are conceptually very ambitious and extremely complex, as they model the evolution of whole societies and civilisations, operate on very long timescales, and consist of multiple nested models that simulate histories, economies and evolutions of different species at multiple scales. One important feature and obligation of this genre is to present a coherent, legible image of the world, to give a face to the immense complexity of the model. The “user interface” elements of these kinds of games work together to tell a coherent story, while the game world, rendered in full 3D in real time, provides an immersive visual and aesthetic experience for the player. Unlike almost any other type of software, these interfaces are more indebted to the history of scientific illustration and data visualisation than they are to the history of graphic design. These types of games are open-ended and not bound to one goal, and there is rarely a clear win state.
Another feature of the genre is a wealth of underlying mathematical models, each providing for the emergence of complexity and each carrying its own assumptions and biases. For example, SimCity is well known (and some would say notorious) for its rootedness in Jay Forrester’s Urban Dynamics approach to modelling urban phenomena, which means that its mathematical model delivers very specific urban conditions – and ultimately, a very specific vision of what a city is and could be. One of the main questions in the seminar became how we might update this approach on two fronts: by rethinking the mathematical model, and by rethinking the urban assumptions of the conceptual model.
The work of the game designer Will Wright, the main designer behind the original SimCity as well as The Sims and Spore, is considered to be at the origin of simulation games as a genre. Wright has developed a vast body of knowledge on modelling simulations, some of which he presented in his influential 2003 talk at the Game Developers Conference (GDC), titled “Dynamics for Designers”. In this talk, Wright outlines a fully-fledged theory of modelling complex phenomena for interactivity, focusing on topics such as “how we can use emergence to model larger possibility spaces with simpler components”. Among his main points: science is a modelling activity, and until now it has used traditional mathematics as its primary modelling method, which has limits when dealing with complex, dynamic and emergent systems. Since the advent of the computer, simulation has emerged as an alternative way of modelling. The two are very different: in Wright’s view, maths is a more linear process built from complex equations, while simulation is a more parallel process in which simpler components interact with one another. Wright also discusses stochastic (random probability distribution) and Monte Carlo (“brute force”) methods as examples of the simulation approach.
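Wright’s contrast between closed-form mathematics and simulation can be illustrated with a toy example (not taken from the talk): estimating π not from an equation but by the Monte Carlo “brute force” method he names, in which many simple random trials are checked against a geometric rule.

```python
import random

def estimate_pi(samples: int = 100_000) -> float:
    """Monte Carlo ('brute force') estimate of pi: scatter random points
    over the unit square and count those landing in the quarter circle."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples
```

With 100,000 samples the estimate typically lands within about 0.01 of π; accuracy is bought with more trials rather than a harder equation, which is the gist of Wright’s parallel, component-level view of modelling.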
Wright’s work was a result of a deep interest in exploring how non-linear models are constructed and represented within the context of interactive video games, and his design approach was to invent novel game design techniques based directly on System Dynamics, a discipline that deals with the modelling of complex, unpredictable and non-linear phenomena. The field has its roots in the cybernetic theories of Norbert Wiener, but it was established and formalised in the mid-1950s by Professor Jay Forrester at MIT, and later developed by Donella H. Meadows in her seminal book Thinking in Systems.
System dynamics is an approach to understanding the non-linear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays. Forrester (1918–2016) was an American computer engineer and systems scientist, credited as the “founding father” of system dynamics. He started by modelling corporate supply chains and went on to model cities by describing “the major internal forces controlling the balance of population, housing and industry within an urban area”, which he claimed could “simulate the life cycle of a city and predict the impact of proposed remedies on the system”. In the book Urban Dynamics, Forrester turned the city into a formula with just 150 equations and 200 parameters. The book was very controversial, as it implied extreme anti-welfare politics and, through its “objective” mathematical model, promoted neoliberal ideas of urban planning.
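In code, a system dynamics model reduces to stocks updated by flows, with feedback loops coupling them. The sketch below pairs population with housing in the spirit of Urban Dynamics; the coefficients and the crowding rule are illustrative placeholders, not Forrester’s actual equations.

```python
def simulate(steps: int = 50, dt: float = 1.0):
    """Minimal system-dynamics loop: stocks are updated by flows, and the
    flows are governed by feedback from the stocks themselves."""
    population, housing = 1000.0, 400.0        # stocks
    history = []
    for _ in range(steps):
        crowding = population / housing        # feedback variable
        in_migration = 0.05 * population * max(0.0, 1.5 - crowding)
        out_migration = 0.03 * population
        construction = 0.02 * housing * crowding
        population += dt * (in_migration - out_migration)  # stock += net flow
        housing += dt * construction
        history.append((population, housing))
    return history
```

Run for fifty steps, the crowding feedback first shrinks the population while housing catches up, and migration only resumes later – a small instance of the delayed, counter-intuitive behaviour Forrester’s models were built to expose.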
In another publication, called World Dynamics, Forrester presented “World2”, a system dynamics model of our world which was the basis of all subsequent models predicting a collapse of our socio-technological-natural system by the mid-21st century. Nine months after World Dynamics, a report called Limits to Growth was published, which used the “World3” computer model to simulate the consequences of interactions between the Earth and human systems. Commissioned by the Club of Rome, the study was first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971, and it predicted societal collapse by the year 2040. Most importantly, the report put the idea of a finite planet into focus.
The main case study in the seminar was Wright’s 1990 game SimEarth, a life simulation video game in which the player controls the development of a planet. In developing SimEarth, Wright worked with the English scientist James Lovelock, who served as an advisor and whose Gaia hypothesis of planetary evolution was incorporated into the game. Continuing the system dynamics approach developed for SimCity, SimEarth was an attempt to model a scientifically accurate approximation of the entire Earth system through the application of customised system dynamics principles. The game modelled multiple interconnected systems and included realistic feedback between land, ocean, atmosphere, and life itself. The game’s user interface even featured a “Gaia Window”, in direct reference to Gaia theory, which states that life plays an intimate role in planetary evolution and the regulation of planetary systems.
One of SimEarth’s tutorial levels featured a playable model of Lovelock’s “Daisyworld” hypothesis, which postulates that life itself evolves to regulate its environment, forming a feedback loop and making it more likely for life to thrive. During the development of a life-detecting device for NASA’s Viking lander mission to Mars, Lovelock made a profound observation: life tends to increase the order of its surroundings, so studying the atmospheric composition of a planet would provide sufficient evidence of life’s existence. Daisyworld is a simple planetary model designed to show the long-term effects of coupling and interdependence between life and its environment. In its original form, it was introduced as a defence against the criticism that Lovelock’s Gaia theory of the Earth as a self-regulating homeostatic system required teleological control rather than being an emergent property. The central premise, that living organisms can have major effects on the climate system, is no longer controversial.
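Daisyworld is compact enough to sketch in code. The snippet below condenses the Watson–Lovelock formulation into a single loop: two daisy species with different albedos grow according to their local temperature, and their coverage feeds back on the planetary energy balance. The parameter values follow the published model, but the explicit time-stepping and the small population floor are simplifications.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 917.0         # solar flux constant from the original model (W/m^2)
Q = 2.06e9        # heat-redistribution coefficient (K^4)
GAMMA = 0.3       # daisy death rate

def beta(T):
    """Parabolic growth rate, peaking at 295.5 K (22.5 C)."""
    return max(0.0, 1 - 0.003265 * (295.5 - T) ** 2)

def daisyworld(luminosity, steps=2000, dt=0.05):
    """Evolve black and white daisy cover until the albedo feedback
    settles the planetary temperature; returns (T_planet, white, black)."""
    a_w = a_b = 0.01                      # initial daisy area fractions
    for _ in range(steps):
        x = max(0.0, 1 - a_w - a_b)       # bare ground fraction
        albedo = 0.5 * x + 0.75 * a_w + 0.25 * a_b
        Te4 = S * luminosity * (1 - albedo) / SIGMA    # planetary T^4
        T_w = (Q * (albedo - 0.75) + Te4) ** 0.25      # local temperatures
        T_b = (Q * (albedo - 0.25) + Te4) ** 0.25
        a_w += dt * a_w * (x * beta(T_w) - GAMMA)      # growth vs death
        a_b += dt * a_b * (x * beta(T_b) - GAMMA)
        a_w, a_b = max(a_w, 0.001), max(a_b, 0.001)    # keep a seed population
    return Te4 ** 0.25, a_w, a_b
```

At present-day luminosity the daisies settle the surface temperature near their growth optimum; sweeping the luminosity parameter reproduces the model’s famous plateau, in which life holds the planet habitable across a wide range of solar output.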
In SimEarth, the planet itself is alive, and the player is in charge of setting the initial conditions as well as maintaining and guiding the outcomes through the aeons. Once a civilisation emerges, the player can observe the various effects, such as the impacts of changes in atmospheric composition due to fossil fuel burning, or the temporary expansion of ice caps in the aftermath of a major nuclear war. SimEarth’s game box came with a 212-page game manual that was at once a comprehensive tutorial on how to play and an engrossing lesson in Earth sciences: ecology, geology, meteorology and environmental ethics, written in accessible language that anyone could understand.
SimEarth, and serious simulation games in general, represent a way in which games can serve a public-education function while remaining a form of popular entertainment. The genre also represents an incredible validation of the claim that video games can be valuable cultural artefacts. Ian Bogost writes: “This was a radical way of thinking about video games: as non-fictions about complex systems bigger than ourselves. It changed games forever – or it could have, had players and developers not later abandoned modelling systems at all scales in favor of representing embodied, human identities.”
Lessons that architectural design can learn from these games are many and varied, the most important one being that it is possible to think about big topics by employing models and systems while maintaining an ethos of exploration, play and public engagement. In this sense, one could say that a simulation game format might be a contemporary version of Humboldt’s illustration, with the added benefit of interactivity; but as we have seen, there is a more profound, crucial difference – this format goes beyond just a representation, beyond just a fiction, into worldmaking.
As a result of this research, the students in the seminar utilised Unreal Engine to create version one (v.1) of Planet Garden, a multi-scalar, interactive, playable model of a self-sustaining, wind and solar-powered robotic garden, set in a desert landscape. The simulation was envisioned as a kind of reverse city builder, where the goal of the game is to terraform a desert landscape by deploying different kinds of energy-producing technologies until the right conditions are met for planting and the production of oxygen. The basic game loop is based on the interaction between the player and four main resources: energy, water, carbon, and oxygen. In the seminar, we also created a comprehensive game manual. The aims of the project were to learn how to model dynamic systems and to explore how game workflows can be used as ways to address urban issues.
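The basic game loop described above can be sketched as a tick-based resource system. The structure, rates and thresholds below are hypothetical placeholders chosen for illustration, not the values or mechanics used in Planet Garden v.1.

```python
from dataclasses import dataclass

@dataclass
class GardenState:
    energy: float = 0.0
    water: float = 0.0
    carbon: float = 100.0
    oxygen: float = 0.0
    turbines: int = 0   # wind power
    panels: int = 0     # solar power
    plants: int = 0

def tick(s: GardenState) -> GardenState:
    """One step of a hypothetical Planet Garden loop: machines produce
    energy, energy harvests water, and plants convert carbon to oxygen."""
    s.energy += 2.0 * s.turbines + 1.5 * s.panels
    spent = min(s.energy, 5.0)             # condensers draw available energy...
    s.energy -= spent
    s.water += 0.8 * spent                 # ...to harvest atmospheric water
    active = min(s.plants, int(s.water))   # each plant consumes water
    s.water -= active
    fixed = min(0.5 * active, s.carbon)    # photosynthesis: carbon -> oxygen
    s.carbon -= fixed
    s.oxygen += fixed
    return s
```

The reverse-city-builder framing falls out of the loop’s ordering: nothing can be planted until the energy and water stocks have been built up, and oxygen only accumulates once vegetation is active.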
Planet Garden is projected to become a large game for the Getty exhibition: a simulation of a planetary ecosystem as well as a city for 10 billion people. We aim to model various aspects of the planetary city, and the player will be able to operate on multiple spatial sectors and urban scales. The player can explore different ways to influence the development and growth of the city and test many scenarios, but the game will also run on its own, so that the city can exist without direct player input. Our game utilises core design principles that relate to system dynamics, evolution, environmental conditions, and change. A major point is the player’s input and decision-making process, which influence the outcome of the game. The game will also be able to present conditions and consequences of this urban thought experiment, as something is always at stake for the player.
The core of the simulation-as-a-model idea is that design should have testable consequences. The premise of the project is not to construct a single truthful, total model of an environment but to explore ways of imaging the world through simulation and to open new avenues for holistic thinking about the interdependence of actors, scales and world systems. If the internet ushered in a new age of billions of partial, identitarian viewpoints, all aggregating into an inchoate world gestalt, is it time to rediscover a new image of the interconnected world?
 For a longer discussion on this, see O. M. Ungers, City Metaphors (Cologne: Buchhandlung Walther König, 2011). For the central place of analogies in scientific modelling, see M. Hesse, Models and Analogies in Science, and also Douglas Hofstadter, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking (Basic Books, 2013).
 The term “worldmaking” comes from Nelson Goodman’s book Ways of Worldmaking, and is used here to be distinguished from worldbuilding, a narrower, commercially oriented term.
 For a great introduction to the life and times of Alexander von Humboldt, see A. Wulf, The Invention of Nature: Alexander von Humboldt’s New World (New York: Alfred A. Knopf, 2015).
 Quoted in H. G. Funkhouser, “Historical development of the graphical representation of statistical data”, Osiris 3 (1937), 269–404.
 T. J. Demos, Against The Anthropocene (Berlin: Sternberg Press, 2016).
 Design Earth, Geostories (Barcelona: Actar, 2019); Design Earth, The Planet After Geoengineering (Barcelona: Actar, 2021).
 L. Young, Planet City (Melbourne: Uro Publications, 2020).
 For an extended discussion of the simulation as a format, see D. Jovanovic, “Screen Space, Real Time”, Monumental Wastelands 01, eds. D. Lopez and H. Charbel (2022).
 S. Gualeni, Virtual Worlds as Philosophical Tools (Palgrave Macmillan, 2015).
 For an extended discussion on this, see C. Ashley, “The Ideology Hiding in SimCity’s Black Box”, Polygon, 2021, https://www.polygon.com/videos/2021/4/1/22352583/simcity-hidden-politics-ideology-urban-dynamics.
 W. Wright, “Dynamics for Designers”, GDC 2003 talk, https://www.youtube.com/watch?v=JBcfiiulw-8.
 D. H. Meadows, Thinking in Systems, (White River Junction: Chelsea Green Publishing, 2008).
 M. Arnaud, “World2 model, from DYNAMO to R”, Towards Data Science, 2020, https://towardsdatascience.com/world2-model-from-dynamo-to-r-2e44fdbd0975.
 Wikipedia, “System Dynamics”, https://en.wikipedia.org/wiki/System_dynamics.
 J. W. Forrester, Urban Dynamics (Pegasus Communications, 1969).
 K. T. Baker, “Model Metropolis”, Logic 6, 2019, https://logicmag.io/play/model-metropolis.
 I. Bogost, “Video Games Are Better Without Characters”, The Atlantic (2015), https://www.theatlantic.com/technology/archive/2015/03/video-games-are-better-without-characters/387556.
Welcome to Prospectives!
“Half an acre of oblong pond – one that is open as a mirror,
in it, the light of sky and shadow of clouds co-linger.
One asks: how can it be so clear?
For there is a source of living water.”
– Zhu Xi (1130–1200 AD), GUAN SHU YOU GAN (“Two Thoughts from Reading Books at Living Water Pavilion”: PART I)
“Good rain knows the season, when spring is here.
It sneaks into the night wind, moistening things fine and silently.”
– Du Fu (712–770 AD), “Delighting in Rain on a Spring Night”
大學之道，在明明德，在親民，在止於至善。 … 物格而後知至；知至而後意誠；意誠而後心正；心正而後身修；身修而後家齊；家齊而後國治；國治而後天下平。自天子以至於庶人，壹是皆以修身為本。
“The way of great learning consists in manifesting one’s bright virtue, consists in loving the people, consists in stopping in perfect goodness. … When things are investigated, knowledge is extended. When knowledge is extended, the will becomes sincere. When the will is sincere, the mind is correct. When the mind is correct, the self is cultivated. When the self is cultivated, the clan is harmonised. When the clan is harmonised, the country is well governed. When the country is well governed, there will be peace throughout the land. From the king down to the common people, all must regard the cultivation of the self as the most essential thing.”
– The Great Learning, The Book of Rites (770–476/403 BC) (Translated by A. Charles Muller, July 4, 1992)
With this trilogy of excerpts, I sincerely welcome you to another issue of Prospectives: a literary platform that is free and open to all. As a lecturer of History and Theory at the B-pro, I am grateful to say that I have the best of teachers – the consolidation of thousands of years of world history and theory – and I hope that Prospectives’ readers can and will also learn from the best. With The Bartlett’s efforts in promoting equality, diversity and inclusivity (EDI), we always encourage students to embed their own cultural ontology in their study; interculturality and interdisciplinarity are sources of novelty in research, and they add to the efforts in spawning shared cultural expressions and mutual respect through reciprocal understanding.
Searching through my own culture, the three excerpts above – respectively from the 12th century AD, the 8th century AD, and the 8th century BC – are chosen because of their timelessness. On the other hand, matters of open-sourcing, education, co-learning and self-cultivation are as timely as ever; traditional institutions are simultaneously challenged and complemented by new ways of learning.
The first excerpt is a metaphorical poem of Chinese landscapes (借景喻理), taking an open pond as an analogy for a clear mind, able to reflect as clearly as a mirror. How can the mind be clear? “For there is a source of living water” – which speaks to me of open sourcing.
At the same time, the clearest mirror of all is history (以史為鏡):
“Taking people as a mirror, you can understand the pros and cons; taking history as a mirror, you can know the ups and downs.”
– (Emperor Taizong of Tang, 598–649 AD)
In more recent history, when Martin Heidegger was interviewed for Der Spiegel in 1966, he said that “academic ‘freedom’ was only too often a negative one: freedom from the effort to surrender oneself to what a scientific study demands in terms of reflection and meditation.” To reverse engineer this, then, a positive freedom demands reflection and meditation. Coming from a philosopher who is famous for his reflections and meditations on a hammer and its relationship to “being”, his thinking testifies that “when things are investigated, knowledge is extended”. What is the value of extending knowledge? Sincerity, correct minds, cultivated self, harmony in governance, and peace: “From the king down to the common people, all must regard the cultivation of the self as the most essential thing.” In other words, investigate things so that we may know how to be in this world. Such is the urgency in our epoch of climate change, which demands collective reflections and meditations – or co-learning.
Lastly, what determines good education? Good education is like fine rain in springtime: it comes at the right season; not early, not late – it teaches according to each individual’s aptitude and tempo (因材施教). It washes and enriches, quiet and non-clamorous – it teaches by example, beyond the verbal (身教重於言教). It is fine and gentle, it cultivates the environment, day and night – so that knowledge and virtues may immerse the ears and imbue the eyes (耳濡目染).
Issue 03: Climate F(r)ictions
Following those reflections on rain, ponds, and water, perhaps there is no better segue to the discussion of Climate F(r)iction – a polysemy of climate friction and fictions (Cli-Fi). According to a journal article published in 2003 by B. Levrard and J. Laskar, “[d]elayed responses in the [ice/water] mass redistribution may introduce a secular term in the obliquity evolution, a phenomenon called ‘climate friction’”. Although this piece of research was investigating the Earth’s major glacial episodes, which took place on a geologic timescale, it nevertheless warns us that the consequences of our actions may lead to immediate effects on a planetary scale, and of a magnitude beyond the imagination of any Cli-Fi.
Curated by our very own Déborah López and Hadin Charbel at the B-pro, “Climate F(r)iction” is an issue that looks at the intersection of ecologies, technologies, and ideologies. López and Charbel, who are architects and founders of the Pareid studio, lead Research Cluster 1 “Monumental Wastelands” at the B-pro, which focuses on cli-migration and autonomous ecologies, using methods of “decoding” and “recoding” through Cli-Fi.
In the production of this issue, an exceptional panel of guests were invited to participate in an open-seminar and roundtable on 27 April 2022 at the Bartlett B-pro. The work and methodologies which they have used to scrutinise, communicate, and respond to our techno-climatic future(s) were incredibly diverse, and yet, their combined contributions reminded me, above all, of a line spoken by Rufus Scrimgeour: “These are dark times, there is no denying. Our world has perhaps faced no greater threat than it does today.” These words may have been spoken in a work of fiction and in an entirely different context, but despite this, the sentiment should not be taken lightly.
I have here tried to curb my own tendency to assemble hopelessly long lists of acknowledgments – Prospectives is blessed to have been indulged by numerous supporters – but as those who have contributed to Prospectives and the B-pro continue to serve relentlessly, please do refer to the acknowledgements in Issue 02.
Nevertheless, I must give thanks once again to those who have strived and delivered within the timeline, especially our authors, curators, advisory board members, copyeditor and proof-reader Dan Wheeler, web-developer Arjun Harrison-Mann, our research assistants, and all the professional services staff. Most important of all, our internal senior advisors – Professor Mario Carpo, Professor Frédéric Migayrou, Roberto Bottazzi, Andrew Porter, Gilles Retsin, and Professor Bob Sheil – without whom Prospectives would not have been possible. Last but not least, our Managing Editor Mollie Claypool, who has made the ground fertile for the germination and growth of ideas.
Prospectives has been generously supported by our subscribers and readers, as well as the Architecture Projects Fund (The Bartlett School of Architecture, University College London), which enables authors and readers to publish and access knowledge free of charge. With this, I shall leave you to enjoy the third issue of Prospectives: Climate F(r)ictions.
Welcome to Prospectives Issue 02
It’s been a great pleasure to be part of Prospectives – a journal that is dedicated to all researchers and designers, students and scholars, established or in their early careers. It aims to act as a hotbed, a sandbox, a platform that is “from architects, by architects, to architects” in its broadest sense – be it architects of buildings, software, or future(s) (or the Matrix!). It is for all who are invested in interdisciplinary and intercultural exchanges, information and idea seeding.
According to Oxford Languages, the term “Prospective” emerged in the late 16th century, with a meaning of “looking forward, foresighting”, or “characterised by looking to the future”. The journal’s title puts the anticipatory nature of Prospective(s) into plural form; we believe “design” is the maximising of options or, as Claude Shannon put it, “surprises”, in a system; and the realisation of design is the collapse / negotiation / collaboration of all such possibilities into our physical reality. When the word “prospect” is translated into other languages, like my mother tongue Chinese, it adds yet another layer of meaning. The first result that Google turned up was “奔頭兒” (rushing-heads), an expression much used by local dialects in the North-East of China to describe the hard work needed to secure a promising future. Different languages and cultures map the vibrancy of Prospectives, and also of architecture and world-building. One is simultaneously enabled and constrained by the language which structures our thinking, be it architectural, mathematical or natural languages; this is why collaboration, or a collaborative intelligence, is our biggest prospect. The greatest innovations are the ones characterised by inclusivity, not exclusivity.
Within such a context, what is the role of a journal? To ensure standards in research? To network scholars in the field? To communicate progress with the larger public? We have seen an increasing number of open-source journals that are revolutionising the peer review system; not replacing it, but diversifying what can be meant by peer-to-peer (p2p). At Prospectives, we are invested in democratisation, especially in helping independent authors and designers reach a larger audience, and making literature available and accessible to all through participation and digitalisation. The future of journals (and architecture) is certainly one that can synthesise copyrights and “copylefts”. As Prof. Mario Carpo suggests, while the marginal costs of printing (be it 2D or 3D) decrease, our capacities in mass customisation increase, and the same applies to information production. With the rise of the Omniverse, Metaverse, and MetaNets, it becomes increasingly apparent that the answer is not in the technologies themselves, but in the way the social and the economic are re-structured, driven by participatory innovation. It will take the invisible (or visible) hands of the many to steer us towards the prospectives we desire.
Issue 02: The Algorithmic Form
“Algorithmic” as the adjective, “form” as the subject – connecting fundamental questions in computation to architecture. The second issue of Prospectives is driven by the provocations of the essay “Computational Tendencies”, written in 2020 by Alessandro Bava – who is also the guest curator of this issue. He problematised evolutionary thinking in architecture – the linear and unidirectional development from simplicity to complexity, from causation to correlation, from small to big data – and questioned the prospects of algorithms and forms within social and cultural urgencies. In the search for answers that are likely to fall between established fields, Alessandro invited six architects to engage in conversation with great figures from the fields of art, architecture and computation. Some of these conversations are carried out through interviews and roundtables, others through research, literature and case studies, forming dialogues between past and present. Together with this, an open call was established to crowd-source intelligence and outsource imagination. These critical and retrospective pieces map a speculative timeline of events around “algorithmic forms”, from the Italian Renaissance through the beginning of modernism up to today.
Prospectives Issue 02 encompasses 14 contributions. Prof. Mario Carpo starts our journey with an analogy of the German language, where grammar is “an artificial shortcut” to fluency, not its entirety. The same logic may apply to “Shape Grammar” in architecture, or the Common Data Environments of BIM, or the big databases of Artificial Intelligence (AI). Just as he exquisitely formed a connection between the invention of book-printing and 3D-printing to predict a future of mass customisation, in this piece Mario shows us a comparative history between citationists of the Renaissance and post-modern (PoMo) architecture. The former was invested in reviving classical antiquity “piece-by-piece”, while the latter took its cues from “reference, allusion, collage and cut-and-paste”. We are also indulged with the distinguished curator Hans Ulrich Obrist’s interview with Getulio Alviani – an important figure in the international Optical-kinetic art movement throughout the 20th century. Alviani spoke of being motivated by the work of Leonardo Da Vinci; his geometric exploration arising from the “curiosity of seeing”; the tectonics between material and structure, craft and design, and finally, the immersivity of movement with the “discovery of light”. This precious and poetic piece teleports us to the Italian art scene through Alviani’s encounters, provoking us to reflect on our journey from simplicity to complexity.
The five pieces that follow are the outcome of the B-pro Open Seminar at the Bartlett School of Architecture on 8th December, 2021. Five guests – Roberto Bottazzi (The Bartlett), Francesca Gagliardi and Federico Rossi (Fondamenta), Philippe Morel (ENSA Paris-Malaquais & The Bartlett), Marco Vanucci (Open Systems), and myself (Provides Ng, The Bartlett) – were invited to contemplate and discuss the work of Luigi Moretti, Isa Genzken, Manfred Mohr, and Leonardo and Laura Mosso – important figures who showed us new forms of aesthetics through the exploration of novel technological, geometrical and mathematical tools. The roundtable that followed included discussions on, but not limited to, Building Information Modelling (BIM), AI, blockchain, robotics, extended reality (XR) and other distributive technologies that, undeniably, should be brought to the table for their symbiosis and socioeconomic implications, positive or negative.
Lastly, the richness of this issue is further complemented by five selected open call pieces, with topics ranging from architectural authorship and algorithmic representations to digital anthropology, computational empiricism, and the liberation of creativity through codification.
Prospectives hopes to uncover the urgency around issues of computation and automation within the built environment, but also the communities and initiatives that are involved in such developments; from the Bartlett School of Architecture, UCL, reaching out to wider society across disciplinary and territorial borders.
First and foremost, I owe thanks to Prof. Frédéric Migayrou, who is Chair of the School, director and founder of the B-pro – five exciting programmes led by an international and interdisciplinary team of faculty members, which have shown the field diverse paths to architecture and education, a shelter for all who strive for “prospects”. And to Prof. Mario Carpo, a historian, a critic, a theorist, who has liberated my thinking and shown us a form of architecture that is so much more than design; a form of architect that is so much more than a builder; a form of speculation that is so much more than fiction; a form of prospect that is so much more than futuring. Mario and Frédéric were my supervisors, patiently guiding me through a marvellous history of Architecture & Digital Theory; a history that has become a rock in my heart – even though the prospects of the future are not always clear, history has prevented me from confusing and losing myself, and urged me to write and research with honesty, and I hope this journal can do the same for its readers. And of course, Mollie Claypool, a dedicated advocate, a female theorist, my role model. A strong figure with a soft heart, she will always fight and speak up for, in her words, “a labour of love and perseverance”, spearheading participatory and collaborative practices in automation, design and research, and the launch of this very journal. Also Roberto Bottazzi and Gilles Retsin, programme directors of Urban Design (UD) and Architecture Design (AD) in B-pro, who, together with Mollie, have given me so much opportunity, trust, advice and support, facilitating a free platform of architectural expression and a warm hub of design innovation. Prof. Bob Sheil and Andrew Porter, who have relentlessly endorsed and formalised the development of Prospectives and all other initiatives within the School of Architecture, facilitating a welcoming hotbed for creativity, self-initiation and self-organisation.
I am thankful to all those who are my colleagues, but also my mentors, including Alessandro Bava, who has curated this issue with much sincerity and commitment, bringing an amazing line-up of guests and design provocations to the table; Déborah López Lobat, Hadin Charbel, Manuel Jimenez, Emmanouil Zaroukas, Clara Jaschke, Mark Garcia, Jordi Vivaldi Piera and Albert Brenchat-Aguilar, with whom I’ve had some of the most engaging and interesting disciplinary discussions and who have never hesitated to reach out a helping hand; Daniel Koehler, Valentina Soana, and all Prospectives advisory board members. Above all, Alberto Fernandez Gonzalez and David Doria; my strongest backers, my faithful ear, my collaborative hands, my motivation and my exemplars, it is my honour and blessing to be amongst such fellowship and companionship. Needless to say, we would be nothing without our communication and administration teams, the invisible heroes who have supported the running of the school, especially Drew Pessoa, Tom Mole, Ruth Evison, Gen Williams, Srijana Gurung, Abi Luter, Dragana Krsic, Sarah Barry, Jessica Buckmire, Julia Samuels, and Crystal Tung. Last but not least, Rebecca Sainsot and Dan Wheeler, who assisted the publication and copy editing of this issue with such dedication, and those who have submitted and contributed to our open call. I am grateful to all schools of architecture, like the Bartlett, that have enabled and facilitated projects such as Prospectives, opportunities for early-career and independent scholars, and a place for aspiring talents to meet and grow.
Welcome to Prospectives.
Prospectives is an open-access online journal dedicated to the promotion of innovative historical, theoretical and design research around architectural computation, automation and fabrication technologies published by B–Pro at The Bartlett School of Architecture, UCL. It brings the most exciting, cutting-edge exploration and research in this area onto a global stage. It also aims to generate cross-industry and cross-disciplinary dialogue, exchange and debate about the future of computational architectural design and theoretical research, linking academic research with practice and industry.
Featuring emerging talent and established scholars, as well as making all content free to read online, with very low and accessible prices for purchasing issues, Prospectives aims to unravel the traditional hierarchies and boundaries of architectural publishing. The Bartlett supports a rich stream of theoretical and applied research in computational design, theory and fabrication. We are proud to be leading this initiative via an innovative, flexible and agile website. Computation has changed the way we practice, and the theoretical constructs we use – as well as the way we publish.
Prospectives has been designed to be a part-automated, part-human, multiplicitous platform. You may come across things when using it that do not feel, well, quite human. You may not realise at first that you are looking at something produced by automation. And because every issue is unique yet sits within a generative framework, you may see the automation behind Prospectives do things that humans may not do.
Furthermore, how you engage with Prospectives is largely left up to the reader. You can read our guest-curated issue, use the tags to generate your own unique issue – an ‘issue within an issue’ – or read individual articles. You can also suggest new tags to be adopted by articles. We hope this provokes new ways of thinking about the role that participation, digitisation and automation can play in architectural publishing. Prospectives is a work-in-progress, and its launch is the first step towards fulfilling a vision for new kinds of publishing platforms for architecture that play with, and provoke, the discourse on computation and automation in architectural design and theory research.
Issue 01: Mereologies
“Mereologies”, or the plural form of being ‘partly’, drives the explorations bundled in the first issue of Prospectives, guest curated by Daniel Koehler, Assistant Professor at University of Texas at Austin, previously a Teaching Fellow at The Bartlett School of Architecture from 2016 to 2019.
Today, architects can design directly with the plurality of parts that a building is made of due to increased computational power. What are the opportunities when built space is computed part-to-part? Partly philosophy, computation, sociology and ecology, and partly architecture, each text – or “mereology” – contributes a particular insight on part relations, linking mereology to peer-to-peer approaches in computation, cultural thought, and built space. First substantiated in his PhD at the University of Innsbruck, published in 2016 as The Mereological City: A Reading of the Works of Ludwig Hilberseimer (transcript), Daniel’s work on mereology and part-hood – as a nuanced interplay and blurring between theory and design – has been pivotal in preparing the ground for an emerging generation of architects interested in pursuing a new ethical and social project for the digital in architecture. The collection of writings curated here includes postgraduate architecture and urban design students (both his own, and others), architecture theorists, designers, philosophers, computer scientists and sociologists. The interdisciplinary nature of this issue demonstrates how mereology as a subject area can further broaden the field of architecture’s boundaries. It also serves as a means of encapsulating a contemporary cultural moment by embedding that expanding field in core disciplinary concerns.
The contributions were informed by research and discussions in the Bartlett Prospectives (B-Pro) at The Bartlett School of Architecture, UCL London, from 2016 to 2019, culminating in an Open Seminar on mereologies, which took place on 24 April 2019 as part of the Prospectives Lecture Series in B-Pro. Contributors to this issue include: Jordi Vivaldi, Daniel Koehler, Giorgio Lando, Herman Hertzberger, Anna Galika, Hao Chen Huang, Sheghaf Abo Saleh, David Rozas, Anthony Alvidrez, Shivang Bansal and Ziming He.
Prospectives has been a work-in-progress for almost 10 years. It was the dream of Professor Frédéric Migayrou (Chair of School and Director of B–Pro at The Bartlett School of Architecture) when he arrived at The Bartlett in 2011; I became involved in the project when I joined the School a year later. It has been a labour of love and perseverance since. It is due to the fervent and ardent support of Frédéric, Professor Bob Sheil (Director of School), and Andrew Porter (Deputy Director of B–Pro) that this project received funding in 2018 to formalise the development of Prospectives. To the B–Pro Programme Directors Professor Mario Carpo, Professor Marcos Cruz, Roberto Bottazzi, Gilles Retsin and Manuel Jimenez: I am thankful for your guidance, advice and friendship, which have been paramount to this project. Colleagues such as Barbara Penner, Yeoryia Manolopoulou, Barbara Campbell-Lange, Matthew Butcher, Jane Rendell, Claire McAndrew, Clara Jaschke and Sara Shafei have all given me an ear (or a talking to!) at various stages when this project most needed it.
Finally, it is important to say that schools of architecture like the Bartlett have cross-departmental and cross-faculty teams who are often the ones who prepare the ground for projects such as Prospectives to be possible. The research, expertise and support of Laura Cherry, Ruth Evison, Therese Johns, Professor Penelope Haralambidou, Manpreet Dhesi, Professor Laura Allen, Andy O’Reilly, Gill Peacock, Sian Lunt and Emer Girling has been vital – thank you.
The design research presented here aims to develop a design methodology that can compute an architecture that participates within the new digital economy. As technology advances, the world needs to adapt quickly to each new advancement. Since the turn of the last century, technology has integrated itself within our everyday lives and deeply impacted the way in which we live. This relationship has been defined by T. M. Tsai et al. as “Online to Offline”, or “O2O” for short. O2O means defining virtually while executing physically, as platform-based companies like Uber, Airbnb and Groupon do. O2O allows for impact or disruption of the physical world to be made within the digital world. This has significantly affected economies around the world.
Paul Mason outlined in Postcapitalism: A Guide to Our Future (2015) that developments in technology and the rise of the internet have created a decline in capitalism, which is being replaced by a new socio-economic system called “Postcapitalism”. As Mason describes, the “technologies we’ve created are not compatible with capitalism […] once capitalism can no longer adapt to technological change”. Traditional capitalism is being replaced by the digital economy, changing the way products are produced, sold and purchased. There is a new type of good which can be bought or sold: the digital product. Digital products can be copied, downloaded and moved an infinite number of times. Mason states that it is almost impossible to produce a digital product through a capitalist economy due to the nature of the digital product. An example he uses is a program or piece of software that can be changed over time and copied at little to no cost. The original producer of the product cannot recoup their costs as one can with a physical good, leading traditional manufacturers to lose income from digital products. With the increase in digital products, the economy must adapt.
In The Second Digital Turn (2017) Mario Carpo describes this phenomenon, stating that digital technologies are creating a new economy where production and transactions are done entirely algorithmically, and as a result are no longer time-consuming, labour-intensive or costly. This leads to an economy which is constantly changing and adapting to the current status of its context. Carpo describes the benefits of the digital economy as follows: “[…] it would appear that digital tools may help us to recreate some degree of the organic, spontaneous adaptivity that allowed traditional societies to function, albeit messily by our standards, before the rise of modern task specialisation.”
It is useful to look at the work of Kurt Gödel and his theorems of mathematical logic, which are the basis for computational logic. His first theorem concerns “axioms” – statements that are assumed to be true without proof. The theorem states that if the axioms do not contradict each other and are “listable”, then some statements are true but cannot be proved. This means that any system based on mathematical statements, or axioms, cannot prove everything unless additional axioms are added to the list. From this Gödel derives his second theorem: a consistent system of axioms cannot prove its own consistency. To relate this to programming, axioms can be seen as similar to code, yet not everything can be proven from a single system of code.
Alan Turing’s work on computable numbers builds on these two theorems of Gödel’s. Turing designed a rigorous notion of effective computability based on the “Turing Machine”. The Turing Machine was to process any given information based on a set of rules, or a programme the machine follows, provided by the user for a specified intention. The machine is fed an infinitely long tape, divided into squares, which contains a sequence of information. The machine would “scan” a symbol, “read” the given rules, “write” an output symbol, and then move to the next symbol. As Turing described, the “read” process refers back to the rule set provided: the machine looks through the rules, finds the scanned symbol, then follows the instructions for that symbol. The machine then writes a new symbol and moves to a new location, repeating the process over and over until the ruleset tells it to halt the procedure and deliver an output. Turing’s theories laid the foundation for the idea of a programmable machine able to interpret given information based on a given programme.
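The scan/read/write/move cycle described above can be sketched in a few lines of code. This is a minimal, illustrative sketch: the sparse-tape representation, the rule-table format and the example programme (a unary incrementer) are assumptions for demonstration, not part of the original text.

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Run until the ruleset says 'halt'. `rules` maps (state, symbol) to
    (new_symbol, move, new_state); move is -1 (left) or +1 (right)."""
    tape = dict(enumerate(tape))           # sparse tape: square index -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")       # "scan" the current square ("_" = blank)
        new_symbol, move, state = rules[(state, symbol)]  # "read" the rule table
        tape[head] = new_symbol            # "write" the output symbol
        head += move                       # move to the next square
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip("_")

# Example programme: append one "1" to a unary number
# (move right past the existing 1s, then write a 1 and halt).
rules = {
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("1", +1, "halt"),
}
print(run_turing_machine(rules, "111"))    # "1111"
```

The rule table is the “programme” in Turing’s sense: the machine itself only ever scans, looks up, writes and moves.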
When applying computational thinking to architecture, it becomes evident that a problem based in the physical requires a type of physical computation. By examining the work of John von Neumann in comparison with that of Lionel Sharples Penrose, the difference between a physical computational machine and traditional automata computation can be explored. In his essay “Von Neumann’s Self-Reproducing Automata” (1969), Arthur W. Burks describes von Neumann’s idea of automata – the way in which computers think and the logic by which they process data. Von Neumann developed simple computer automata that functioned on the simple switches “and”, “or” and “not”, in order to explore how automata could be created that are similar to natural automata, like cells and a cellular nervous system, making the process highly organic and giving it the ability to compute using physical elements and physical data. Von Neumann theorised a kinetic computational machine that would contain more elements than the standard automata, functioning in a simulated environment. As Burks describes, the elements are “floating on the surface, […] moving back and forth in random motion, after the manner of molecules of a gas.” As Burks states, von Neumann utilised this for “the control, organisational, programming, and logical aspects of both man-made automata […] and natural systems.”
However, this poses issues of control, as the set of rules is simple but incomplete. To address this, von Neumann experimented with the idea of cellular automata. Within cellular automata he constructed a series of grids that act as a framework for events to take place, with each cell in one of a finite list of states. Each cell’s state has a relation to its neighbours: as states change in each cell, this affects the states of each cell’s neighbours. This form of automata constructs itself entirely on a gridded and highly strict logical system.
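The gridded, neighbour-dependent logic described above can be sketched with a one-dimensional cellular automaton. Note the rule used here (Wolfram’s elementary Rule 110) is an illustrative substitute, not von Neumann’s own 29-state rule; the wrapping grid is also an assumption.

```python
def step(cells, rule=110):
    """Update every cell from its (left, self, right) neighbourhood, wrapping
    at the edges. `cells` is a list of 0/1 states; `rule` is a Wolfram rule
    number whose bits encode the next state for each of the 8 neighbourhoods."""
    n = len(cells)
    out = []
    for i in range(n):
        # encode the 3-cell neighbourhood as a number 0..7
        pattern = cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]
        out.append((rule >> pattern) & 1)  # look the pattern up in the rule bits
    return out

row = [0] * 10 + [1] + [0] * 10            # a single live cell as the seed
for _ in range(5):
    row = step(row)
print(sum(row))                            # the pattern grows from the seed
```

Each cell’s next state depends only on its neighbours’ current states, which is exactly the strict, local, gridded logic the text attributes to von Neumann’s cellular automata.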
Von Neumann’s concept for kinetic computation was modelled on experiments done by Lionel Sharples Penrose in 1957. Penrose experimented with the intention of understanding how DNA and cells self-replicate. He built physical machines that connected using hooks, slots and notches. Once connected, the machines would act as a single entity, moving together, forming more connections and creating a larger whole. Penrose experimented with multiple designs for these machines. He began by creating a single shape from wood, with notches at both ends and an angled base, allowing the object to rock on either side. He placed these objects along a rail, and by moving the rail forwards and backwards the objects interacted and, at certain moments, connected. He designed another object with two identical hooks facing in opposite directions on a hinge. As one object moved into another, the hook would move up and interlock with a notch in the other element. This also allowed the objects to be separated. If three of these objects were joined, and a fourth interlocked at the end, the objects would split into two equal parts. This enabled Penrose to create a machine which would self-assemble, then, when it grew too large, divide, replicating the behaviour of cellular mitosis. These early physical computing machines would operate entirely on kinetic behaviour, encoding behaviours within the design of the machine itself, transmitting data physically.
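Penrose’s split-at-four behaviour can be paraphrased as a toy simulation: free units hook onto a chain, and any chain reaching four units divides into two equal halves, like mitosis. The numbers and data structures here are illustrative assumptions; the actual hooks, notches and rocking geometry are abstracted away.

```python
def simulate(free_units, seed_chain=2, split_at=4):
    """Feed single units into a population of chains; return chain lengths.
    A chain that reaches `split_at` units splits into two equal halves,
    replicating the two-unit 'parent' shape."""
    chains = [seed_chain]                 # start with one two-unit parent
    for _ in range(free_units):
        chains[0] += 1                    # a free unit hooks onto a chain
        if chains[0] >= split_at:         # four joined units divide in two
            half = chains.pop(0) // 2
            chains = [half, half] + chains
    return chains

print(simulate(6))                        # [2, 2, 2, 2]
```

Every two absorbed units yield one new two-unit chain, so the population of “organisms” grows by replication rather than by unbounded accretion, which is the point of Penrose’s experiment.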
Experimenting with Penrose: Physical Computation
The images included here show design research that takes Penrose’s objects into a physics engine and tests them at a larger scale. By modifying the elements to work within multiple dimensions, certain patterns and groupings can be achieved which were not accessible to Penrose. Small changes to an element, as well as to other elements in the field, affect how they connect and form different types of clusters.
In Figure X, there is a spiralling hook. Within the simulations the element can grow in size, occupying more area. It is also given a positive or negative rotation. The size of the growth represents larger architectural elements, and thus takes more of the given space within the field. This leads to a higher density of elements clustering. The rotation of the spin provides control over which particular elements will hook together. Positive and positive rotations will hook, as will negative and negative ones, but opposite spins will repel each other as they spin.
Through testing different scenarios, formations begin to emerge, continuously adapting as each object moves. At a larger scale, how the elements will interact with each other can be planned for spatially. In larger simulations certain groupings can be combined to create larger formations of elements connected through strings of hooked elements. This experimentation leads towards a new form of architecture referred to as “codividual architecture”: a computable architectural space created through the interaction and continuous adaptation of spatial elements. The computation of space occurs when individual spaces fuse together, becoming one new space indistinguishable from the original parts. This process continues, allowing a codividual architecture of constant change and adaptability.
Codividual spaces can be further supported by utilising machine learning, which computes parts at the moment they fuse with other parts: the connections between spaces, the ways spaces change, and how parts act as a single element once fused together. This leads to almost scaleless spatial types of infinite variation. Architectural elements move in a given field and, through encoded functions, connect, move, change and fuse. In contrast to what von Neumann was proposing, where the elements move randomly like gaseous molecules, these elements move and join based on an encoded set of rules.
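One way to model fusion in which joined spaces become “one new space indistinguishable from the original parts” is a disjoint-set (union-find) structure: once two spaces fuse, every query on either of them answers for the single merged part. This is an illustrative data structure chosen by the editor, not the machine-learning method the project actually uses.

```python
class Spaces:
    """Disjoint-set model of codividual fusion: each space starts as its
    own part; fused spaces answer as one indistinguishable whole."""

    def __init__(self, n):
        self.parent = list(range(n))       # each space is initially its own part

    def find(self, i):
        while self.parent[i] != i:         # follow links to the fused whole
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def fuse(self, a, b):
        """Fuse two spaces: afterwards both report the same part."""
        self.parent[self.find(a)] = self.find(b)

s = Spaces(4)
s.fuse(0, 1)                               # two spaces become one
s.fuse(1, 2)                               # the fused part absorbs a third
print(s.find(0) == s.find(2))              # True: one indistinguishable part
print(s.find(0) == s.find(3))              # False: space 3 remains separate
```

Because `find` always resolves to the current merged part, the process can continue indefinitely, matching the text’s claim of constant change and adaptability.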
Within this type of system, which merges principles of von Neumann’s automata with codividuality, traditional automata and state machines can be radically rethought by giving architectural elements the capacity for decision-making through machine learning. The elements follow a set of given instructions but also have additional knowledge allowing them to assess the environment in which they are placed. Early experiments, shown here in images of the thesis project COMATA, consisted of orthogonal elements that varied in scale, creating larger programmatic spaces designed to overlap and interlock as the elements moved. The design allowed the elements to create a higher density of clustering when they interlocked, in comparison to a linear, end-to-end connection.
This approach offers a design methodology which takes into consideration not only the internal programme, structure and navigation of elements, but also the environmental factors of where they are placed. Scale is undefined and unbounded: each part can be added to create new parts, with each new part created as the scale grows. Systems adapt to the contexts in which they are placed, creating a continuous changing of space and allowing for an understanding of the digital economics of space in real time.
 T. M. Tsai, P. C. Yang, W. N. Wang, “Pilot Study toward Realizing Social Effect in O2O Commerce Services,” eds. Jatowt A. et al., Social Informatics, 8238 (2013).
 P. Mason, Postcapitalism: A Guide to Our Future (Penguin Books, 2016), xiii.
 Ibid., 163.
 M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, Massachusetts: MIT Press, 2017), 154.
 P. Millican, Hilbert, Gödel, and Turing [Online] (2019), http://www.philocomp.net/computing/hilbert.htm, last accessed May 2 2019.
 A. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, s2-42 (1937), 231-232.
 A. W. Burks, Von Neumann's Self-Reproducing Automata, Technical Report (Ann Arbor: The University of Michigan, 1969), 1.
 A. W. Burks, Essay on Cellular Automata, Technical Report (Urbana: The University of Illinois Press, 1970), 5.
 A. W. Burks, Essay on Cellular Automata, Technical Report (Urbana: The University of Illinois Press, 1970), 7-8.
 L. S. Penrose, “Self-Reproducing Machines,” Scientific American, 200 (1959), 105-114.
Parts, chunks, stacks and aggregates are the bits of computational architecture today. Why do mereologies – or buildings designed from part-to-whole – matter? All too classical, the roughness of parts seems nostalgic for a project of the digital that aims at dissolving building parts towards a virtual whole. Yet if parts shrink down to computable particles and matter, at a hyper-resolution of close to an infinite number of building parts, architecture would dissolve its boundaries and its capacity to frame social encounters. Within fluidity, and without the capacity to separate, architecture would not be an instrument of control. Ultimately, freed from matter, the virtual would transcend the real and form, finally, would be dead. Therein is the prospect of a fluid, virtual whole.
The Claustrophobia of a City that Transcends its Architecture
In the acceleration from Data to Big Data, cities have become more and more virtual. Massive databases have liquefied urban form. Virtual communication today plays freely across the material boundaries of our cities. In its most rudimentary form, virtuality lies within the digital transactions of numbers, interests and rents. Until a few years ago, financial investments in architectural form were equatable according to size and audience, e.g. as owner-occupied flats, as privately rented houses or as leaseholds. Today capital flows scatter freely across the city at the scale of the single luxury apartment. Beyond a certain threshold of computational access, data becomes big. By computing aggregated phone-signal patterns or geotagged posts, virtual cities can emerge from the traces of individuals. These hyperlocal patterns are more representative of a city than its physical twin. Until recently, architecture staged the urban through shared physical forms: the sidewalk, lane or boulevard. Adjacent to cars, walkable for pedestrians or together as citizens, each form of being urban included an ideology of a commons, and grounded with it particular forms of encounter.
In contrast, a hyper-local urban transcends lanes and sidewalks. Detached from the architecture of the city, with no belonging left, urban speculation has withdrawn into the private sphere. Today, urban value is estimated by counting private belongings only, with claustrophobic consequences. An apartment that is speculatively invested in displaces residents. The housing shortage in big cities today is not so much a problem of a lack of housing as of vacant space, accessible not to residents but to the interests they hold in the hyper-urban. The profit from rent and use of space itself is marginal compared to the profit that embodied urban speculation adds to the property. The possibility of mapping every single home as data not only adds interest to a home, like a pension, but literally turns a home into a pension – not for its residents, however, but for those with access to resources. Currently, computing Big Data expands and optimises stakeholders’ portfolios by identifying undervalued building assets. However, the notion of ‘undervalued’ is not an accurate representation of assets.
Hyper-localities increase real estate’s value in terms of how their inhabitants thrive in a neighbourhood through their encounters with one another and their surrounding architecture. The residents themselves then unknowingly produce extra value. The undervaluing of an asset is the product of its residents, and like housework, is unpaid labour. In terms of the exchange of capital, additional revenue from a property is usually paid out as a return to the shareholders who invested in its value. Putting big data-driven real estate into that equation would then mean that they would have to pay revenues to their residents. If properties create surplus value from the data generated by their residents, then property without its residents has less worth and is indeed over-, but not under-, valued.
The city has vehicles for creating public revenue, such as governing the width of a street’s section or the height of a building. Architecture’s role was to provide a stage for that revenue to be created. For example, the Seagram Building (Mies van der Rohe and Philip Johnson, 1958) created a “public” plaza by setting back its envelope in exchange for a little extra height. By limiting form, architecture could create space for not only one voice but many voices. Today, however, the city’s new parameters, hidden in the fluidity of digital traces, cannot be governed by the boundaries of architecture anymore. As outlined forty years ago, when the personal computer became available, Gilles Deleuze forecast that “Man is not anymore man enclosed”. At the time, written as a “Postscript on the Societies of Control”, the fluid modulation of space seemed a desirable proposition. By liquefying enclosures, the framework of the disciplinary societies of Foucault’s writings would disappear. In modern industrial societies, Deleuze writes, enclosures were moulds for casting distinct environments, and in these vessels individuals became the masses of mass society. For example, inside a factory individuals were cast as workers, inside schools as students. Man without a cast and without an enclosure seemed freed from class and struggle. The freedom of the individual was interlinked with transcendence from physical enclosures.
During the last forty years, framed by the relation between the single individual and the interior, architecture rightly aimed to dissolve the institutional forms of enclosure that represented social exclusion at their exterior. Yet in this ambition it did not develop alternative forms for the plural condition of being part of a city. Reading Deleuze further, a state without enclosures does not put an end to history either. The enclosures of control dissolve only to be replaced. Capitalism shifts to another mode of production: where industrial exchange bought raw materials and sold finished products, it now buys finished products and profits from assembling their parts. The enclosure is exchanged for codes that mark access to information. Individuals are no longer moulded into masses but considered as individuals: accessed as data, divided into proper parts for markets, “counted by a computer that tracks each person’s position enabling universal modulation.” Forty years on, Deleuze’s postscript has become the screenplay for today’s reality.
Hyper-Parts: Spatial Practices of Representations
A house is no longer just a neutral space, an enclosing interior where value is created, realised and shared. A home is the product of social labour; it is itself an object of production and, consequently, of the creation of surplus value. By shifting from enclosure to asset, the big data-driven economy has also replaced the project behind modernism: humanism. Architecture today is post-human. As Rosi Braidotti writes, “what constitutes capital value today is the informational power of living matter itself”. The human being as a whole is displaced from the centre of architecture; only parts of it, such as its “immanent capacities to form surplus-value”, take part in the larger aggregation of architecture. Beyond the human, the Hyper-city transcends the humane. A virtual city is freed from its institutions and constituent forms of governance. Economists such as Thomas Piketty describe in painstaking detail how data-driven financial flows undermine common processes of governance, whether urban, regional or national, in both speed and scale. Their analyses show that property transactions shelled in virtual value-creation bonds are opaque to taxation. As regulatory forms of governance are transcended, one can observe the increase of inequalities on a global scale. Comparing it to the extreme wealth accumulation at the end of the nineteenth century, Piketty identifies similar neo-proprietarian conditions today, seeing the economy shifting into a new state he terms “hypercapitalism”. From Timothy Morton’s “hyper-objects” to hypercapitalism, hyper replaces the Kantian notion of transcendence. It expresses not the absorption of objects into humanism but their withdrawal from it. In contrast to transcendence, which subordinates things to man’s will, the hyper accentuates the despair of the partial worlds of parts: in Morton’s case in a given object, in Piketty’s in a constructed ecology.
With the emergence of a fully automated architecture, objects orient towards themselves, and non-human programs begin to refuse the organs of the human body. Just as the proportions of a data centre are no longer walkable, the human eye can no longer look out of a plus-energy window, because the window tempers the house, not its user. These are hyper-parts: objects that no longer transcend into the virtual but despair in physical space. More and more, with increasing computational performance and following the acronym O2O (from online to offline), virtual value machines articulate physical space. Hyper-parts place spatial requirements. A prominent example is Katerra, the unicorn start-up promising to take over building construction through full automation. In its first year of running factories, Katerra advertised that it would build 125,000 mid-rise units in the United States alone. If this occurred, Katerra would take around 30% of the mid-rise construction market in the company’s local area. Yet its building platform consists of only twelve apartment types. Katerra may see this physical homogeneity as an enormous advantage, as it increases the sustainability of its projects. The choice also facilitates financial speculation: the repetition of similar flats reduces the number of factors in the valuing of apartments and allows quicker monetary exchange, freed from many variables. Sustainability here refers not to any materiality but to the predictability of investments. Variability is still desired, but oriented towards finance, not inhabitants. Beyond the financialisation of the home, digital value machines create their own realities purely through the practice of virtual operations.
Here one encounters a new type of spatial production: the spatial practice of representations. At the beginning of what was referred to as “late capitalism”, the sociologist and philosopher Henri Lefebvre proposed three spatialities that described modes of exchange under capitalism. The first mode, spatial practice, referred to a premodern condition which, through analogies, interlinked objects without any form of representation. The second, representations of space, linked directly to production: the organic schemes of modernism. The third, representational spaces, expressed the conscious trade with representations: the politics of postmodernism and its interest in virtual ideas above the pure value of production. Though not limited to three only, Lefebvre’s intention was to describe capitalism as “an indefinite multitude of spaces, each one piled upon, or perhaps contained within, the next”. Lefebvre differentiated the stages by their degree of spatial abstraction: incrementally, practices transcended from real-to-real to virtual-to-real to virtual-to-virtual. Today, however, decoupled from the real, a virtual economy computes physically within spatial practices of representations. Closing the loop, the real-virtual-real, the new hyper-parts, do not subordinate the physical to a virtual representation; the virtual representation itself acts in physical space.
This reverses the intention of modernism, oriented towards an organic architecture that represented the organic relationships of nature in geometric thought. The organicism of today’s hypercomputation projects geometric axioms at organic resolution. What was once a representation, a geometry distant from human activity, now controls the preservation of financial predictability.
The Inequalities Between the Parts of the Virtual and the Parts of the Real
Beyond the human body, this new spatial practice of virtual parts transcends the digital project that was limited to a sensorial interaction with space. That earlier understanding of the digital project reduced human activity to organic reflexes alone, depriving architecture of the possibility of higher forms of reflection, thought and criticism. Often argued through links to phenomenology and Gestalt theory, the simplification of architectural form to sensual perception in fact has little to do with phenomenology itself. Edmund Husserl, arguably the first phenomenologist, begins his work by considering the perception of objects not as an end but as a means to examine the modes of human thinking. In the Logical Investigations, Husserl shows that thought can build a relation to an object only after having classified it, and therefore partitioned it. By observing an object before considering its meaning, one classifies it, which means identifying it as a whole. Closer observation recursively partitions objects into further parts, which can again be classified as different wholes. Husserl thus places parts before both thought and meaning.
Derived from aesthetic observations, Husserl’s mereology was the basis of his ethics and therefore extended into societal conceptions. In his later work, Husserl’s analysis became an early critique of the modern sciences. For Husserl, in their effort to grasp the world objectively, the sciences had lost their role of enquiring into the meaning of life. In a double tragedy, the sciences also alienated human beings from the world. Husserl thus urged the sciences to recall that their origins are grounded in the human condition, for in his view humanism was ultimately trapped in distancing itself ever further from reality.
One hundred years later, Husserl’s projections resonate in “speculative realism”. In what Levi Bryant coins “strange mereology”, objects, their belongings and inclusions are increasingly strange to us. The term “strange” stages the surprise that one is left with speculative access only. However, ten years on, speculation is no longer distant. That which transcends does not merely lurk in the physical realm. Hyper-parts figure today at ordinary scales, namely housing, and thereby transcend human(e) occupation.
Virtual and physical space appear compositionally comparable: both seem to consist of the same parts, yet they do not. If physical elements belong to a whole, they are also part of that to which their whole belongs. In less abstract terms: if a room is part of an apartment, the room is also part of the building to which the apartment belongs. Materially bound part-relationships are always transitive, hierarchically nested within each other. In virtual space, and in the mathematical models with which computers are structured today, elements can be included in several independent entities. A room can be part of an apartment, but it can also be part of a rental contract for an embassy. The room is then part of a house in the country in which the house is located; but as part of an embassy, it is at the same time part of a geographically different country, on an entirely different continent from the building that houses it. Thus Julian Assange, rather than boarding a plane, only needed to enter a door on a street in London to land in Ecuador. With just a little set theory, the virtual space of law overrides the theory of relativity with ease.
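The contrast between transitive physical parthood and overlapping virtual inclusion can be sketched in a few lines of Python. This is an illustrative model only; the names and data structures are the present author's, not from the essay:

```python
# Physical parthood: a strict tree, so part-of is transitive.
physical = {
    "room": "apartment",      # each part belongs to exactly one whole
    "apartment": "building",
    "building": "london",
}

def wholes(part, hierarchy):
    """Collect every whole a part transitively belongs to."""
    result = []
    while part in hierarchy:
        part = hierarchy[part]
        result.append(part)
    return result

# Virtual inclusion: the same room can be included in several
# independent wholes at once, with no single hierarchy.
virtual = {
    "embassy_lease": {"room"},
    "ecuador": {"embassy_lease"},  # legally, the room lands in Ecuador
    "london": {"building"},
}

print(wholes("room", physical))  # ['apartment', 'building', 'london']
# Virtually, "room" is simultaneously included in "london" (via the
# building) and in "ecuador" (via the lease): two disjoint wholes.
```

The physical hierarchy admits exactly one chain of wholes per part, while the virtual one allows any number of overlapping inclusions; that asymmetry is the point of the Assange example.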
Parts are not equal. Physical parts belong to their physical wholes, whereas virtual parts can be included in physical parts without belonging to their wholes. Far more parts can be included in a virtual whole than can belong to a real one. When the philosopher Timothy Morton says that “the whole is always less than the sum of its parts”, he reflects the cultural awareness that reality breaks along the asymmetries between the virtual and the real. A science that sets out to imitate the world ends up constructing its own. The distance Husserl spoke of is not a relative distance between a strange object and its observer but a mereological distance: two wholes distance each other because they consist of different parts. In its effort to reconstruct the world in ever higher resolution, modernism, and in its extension the digital project, overlooked the fact that the relationship between the virtual and the real is not a dialogue. In a play of dialectics between thought and built environment, modernism understood design as a dialogue. Extending modern thought, the digital project sought to fulfil the promise of performance: that a safe future could be calculated and pre-simulated in a parallel, parametric space. Parametricism, and more generally what is understood as digital architecture, stands not only for algorithms, bits and RAM but for the far more fundamental belief that in a virtual space one can rebuild reality. Yet the higher the resolution at which science mimics the world, the more parts it adds to it.
The Poiesis of a Virtual Whole
The asymmetry between physical and virtual parts is rooted in Western classicism. In the early classical sciences, Aristotle divided thinking into the trinity of practical action, observational theory and designing poiesis. Since this division in Aristotle’s Nicomachean Ethics, design has been part of thought, not part of objects. Design is thus a knowledge, literally something that must first be thought. Extended to the real object, design is not even concerned with practice, with the actions of making or using, but with the metalogic of these actions, the in-between of the actions themselves: the art of dividing an object into a chain of steps by which it can be created. In this definition, design means neither anticipating activities through the properties of an object (function) nor observing its properties (materiality), but partitioning, structuring and organising an object in such a way that it can be manufactured, reproduced and traded.
To illustrate poiesis, Aristotle made use of architecture. No other discipline exposes so great a poetic gap between theory, activity and making. Architecture deals first with the coordination of the construction of buildings. As the architectural historian Mario Carpo outlines in detail, the revived interest in classicism and the humanistic discourse on architecture began in the Renaissance with Alberti’s treatise: a manual that defines built space, and ideas about it, solely through words. Once thought was coded into words, the alphabet enabled the architect to distance himself physically from the building site and the built object. Architecture as a discipline thus begins not with buildings but with the first instructions written by architects in order to delegate the building.
A building is thus anticipated by a virtual whole that enables one to subordinate its parts. This is what we usually refer to as architecture: a set of ideas that pre-empt the buildings they comprehend. The role of the architect is to imagine a virtual whole, drawn as a diagram, sketch, structure, model or any other kind of representation, that connotes the axes of symmetry and transformation necessary to derive a sufficient number of parts from it. Architectural skill is then valued by the coherence between the virtual and the real, the whole and its parts, the intention and the executed building. Today’s discourse on architecture is the surplus of an idea. One might call it the autopoiesis of architecture, or merely a virtual reality. Discourse on architecture is a commentary on the real.
From the very outset, architecture distanced itself from the building yet also aimed to represent reality. Virtual codes were never autonomous from instruments of production. The alphabet and the technology of the printing press allowed Alberti to describe a whole ensemble distinct from any real building. Coded in writing, printing allowed theoretically infinite copies of an original design. Over time, the matrices of letters became the moulds of the modern production lines. Yet, as Mario Carpo points out, the principle remained the same: any medium that incorporates and duplicates an original idea is more architecture than the built environment itself. Bound to the mould, innovation in architectural research could be valued in two ways: quantitatively, in its capacity to partition a building at increasing resolution; qualitatively, in its capacity to represent a variety of contents with the same form. Architecture thus faced the dilemma that one would have to design a reproducible standard capable of partitioning as many different forms as possible in order to build non-standard figurations.
The dilemma of the non-standard standard mould is found in Sebastiano Serlio’s transcription of Alberti’s codes into drawings. In the first book of his treatise, Serlio introduces a descriptive geometry that reproduces any contour and shape of a given object through a sequence of rectangles. For Serlio, the skill of the architect is to simplify the given world of shapes further and further, until rectangles become squares. This reduction finally enables the representation of physical reality in architectural space through an additive assembly of either empty or full cubes. By building a parallel space of cubes, architecture can be partitioned into a reproducible code; in Serlio’s case, coded through a set of proportional ratios. From that moment on, however, stairs no longer consist of steps only: they have to be built with invisible squares and cubes too.
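Serlio's reduction of any shape to a grid of empty or full squares is, in effect, rasterisation. A toy Python sketch makes the mechanism explicit; the circle and the grid size are arbitrary choices for illustration, not anything specified by Serlio:

```python
# A toy version of Serlio's reduction: approximate an arbitrary
# contour (here a circle) with a grid of empty or full squares.
def rasterise_circle(radius, cells):
    """Return a cells x cells grid of '#' (full) and '.' (empty)."""
    grid = []
    step = 2 * radius / cells
    for j in range(cells):
        row = ""
        for i in range(cells):
            # test the centre of the square at column i, row j
            x = -radius + (i + 0.5) * step
            y = -radius + (j + 0.5) * step
            row += "#" if x * x + y * y <= radius * radius else "."
        grid.append(row)
    return grid

for line in rasterise_circle(1.0, 11):
    print(line)
```

Refining the grid approximates the contour ever more closely, which is exactly the quantitative axis of innovation described above: partitioning at increasing resolution.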
Today, Serlio’s architectural cubes are rendered obsolete by 3D printed sand. By shrinking parts to the size of a particle of dust, any imaginable shape can be approximated by adding one kind of part only. 3D printing offers a non-standard standard, and with this, five hundred years of architectural development comes to an end.
Replicating: A Spatial Practice of Representations
3D printing dissolves existing partitions into particles and dust. A 3D printer can not only print any shape but can also print at any place, at any time. The development of 3D printing was driven mainly by DIY hobbyists in the open-source community. One pioneering project is the RepRap project, initiated by Adrian Bowyer. RepRap is short for replicating rapid prototyper: the idea is that if you can print any kind of object, you can also print the parts of the machine itself. This breaks with the production methods of the modern age. Since the Renaissance, designers have crafted originals and built moulds from them in order to print as many copies as possible. This also explains the economic valuation of the original, and why authorship is so vehemently protected in legal terms. Since Alberti’s renunciation of drawings in favour of a more accurate reproduction of his original idea through textual encoding, the value of an architectural work has consisted primarily in the coherence of a representation with a building: a play of virtual and real. Consequently, an original representation that cast a building was valued more than its physical presentation, and architectural design was oriented towards reducing the amount of information needed to cast. This top-down compositional thinking of original and copy becomes obsolete with the idea of replication.
Since the invention of the printing press, the framework of how things are produced has not changed significantly. However, with a book press, you can press a book, but with a book, you can’t press a book. Yet with a 3D printer, you can print a printer. A 3D printer does not print copies of an original, not even in endless variations, but replicates objects. The produced objects are not duplicates because they are not imprints that would be of lower quality. Printed objects are replicas, objects with the same, similar, or even additional characteristics as their replicator.
A 3D printer is a groundbreaking digital object because it manifests the foundational principle of the digital – replication – at the scale of architecture. The autonomy of the digital is based not only on the difference between 0 and 1 but on the differences in their sequencing. In the mathematics of the 1930s, the modernist project of a formal mimicry of reality collapsed with Gödel’s proof of the necessary incompleteness of all formal systems. Mathematicians then understood that far more precious knowledge might be gained if one learned to distance oneself from its production. The circle of scientists around John von Neumann, who developed the basis of today’s computation, departed from one of the smallest capabilities in biology: to reproduce. Bits, as concatenations of simple building blocks with the integrated possibility of replication, made it possible, just by sequencing links, to build first logical operations, then programs, and, by connecting those programs, today’s artificial networks. Artificial intelligence is artificial, but it is also, in this sense, living intelligence.
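Von Neumann's minimal biological capability, reproduction, has a classic software counterpart in the quine: a program whose only output is its own source. The two-line Python construction below is a standard textbook example, offered here purely as an analogy for the replicating principle:

```python
# A minimal quine: the two code lines below print an exact copy
# of themselves, the textual analogue of a machine (like RepRap)
# that can produce its own parts.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the program emits the same two lines of code, ready to be run again: replication without an external mould.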
To this day it is computerisation, not computation, that is at work in architecture. Pursuing the modern project of reconstructing the world as completely as possible, the digital project computerised a projective cast in high resolution, yet without transferring the fundamental principles of interlinking and replication to the dimensions of built space.
From Partitioning to Partaking
The printing press depends on a mould to duplicate objects. The original mould was far more expensive to manufacture than its copies, so the casting of objects had to bundle available resources. This required high initial investment, leading to an increasing centralisation of resources in order to scale the mass-fabrication of standard objects on the assembly line. By contrast, digital objects need no mould. The self-replication provided by 3D printing means that resources no longer have to be centralised: digital production shifts to distributed manufacturing.
Independent of any mould, digital objects, as programs, reproduce themselves seamlessly at zero marginal cost. As computation progresses, a copy has less and less value. Books, music and films fill fewer and fewer shelves because owning a copy has no value when they are ubiquitously available online. And the internet does not copy; it links. Although not yet fully integrated into the current TCP/IP protocol, the basic premise of hyperlinking is that linked data adds value: links refer to new content, further readings, and so on. With a near-infinite possibility of self-reproduction, the number of objects that can be delegated and repeated becomes meaningless. What counts then is the hyper-: the difference in kind between data, programs and, eventually, building parts. In his identification of the formal foundations of computation, the mathematician Nelson Goodman pointed out that beyond a specific performance of computation, difference, and thus value, can only be generated when a new part is added to the fusion of parts. What is essential for machine intelligence is the dimensionality of its models, that is, the number of its parts. Big data refers less to the amount of data than to the number of its dimensions.
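Goodman's observation, that difference arises only when a genuinely new part joins the fusion, can be sketched minimally, with set union standing in for mereological fusion. This is an illustrative simplification by the present author, not Goodman's own calculus:

```python
# Goodman's point in miniature: fusing a part that is already
# included changes nothing; only a genuinely new part produces
# difference, and hence value.
def fuse(whole, part):
    """Mereological fusion, modelled here as set union."""
    return whole | part

building = frozenset({"slab", "column", "window"})

# Re-fusing an included part: the fusion is idempotent.
assert fuse(building, frozenset({"column"})) == building

# Fusing a new part changes the whole, i.e. produces difference.
extended = fuse(building, frozenset({"sensor"}))
assert extended != building
```

In this reading, "dimensionality" is simply the count of distinct parts available to the fusion: copies add nothing, new kinds of parts add everything.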
With increasing computation, architecture shifted from an aesthetic of smoothness, which celebrated the mastery of an infinite number of building parts, to one of roughness. Roughness demands to be thought (brute). The architectural historian Mario Carpo is right to frame this as nostalgic, as “digital brutalism”. Like the brutalism that wanted to stimulate thought, digital roughness aims to extend spatial computability: the capability to extend thinking, and the architecture of a computational hyper-dimensionality. Automated intelligent machines can accomplish singular goals but are alien to common reasoning. Limited to a ratio of a reality, a dimension, a filter or a perspective, machines obtain partial realities only. Taking them as whole excludes those who are not yet included and that which cannot be divided: the absolute of being human(e).
A whole economy has evolved from the partial particularity of automated assets, ahead of the architectural discipline. It would be a mistake to understand the “sharing” of the sharing economy as having something “in common”. On the contrary, computational “sharing” does not partition a common use; it enables access to multiple, complementary value systems in parallel.
Cities now behave more and more like computers. Buildings are increasingly automated: they use fewer materials and can be built in shorter time, at lower cost. More buildings are being built than ever before, yet fewer people can afford to live in them. The current housing crisis has revealed that buildings no longer necessarily need to house humans or objects. Smart homes can optimise material, airflow, temperature or profit, but they are blind to the trivial.
It is a mistake to compute buildings as though they were repositories or enclosures, however fine-grained their resolution. The value of a building no longer derives only from the rent for a slot of space but from its capacities to partake. With this, the core function of a building changes from inhabitation to participation. Buildings no longer frame and contain: they bind, blend, bond, brace, catch, chain, chunk, clamp, clasp, cleave, clench, clinch, clutch, cohere, combine, compose, connect, embrace, fasten, federate, fix, flap, fuse, glue, grip, gum, handle, hold, hook, hug, integrate, interlace, interlock, intermingle, interweave, involve, jam, join, keep, kink, lap, lock, mat, merge, mesh, mingle, overlay, palm, perplex, shingle, stick, stitch, tangle, tie, unite, weld, wield, and wring.
In daily practice, BIM models highlight not resolution but linkages, integration and collaboration. With further computation, distributed manufacturing, automated design, smart contracts and distributed ledgers, building parts will literally compute the Internet of Things and eventually our built environment: peer-to-peer, or better, part-to-part, via the distributive relationships between their parts. And what else should be the hubs of the Internet of Things besides buildings? Part-to-part habitats can shape values through an ecology of linkages, through a forest of participatory capacities. So, what if we could participate in the capacities of a house? What if we no longer had to place every brick or delegate structures, but rather let parts follow their own paths, take their own decisions, and participate amongst us, together, in architecture?
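A part-to-part habitat of the kind described here can be given a toy sketch: parts hold direct, symmetric links to other parts rather than sitting in a containing hierarchy, and a part's participatory capacity is read off its linkages. All names and the "capacity" measure are hypothetical illustrations by the present author:

```python
# A toy part-to-part habitat: parts link directly, peer-to-peer,
# with no containing whole. "Value" is read from linkages.
from collections import defaultdict

links = defaultdict(set)

def participate(a, b):
    """Link two parts symmetrically, part-to-part."""
    links[a].add(b)
    links[b].add(a)

participate("wall", "sensor")
participate("wall", "contract")
participate("sensor", "ledger")

# No hierarchy to traverse: capacity is the density of participation.
capacity = {part: len(peers) for part, peers in links.items()}
print(capacity["wall"])  # the number of parts the wall partakes with
```

The contrast with the earlier tree of rooms, apartments and buildings is deliberate: here there is no root whole, only an ecology of linkages.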
 S. Kostof, The City Assembled: The Elements of Urban Form Through History (Boston: Little, Brown and Company, 1992).
 J. Aspen, “Oslo – the triumph of zombie urbanism,” in E. Robbins, ed., Shaping the City (New York: Routledge, 2004).
 The World Bank actively promotes housing as an investment opportunity for pension funds, see: The World Bank Group, Housing finance: Investment opportunities for pension funds (Washington: The World Bank Group, 2018).
 G. M. Asaftei, S. Doshi, J. Means, S. Aditya, “Getting ahead of the market: How big data is transforming real estate”, McKinsey and Company (2018).
 G. Deleuze, “Postscript on the societies of control,” October, 59: 3–7 (1992), 6.
 Ibid, 4.
 Ibid, 6.
 R. Braidotti, Posthuman Knowledge (Medford, Mass: Polity, 2019).
 T. Piketty, Capital and Ideology (Cambridge, Mass: Harvard University Press, 2020).
 A. McAfee, E. Brynjolfsson, Machine, platform, crowd: Harnessing our digital future (New York: W.W. Norton & Company, 2017).
 H. Lefebvre, The Production of Space (Oxford: Basil Blackwell, 1991), 33.
 Ibid, 8.
 E. Husserl, Logische Untersuchungen: Zweiter Teil. Untersuchungen zur Phänomenologie und Theorie der Erkenntnis, trans. “Logical Investigations: Part Two. Investigations into the Phenomenology and Theory of Knowledge” (Halle an der Saale: Max Niemeyer, 1901).
 E. Husserl, Cartesianische Meditationen und Pariser Vortraege. trans. "Cartesian meditations and Parisian lectures" (Haag: Martinus Nijhoff, Husserliana edition, 1950).
 L. Bryant, The Democracy of Objects (Ann Arbor: University of Michigan Library, 2011).
 T. Morton, Being Ecological (London: Penguin Books Limited, 2018), 93.
 Aristotle, Nicomachean Ethics 14, 1139 a 5-10.
 M. Carpo, Architecture in the Age of Printing (Cambridge, Mass: MIT Press, 2001).
 M. Carpo, The Alphabet and the Algorithm (Cambridge, Mass: MIT Press, 2011).
 F. Migayrou, Architectures non standard (Editions du Centre Pompidou, Paris, 2003).
 S. Serlio, V. Hart, P. Hicks, Sebastiano Serlio on architecture (New Haven and London: Yale University Press, 1996).
 R. Jones, P. Haufe, E. Sells, I. Pejman, O. Vik, C. Palmer, A. Bowyer, “RepRap – the Replicating Rapid Prototyper,” Robotica 29, 1 (2011), 177–91.
 A. W. Burks, Von Neumann's self-reproducing automata: Technical Report (Ann Arbor: The University of Michigan, 1969).
 R. Evans, The Projective Cast: Architecture and Its Three Geometries (Cambridge, Massachusetts: MIT Press, 1995).
 N. Gershenfeld, “How to make almost anything: The digital fabrication revolution,” Foreign Affairs, 91 (2012), 43–57.
 J. Rifkin. The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (New York: Palgrave Macmillan, 2014).
 B. Bratton, The Stack: On Software and Sovereignty (Cambridge, Massachusetts: MIT Press, 2016).
 J. Lanier, Who Owns the Future? (New York: Simon and Schuster, 2013).
 N. Goodman, H. S. Leonard, “The calculus of individuals and its uses,” The Journal of Symbolic Logic, 5, 2 (1940), 45–55.
 P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (London: Penguin Books, 2015).
 M. Carpo, “Rise of the Machines,” Artforum, 3 (2020).
“…the rigour of the architecture is concealed beneath the cunning arrangement of the disordered violences…”
This essay investigates the potential of codividual sympoiesis as a mode of thinking that overlaps ecological concepts with economics, contemporary philosophy, and advanced research in computation and digital architecture. Extending Donna Haraway’s argument of “tentacular thinking” into architecture, it lays emphasis on a self-organising and sympoietic approach to architecture. Shifting focus from object-oriented thinking to parts, it uses mereology, the study of parthood and composition, as a methodology for understanding a building as being composed of parts.
It examines the limits of autopoiesis as a system and conceptualises a new architectural computing system that embraces spatial codividuality and sympoiesis as necessities for an adaptive and networked existence sustained through continued complex interactions among its components. It propagates codividual sympoiesis as a model for continuous discrete computation and automata, relevant in the present times of distributed and shared economies.
A notion of fusing parts is established to scale up the concept and to analyse the assemblages created through a steady sympoietic computational process, guided by mereology and the discrete model. This gives rise to new conceptions of space, with a multitude of situations offered by the system at any given instant. These sympoietic inter-relations between parts can steadily produce new relations and spatial knottings, going beyond the most limiting aspect of autopoiesis, its tendency to reproduce only similar patterns of relations.
This essay extends the conceptual idea of tentacular thinking, propagated by Donna Haraway, into architecture. Tentacular thinking, as Haraway explains, is an ecological concept and a metaphor for a nonlinear, multiple, networked existence. It elaborates on the biological idea that “we are not singular beings, but limbs in a complex, multi-species network of entwined ways of existing.” As an ecological thinker, Haraway leads this notion of tentacular thinking to the idea of poiesis, the process of growth or creation, and brings into discussion several ecological organisational concepts based on self-organisation and collective organisation, namely autopoiesis and sympoiesis. The essay propagates the notion that architecture can evolve and change within itself, be sympoietic rather than autopoietic, more connected and intertwined.
With the advent of distributed and participatory technologies, tentacularity offers a completely new formal thinking, one in which there is a shift away from the object and towards the autonomy of parts. This shift towards part-thinking raises the question of how a building can be understood not as a whole but on the basis of the inter-relationships between its composing parts. It can be understood as a mereological shift from global compositions to parthoods and fusions that trigger compositions.
A departure from the more simplified whole-oriented thinking, tentacular thinking comes about as a new perspective, as an alternative to traditional ideologies and thinking processes. In the present economic and societal context, within a decentralised, autonomous and more transparent organisational framework, stakeholders function in a form that is akin to multiple players forming a cat’s cradle, a phenomenon which could be understood as being sympoietic. With increases in direct exchange, especially with the rise of blockchain and distributed platforms such as Airbnb, Uber, etc. in architecture, such participatory concepts push for new typologies and real estate models such as co-living and co-working spaces.
Fusion of Parts: Codividuality
In considering share-abilities and cooperative interactions between parts, the notions of a fusing part and a fused part emerge, giving rise to a multitude of spatial possibilities. Fusing parts fuse together to form a fused part which, in turn, behaves as another fusing part, performing further fusions with other fusing parts to form ever larger fused parts. The overlaps and the various assemblages of these parts gain relevance here, and this is what codividuality is all about.
As Haraway explains, it begins to matter “what relations relate relations.” The codividual comes about as a spatial condition that offers cooperative co-living, co-working and co-existing conditions. In the mereological sense, codividuality is about how fusing parts combine to form a fused part which, in turn, can combine into a larger fused part, and so on. Conceptually, codividuality looks to an alternative method for forming and fusing spatial parts, thereby evolving a fusion of collectivist and individualist ideologies. It evolves as a form of architecture created from the interactions and fusion of different types of spaces, producing a more connected and integrated environment. It offers the opportunity to develop new computing systems within architecture, allowing architectural systems to organise with automaton logic and to behave as sympoietic systems. It calls for a rethinking of automata and computation.
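The recursive logic of fusing and fused parts described above can be sketched computationally. The following toy model is purely illustrative (the names `Part`, `fuse` and `leaves` are the author's of this sketch, not terms from the essay): every fusion yields a new part that is read as one part, yet remains open to further fusion.

```python
# Toy sketch of codividual fusion: every fusion yields a new part
# that itself behaves as a fusing part, open to further fusion.
# All names (Part, fuse, leaves) are illustrative assumptions.

class Part:
    def __init__(self, name, subparts=None):
        self.name = name
        self.subparts = subparts or []   # the parts this part is composed of

    def fuse(self, other):
        # Two fusing parts combine into one fused part; the result is
        # no longer read as two parts but as one, itself fusable.
        return Part(f"({self.name}+{other.name})", [self, other])

    def leaves(self):
        # Recover the elementary parts entangled in this fusion.
        if not self.subparts:
            return [self.name]
        return [n for p in self.subparts for n in p.leaves()]

a, b, c = Part("a"), Part("b"), Part("c")
ab = a.fuse(b)     # a fused part...
abc = ab.fuse(c)   # ...behaving as a fusing part in a larger fusion
print(abc.name)      # ((a+b)+c) – read as one part
print(abc.leaves())  # ['a', 'b', 'c']
```

The fused part `abc` is one inseparable part at its own level, while its constituent parts remain recoverable, mirroring the essay's "part composed of parts".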
Codividual can be perceived as a spatial condition allowing for spatial connectivities and, in the mereological sense, as a part composed of parts; a part and its parts. What is crucial is the nature of the organisation of these parts. An understanding of the meaning and history of the organisational concepts of autopoiesis and sympoiesis brings out this nature.
Autopoiesis: Towards Assemblages of Parts
The concept of autopoiesis stems from biology. A neologism introduced by Humberto Maturana and Francisco Varela in 1980, autopoiesis highlights the self-producing nature of living systems. Maturana and Varela defined an autopoietic system as one that “continuously generates and specifies its own organisation through its operation as a system of production of its own components.” A union of the Greek terms autos, meaning “self”, and poiesis, meaning “creation” or “production”, autopoiesis came about as an answer to questions in the biological sciences pertaining to the organisation of living organisms. Autopoiesis was an attempt to resolve the confusion between biological processes that depend on history, such as evolution and ontogenesis, and those that are independent of history, like individual organisation. It questioned what organisation makes living systems a whole.
Varela et al. pointed out autonomy as the characteristic phenomenon arising from an autopoietic organisation, one that is the product of a recursive operation. They described an autopoietic organisation as a unity: a system with an inherently invariant organisation. Autopoietic organisation can be understood as circular organisation, a system that is self-referential and closed. Jerome McGann, interpreting Varela et al., described an autopoietic system as a “closed topological space, continuously generating and specifying its own organisation through its operation as a system of production of its own components, doing it in an endless turnover of components”.
What must be noted here is that the computational concept of self-reproducing automata is classically based on an understanding of a cell and its relation to its environment. This is akin to the conceptual premise of autopoiesis: the recursive interaction between a structure and its environment forms the system. Both concepts start with a biological understanding of systems and then extend it. A direct link can thus be observed between the work of von Neumann and that of Maturana and Varela. Automata, therefore, can be seen as autopoietic systems.
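A minimal cellular automaton can make this link concrete. The sketch below is not von Neumann's actual self-reproducing automaton but Conway's Game of Life, used here only as a toy: a closed rule set through which a pattern (the glider) continuously regenerates its own organisation while interacting with its cell environment.

```python
# A minimal Game of Life step, as a toy stand-in for the link between
# self-reproducing automata and autopoiesis: a fixed, closed rule set
# through which a pattern regenerates itself.
# (Illustrative sketch only, not von Neumann's automaton.)

from itertools import product

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    # A live cell survives with 2 or 3 live neighbours;
    # a dead cell with exactly 3 live neighbours is born.
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after = glider
for _ in range(4):
    after = step(after)

# After four steps the glider has reproduced its own shape,
# translated one cell diagonally.
print(after == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The rule never changes and takes no external input; the pattern's persistence arises purely from the recursion of the rule over its own components, which is what makes the automaton a useful caricature of an autopoietic system.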
The sociologist Niklas Luhmann carried this concept forward into the domain of social systems. The theoretical basis of his social systems theory is that all social events depend on systems of communication. Delving into the history of societal differentiation, Luhmann observes that the organisation of societies is based on functional differentiation. A “functionally differentiated society”, as he explains, comprises parallel functional systems that co-evolve as autonomous discourses. Each of these systems evolves over time through its own specific medium, following what Luhmann calls “self-descriptions”, which give each system a sense of autonomy.
Following Maturana and Varela’s explanation, an autopoietic organisation may be viewed as a composite unity whose boundary is formed by preferential neighbourhood interactions among internal components, not by external forces. It is this attribute of self-referential closure that Luhmann adopts in his framework: the closure maintains social systems within and against an environment, culminating in order out of chaos.
The Limits of Autopoietic Thinking
Beth Dempster, contradicting Maturana and Varela’s proposition of autopoiesis, proposed a new concept for self-organising systems. She argues that heuristics based on the analogy of living systems are often incongruous and lead to misleading interpretations of complex systems. Moreover, autopoietic systems tend to be homeostatic and development-oriented in nature. Being self-producing autonomous units “with self-defined spatial or temporal boundaries”, autopoietic systems exhibit centralised control and are consequently efficient; at the same time, they tend to develop patterns and become foreseeable. It is this development-oriented, predictable and bounded nature of autopoietic systems that poses a problem when such systems are scaled up.
Autopoietic systems follow a dynamic process that allows them to continually reproduce a similar pattern of relations between their components. This is also true of automata. Moreover, autopoietic systems produce their own boundaries, and this is the most limiting aspect of both concepts.
Autopoietic systems do not instigate the autonomy of parts, as they evolve on a prescribed logic. A more interesting proposition is one in which interacting parts instigate a feedback mechanism within other parts, leading to a response that triggers another feedback mechanism, and so on. Mario Carpo’s argument, that in the digital domain every consumer can be a producer and that the state of permanent interactive variability offers endless possibilities for aggregating the judgement of many, becomes relevant at this juncture. What holds true in the context of autopoiesis is Carpo’s observation that fluctuations decrease only at an infinitely large scale, when the relations converge ideally into one design.
In the sympoietic context, however, the state of permanent interactive variability Carpo describes is an offer of the digital to incorporate endless externalised inputs. This is where the need for sympoiesis comes in. Sympoiesis maintains a form of equilibrium or moderation throughout, while at the same time remaining open to change: permanent interactive variability not only offers a multitude of situations but also remains flexible.
The limits of autopoietic thinking form the basis of Dempster’s argument. In contradistinction to autopoiesis, she proposes a new concept theorised on an “interpretation of ecosystems”, which she calls sympoietic systems. Literally, sympoiesis means “collective creation or organisation”. A neologism introduced by Dempster, the term stems etymologically from the Ancient Greek “σύν (sún, “together” or “collective”)” and “ποίησις (poíesis, “creation, production”)”. As Dempster explains, these are “collectively producing systems, boundaryless systems.”
Sympoietic systems are boundaryless systems set apart from the autopoietic by “collective, amorphous qualities”. Sympoietic systems do not follow a linear trajectory and do not have any particular state. They are homeorhetic, i.e., dynamical systems which return to a trajectory rather than to a particular state. Such systems are evolution-oriented in nature and have the potential for surprising change. As a result of the dynamic and complex interactions among components, these systems are capable of self-organisation. Sympoietic systems, as Donna Haraway points out, “decentralise control and information”, which get distributed over the components.
Sympoiesis can be understood simply as an act of “making-with”. The notion of sympoiesis gains importance in the context of ecological thinking. Donna Haraway points out that nothing, no system, can reproduce or make itself; therefore nothing is absolutely autopoietic or self-organising. Sympoiesis reflects the notion of “complex, dynamic, responsive, situated, historical systems.” As Haraway explains, “sympoiesis enlarges and displaces autopoiesis and all other self-forming and self-sustaining system fantasies.”
Haraway describes sympoietic arrangements as “ecological assemblages”. In the purview of architecture, sympoiesis brings out the notion of an architectural assemblage growing over sympoietic arrangements. Though sympoiesis is an ecological concept, what begins to work in the context of architecture is that the parts need not be strictly bounded and singular; they maintain ethics and synergies among each other. In sympoietic systems, components strive to create synergies through cooperation and feedback mechanisms. It is the linkages between the components that take centre stage in a sympoietic system, not the boundaries. Extrapolating the notion of sympoiesis into the realm of architecture, these assemblages can be conceived in Haraway’s words as “poly-spatial knottings”, held together “contingently and dynamically” in “complex patternings”. What become critical are the intersections, overlaps and areas of contact between the parts.
Sympoietic systems strategically occupy a niche between allopoiesis and autopoiesis, the two concepts proposed by Maturana and Varela. The three systems are differentiated by various degrees of organisational closure. Maturana and Varela elaborate a binary notion of organisationally open and closed systems. Sympoiesis, as Dempster explains, steps in as a system that depends on external sources but at the same time limits these inputs in a “self-determined manner”. It is neither closed nor open; it is “organisationally ajar”. These systems must, however, be understood only as idealised sketches of particular scenarios. No real system should be expected to adhere strictly to these descriptions; rather, each lies on a continuum with the two idealised situations as its extremes.
It is this argument that is critical. In the context of architecture and urban design, what potentially fits is a hybrid model lying on the continuum between autopoiesis and sympoiesis. While autopoiesis can guide the arrangement or growth of the system at the macro level, sympoiesis must step in to trigger a feedback, or circular, mechanism within the system in response to externalities. What can be envisaged is therefore a system wherein the autopoietic power constantly attempts to optimise the system towards forming a boundary, while the sympoietic power simultaneously pushes it towards a more networked, decentralised growth and existence, the two powers together driving the system towards an equilibrium.
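The equilibrium between the two "powers" described above can be given a minimal numerical sketch. The model below is a hypothetical one-variable caricature (all names and parameter values are assumptions of this sketch): an autopoietic pull towards the system's own encoded target competes with a sympoietic pull towards an external input, and the state settles at a weighted balance of the two.

```python
# Toy numerical sketch of the hybrid autopoietic/sympoietic model:
# a self-referential pull towards an internal target competes with a
# coupling to an external input. All parameters are hypothetical.

def hybrid_step(x, internal_target, external_input, a=0.3, s=0.2):
    # a: strength of the self-referential (boundary-forming) pull
    # s: strength of the externally coupled (networked) pull
    return x + a * (internal_target - x) + s * (external_input - x)

x = 0.0
for _ in range(200):
    x = hybrid_step(x, internal_target=1.0, external_input=3.0)

# The fixed point weights the two "powers" by their strengths:
# x* = (a*1.0 + s*3.0) / (a + s) = 0.9 / 0.5 = 1.8
print(round(x, 3))  # 1.8
```

Neither pull wins outright: the equilibrium lies between the internal target and the external input, which is the sense in which the two powers "push the system towards an equilibrium".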
Towards Poly-Spatial Knottings
In sympoiesis, parts do not precede parts. There is no initial situation and no final situation. Parts begin to make each other through “semiotic material involution out of the beings of previous such entanglements”, or fused situations. In order to define codividuality and to identify differences, an understanding and classification of precedents is important. The first move is a simple shift from object-oriented thinking to parts-oriented thinking. Buildings are classified as having a dividual, individual or codividual character from the point of view of structure, navigation and programme.
The codividual is a spatial condition that promotes shared spatial connections, internally or externally, essentially portraying parts composed of parts that behave as one fused part or as multiple fused parts. Fused situations fulfil the condition of codividuality when the grouping forms a new inseparable part – one no longer understood as two parts, but as one part, itself open to fusing with another part.
Delving into architectural history, one sees very few past attempts by architects and urban designers at spatial integration by sympoietic means. A sympoietic drive can, however, be seen in the work of the urban planner Sir Patrick Geddes. Geddes opposed the grid-iron plan for cities and practised an approach of “conservative surgery”, which involved a detailed understanding of the existing physical, social and symbolic landscapes of a site. In his plan for Tel Aviv (1925–1929), for instance, Geddes stitches together the various nodes of the existing town, akin to assemblages, to form urban situations such as boulevards, thereby activating those nodes and the paths connecting them.
Fumihiko Maki and Masato Ohtaka identify three broad collective forms, namely compositional form, the megastructure and group form. Maki underscores the importance of linkages and emphasises the need to make “comprehensible links” between discrete elements in urban design. The urban, he explains, is made from a combination of discrete forms and articulated large forms; it is therefore a collective form, and “linking and disclosing linkage (articulation of the large entity)” are of primary importance in its making. He classifies these linkages into operational categories on the basis of their performance between the interacting parts.
Building upon Maki and Ohtaka’s theory of “collective form”, it is useful to recognise that when the architecture of a building is thought of as a separate entity, there follows an “inadequacy of spatial language to make meaningful urban environment.” Sympoiesis comes out through the notion of understanding the urban environment as an interactive fabric between building and context. Maki and Ohtaka also make the important observation that the evolution of architectural theory has been restricted to the building, and they describe collective form as a concept which goes beyond it. Collective forms can have a sympoietic or an autopoietic nature, determined by their organisational principles: sympoietic collective forms not only go beyond the building but also weave a fabric of interaction with the context. Although a number of modern cases of collective form exist, most traditional examples evolved into collective forms over time, albeit unintentionally.
The Corridor by Giorgio Vasari
An important early endeavour in designing a collective form at an urban scale is the Corridoio Vasariano by Giorgio Vasari in Florence, built in 1564. It can be understood as a spatial continuum that connects numerous important buildings, or nodes, within the city through a built corridor, resulting in a collective form. According to Michael Dennis, Vasari’s Corridor is, in an absolute sense, a Renaissance “insert” into the “fundamentally medieval fabric of central Florence”. As Dennis writes in “The Uffizi: Museum as Urban Design” (1980):
“…Each building has its own identity and internal logic but is also simultaneously a fragment of a larger urban organisation; thus each is both complete and incomplete. And though a given building may be a type, it is always deformed, never a pure type. Neither pure object nor pure texture, it has characteristics of both – an ambiguous building that was, and still is, multifunctional…”
Dennis’s description of Vasari’s Corridor brings out the notion of a spatial fusion of buildings as parts. The Corridor succeeds as an urban insert for two main reasons. First, it maintains the existing conditions, successfully acclimatising to the context in which it is placed. Second, it functions simultaneously on several scales, from that of the individual using the Corridor to the larger scale of the fabric through which it passes. Vasari’s Corridor is thus a sympoietic urban fusion – a culmination of the effect of local conditions.
Stan Allen, in contrast to compositions, presents a completely inverted concept of urban agglomeration. His field configurations reflect a bottom-up phenomenon. In his view, design must necessarily reflect the “complex and dynamic behaviours of architecture’s users”. Through sympoiesis, the internal interaction of parts becomes decisive: the parts become the design drivers, while the overall formation remains fluid, a result of the interactions between the internal parts.
Towards a Sympoietic Architecture
Another important aspect that grounds the sympoietic argument is the relevance of information to systems. While Maturana and Varela hold that information must be irrelevant to self-producing systems, since it is an extrinsically defined quantity, Dempster lays great emphasis on the relevance of information to sympoietic systems. Information, in her explanation, potentially carries a message or meaning for a recipient. It is therefore dependent on context and recipient, and, as Stafford Beer hints, also “observer dependent”.
In the architectural domain, this signifies that information, or external data input, holds no relevance for an autopoietic system: the system grows purely on the basis of its encoded logic and part-to-part organisational relations, unaffected by any possible input. In the sympoietic paradigm, however, information or data gains relevance, activating the system as a continuous flux of information guiding its organisation. This relates to the concept of reinforcement learning, wherein a system learns heuristically to evolve by adapting to changing conditions, and by producing new ones, albeit with an inherent bias.
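The reinforcement learning alluded to above can be illustrated with a minimal example. The sketch below is a generic epsilon-greedy learner over two actions, not an architectural model: an external stream of rewards (the hypothetical `reward_of` values are assumptions of this sketch) continuously reorganises the system's internal values, which is precisely the dependence on external information that an autopoietic system excludes.

```python
# Minimal epsilon-greedy learner, as a toy of the reinforcement
# learning the text alludes to: external information (rewards)
# continuously guides the system's organisation (its action values).
# Rewards are hypothetical; this is not an architectural model.

import random
random.seed(0)

values = [0.0, 0.0]        # the system's current "organisation"
counts = [0, 0]
reward_of = [0.2, 0.8]     # external information stream (assumed values)

for _ in range(2000):
    if random.random() < 0.1:                      # occasional exploration
        a = random.randrange(2)
    else:                                          # exploit learned values
        a = max(range(2), key=lambda i: values[i])
    r = reward_of[a]                               # feedback from outside
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]       # incremental mean update

print(max(range(2), key=lambda i: values[i]))      # learns to prefer action 1
```

The "inherent bias" the text mentions is visible even here: early estimates and the exploration schedule shape which conditions the system adapts to first.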
The Economic Offer of the Codividual
From an economic lens, sympoiesis does not yet exist as a model. However, with the rise of participatory processes within the economy and the advent of blockchain, it shows immense potential for architecture. Elinor Ostrom’s work on the role of commons in decision-making influences the work of David Rozas, who researches a model of blockchain-based commons governance. He envisages a system that is decentralised, autonomous, distributed and transparent – a more democratic system in which each individual plays their own role. This amounts to bringing a more sympoietic drive to blockchain. Sympoietic systems are based on a model akin to a commons-oriented, blockchain-based economy that functions like a cat’s cradle, with multiple interdependent stakeholders. And as Jose Sanchez points out, it is the power of the discrete, interdependent system that makes this architecture possible: it offers a “participatory framework for collective production”.
The fusion of parts leads to the creation of parts such that the sum of the parts becomes greater than the whole. A codividual sympoietic model could help address the housing crisis, since it flips the economic model to a bottom-up approach. With tokenisation, autonomous automation, decentralisation of power and transparency, a blockchain-based codividual model could compete with traditional real estate models, resulting in more equitable and fair-minded forms of housing. As Lohry and Bodell point out, such models can reduce personal risk and also make livelihoods more economical and “community-oriented”.
The ecological framework of poiesis, as already outlined, is based on growth from the organisation of elements. In both autopoiesis and sympoiesis, “part-to-part” and even “part-to-whole” conditions gain significant relevance, and an appreciation of these conditions is therefore necessary to understand such notions. The idea of components, as described by Dempster and Haraway in the purview of sympoiesis, and by Jerome McGann in the autopoietic context, can be extended to architecture in the form of part-thinking.
A mereological approach, however, begins with existing entities, or “sympoietic interactions”, and proceeds with a description of their clusters, groupings and collectives. Through codividual sympoiesis, the whole gets distributed over the parts. In this system, the discreteness of parts is never just discrete; it goes beyond the participating entities and the environment. In line with Daniel Koehler’s argument, the autonomy of the part ceases to be defined merely as a self-contained object; it begins to be defined “around a ratio of a reality, a point of view, a filter or a perspective”.
Sympoiesis evolves out of competitive or cooperative interactions of parts. As in ecology, these parts act as symbionts to each other, in diverse kinds of relationality and with varying degrees of openness to attachments and assemblages with other fusing parts, depending on the number of embedded brains and potential connectors. Traditionally, architecture is parasitic: when the aesthetic or the overall form drives the architecture, architectural elements act as hosts to which other elements attach according to the composition. In sympoiesis there is no host and no parasite. It inverts the ideology of modernism, beginning not with a composition but evolving a composition of “webbed patterns of situated and dynamic dilemmas” over symbiotic interaction. Furthermore, increasingly complex levels of quasi-individuality of parts emerge from this process of codividual sympoiesis, giving the outlook of a collective while retaining the identity of the individual. It can simply be called multi-species architecture, or becoming-with architecture.
Speaking of transdisciplinary ecologies and architecture, we can foresee string figures tying together human and nonhuman ecologies, architecture, technologies, sustainability and more. This also gives rise to the notion of an ecological fusion of spatial conditions, such as daylight and ventilation, in addition to the physical fusion of parts. Codividual sympoiesis thus even shows potential for nested codividual situations, in which parts sympoietically fuse over different spatial functions.
Considering sympoiesis and mereology together, it makes sense to look for parts which fuse to evolve fused parts; for architecture through which architecture is evolved; for a codividuality from which another codividuality is evolved. From a mereological point of view, a system in which an external condition overlaps with an internal part in the search for another component, giving rise to a new spatial condition over the fusion of parts, could be understood as codividual sympoiesis. Codividual sympoiesis is therefore about computing a polyphony, not orchestrating a cacophony.
 M. Foucault, Madness and Civilization (New York: Random House US, 1980).
 D. Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham: Duke University Press, 2016), 30–57.
 Ibid, 35.
 H. R. Maturana and F. G. Varela, Autopoiesis And Cognition (Dordrecht, Holland: D. Reidel Pub. Co., 1980).
 H. R. Maturana, F. G. Varela, and R. Uribe, "Autopoiesis: The Organization Of Living Systems, Its Characterization And A Model," Biosystems, 5, 4, (1974), 187–196.
 J. McGann, A New Republic of Letters (Cambridge, Massachusetts: Harvard University Press, 2014).
 A. W. Burks, Von Neumann's Self-Reproducing Automata; Technical Report (Ann Arbor: The University of Michigan, 1969).
 N. Luhmann, Art as a Social System (Stanford: Stanford University Press, 2000), 232.
 B. Dempster, Sympoietic and Autopoietic Systems : A new distinction for self-organizing systems (Waterloo: School of Planning, University of Waterloo, 1998).
 Ibid, 9.
 M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, Massachusetts: MIT Press, 2017), 131–44.
 Ibid, 12.
 B. Dempster, Sympoietic and Autopoietic Systems : A new distinction for self-organizing systems (Waterloo: School of Planning, University of Waterloo, 1998).
 D. Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham: Duke University Press, 2016), 33.
 Ibid, 5.
 Ibid, 125.
 Ibid, 58.
 Ibid, 60.
 B. Dempster, Sympoietic and Autopoietic Systems : A new distinction for self-organizing systems (Waterloo: School of Planning, University of Waterloo, 1998).
 D. Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham: Duke University Press, 2016), 60.
 F. Maki, and M. Ohtaka, Investigations in Collective Form (St. Louis: School of Architecture, Washington University, 1964), 3-17.
 M. Dennis, "The Uffizi: Museum As Urban Design", Perspecta, 16, 62 (1980), 72.
 Ibid, 63.
 S. Allen, "From Object to Field,” Architectural Design, AD 67, 5-6 (1997), 24–31.
 S. Beer, “Preface,” Autopoiesis: The Organization of the Living, auth. H. R. Maturana and F. Varela (Dordrecht, Holland: D. Reidel Publishing Company, 1980).
 D. Rozas, “When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance” (2019), https://davidrozas.cc/presentations/when-ostrom-meets-blockchain-exploring-potentials-blockchain-commons-governance-1, last accessed 3 May 2019.
 J. Sánchez, “Architecture for the Commons: Participatory Systems in the Age of Platforms,” Architectural Design, 89, 2 (2019), 22–29.
 M. Lohry and B. Bodell, "Blockchain Enabled Co-Housing" (2015), https://medium.com/@MatthewLohry/blockchain-enabled-co-housing-de48e4f2b441, last accessed 3 May 2019.
 D. Koehler, “Mereological Thinking: Figuring Realities within Urban Form,” Architectural Design, 89, 2 (2019), 30–37.
Mereology is a formal concept which enters architecture as an additional formal category. Form is a rather ambiguous concept in architecture, so this essay first conducts an investigation by contrasting two closely related concepts: shape and form.
Hans Tursack criticises the problem of shape for its shallow formalism and historical-theoretical indifference, a defensive strategy that evades the disciplines and difficulties of past and future. The distinction between the terms “form” and “shape”, following Tursack’s argument, is a “matter of generative process”. Both terms point to the production of visual expression; yet while shape refers to the appearance of an object, form reflects the logic of transformation and operation within historical and theoretical contexts such as political and religious ideology, economics and technological background. Tursack criticised the strategy of shape in architecture for its lack of reference, it being “plainly, and painfully, evident” and incapable of moving forward, whereas form is difficult and disciplinary, requires historical and theoretical study, and yet promises a future.
Form has the advantage of being able to deal with complex relations, owing to its deep and continuously evolving involvement with content. The term form derives from the Latin forma and is understood as the combination of two Greek concepts: eidos, the conceptual form, and morphe, the physical form. The complexity of form can be attributed to these differentiated meanings, yet this complexity is what makes form compatible with agencies and relations, as a brief historical review can show.
Ancient Greek architecture pursued ideality in mathematics and proportion; the efforts made by architects in designing the Parthenon provide evidence of this. Such operations tried to approximate the physical shape of architecture to the “ideal” form. Form, in this period, reflects the pursuit of ideality and perfection.
Gothic architecture was more concerned with structure: matter was pushed to its maximum capability to build as tall as possible for religious appeal. Structures were consequently designed to be rigid and lightweight; solid walls were replaced by glass windows, while flying buttresses supported the main structure as it grew ever taller. Astonishing space and fascinating transparency emerged.
Modernism claimed that “form follows function”, rejecting traditional architectural styles. The reality of matter and the logic of technology eschewed decoration, proportion, or any subjective distortion of matter. The emphasis on the term “function” illustrates an ideology that treats architecture as a machine: each part is nothing more than a component with a certain role inside this machine, and redundant decorations and details are removed to deliver the idea clearly. Without distractions, space becomes evident.
The shift to postmodernism reacted against the uniformity and lack of variety of modernist architecture, and a great variety of approaches emerged to overcome modernism’s shortcomings. Parametricism, for instance, has been propelled by thriving digital technologies: designers are capable of more complex formal production, and architectural elements become variables that can be interdependently manipulated. In this formalism, rigidity, isolation and separation are opposed, while softness, malleability, differentiation and continuity are praised.
From the examples above, form is the embodiment of the relations between architecture and its motives in specific historical scenarios, whereas for shape only the results are accounted for: relations are ignored, and architecture is treated as isolated physical entities incapable of producing new relations. Different methodologies of dealing with architectural form also imply different ideologies of compiling form with content.
Mereology – An Approach of Architectural Form
In recent philosophical texts, a third notion of form is brought forward. Contrary to a dialectic of form and content, these investigations deal with the resonance of parts: the description of objects by their ontological entanglement alone. The writings of the philosopher Tristan Garcia are a strong example of such mereological considerations. In his treatise Form and Object: A Treatise on Things (2014), Garcia investigates the ontology of objects through two initial questions: “…what is everything composed of? … what do all things compose?” The first question interrogates the internal, the elementary components of everything; the second interrogates the external, the totality of everything. For Garcia, the form of a thing is “the absence of the thing, its opposite, its very condition.” Form has two senses: a “beginning”, and an “end” which never ends. Form begins when a thing ends, and it begins with different forms; in the end, since it has an “endless end”, form ultimately merges into one, which is “the world”. Garcia defines an object as “a thing limited by other things and conditioned by one or several things.” The form of an object depends on what comprehends or limits it. Every object is “embedded in a membership relation with one or several things”; objects can be divided by defining limits, a limit itself being a thing that distinguishes one thing from another. Garcia’s argument adapts the concept of mereology: form has two extremes, one toward the fundamental element of matter, the other toward the world, comprehending everything. All things can always be divided into an infinite number of parts, and they can always be parts of another thing. Identifying parts or wholes within a section on which we can operate establishes a limit. This relevance of mereology to form opens a new opportunity to inspect architectural form from a different point of view.
One of the first discussions of parts and wholes in modern philosophy was posed by Edmund Husserl in Logical Investigations (1st ed. 1900–1901, 2nd ed. 1913), but the term “mereology” was not put forward until Stanisław Leśniewski coined it in 1927 from the Greek word méros (part). Mereology is considered an alternative to set theory. A crucial distinction between the two is that set theory concerns the relations between a class and its elements, while mereology describes the relations between entities directly. The mathematical axioms of mereology will be used here as the foundation for developing a method of analysing architectural form.
Following Roberto Casati and Achim Varzi, the four fundamental mathematical formalisations of mereology are: “Relations are reflexive, antisymmetric and transitive. (…) First, everything is part of itself. Second, two different objects cannot be part of each other. Third, each part of a part of a whole is also part of that whole. Fourth, an object can be a part of another object, if both exist.”
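Read formally, the four conditions quoted above correspond to the standard axioms of ground mereology. Writing P(x,y) for “x is part of y” and E(x) for “x exists”, they can be sketched as follows (a hedged reconstruction, not Casati and Varzi’s own notation):

```latex
\begin{align*}
  &P(x,x)                                  &&\text{(reflexivity: everything is part of itself)}\\
  &P(x,y) \land P(y,x) \rightarrow x = y    &&\text{(antisymmetry: two distinct objects cannot be parts of each other)}\\
  &P(x,y) \land P(y,z) \rightarrow P(x,z)   &&\text{(transitivity: each part of a part of a whole is part of that whole)}\\
  &P(x,y) \rightarrow E(x) \land E(y)       &&\text{(existence: parthood holds only between existing objects)}
\end{align*}
```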
Mereology can also be a promising approach for the reading of architectural form, as it emphasises relationships without reducing buildings to their appearance or function. However, such philosophical descriptions treat wholes and parts mostly as abstract figures, so a supplement must be developed to properly categorise mereological relations in the field of architecture. With the relation between form and mereology addressed, methodologies can be developed for the analysis of architectural form. Mereology as a specific methodology for architecture is quite new. One of the first introductions can be found in Daniel Koehler’s book The Mereological City: A Reading of the Works of Ludwig Hilberseimer (2016), in which Koehler departs from the modern city, exemplified through the work of Ludwig Hilberseimer, to illustrate mereological relations in the modernist city. From the room to the house to the city to the region, Hilberseimer canonically drew the city as a hierarchical, nested stack of cellular spaces. Through a close reading of its mereological relations, however, it becomes clear that political, economic and social conditions are entangled in a circular composition between the parts of the city. Recalling Garcia’s discourse, and resonating with Leon Battista Alberti’s thesis, Koehler shows that the cells in Hilberseimer’s modernist city are interlocked: a house becomes the whole for rooms; a city becomes the whole for houses. By considering the city and its individual buildings equally, “the whole is a part for the part as a whole.”
Architectural Relations Between Parts and Wholes
Parts are not only grouped, packed and nested across different scales, but also related in different ways. Specific relationships have been developed in different architectural epochs and styles. Mathematically, four general classes of relations can be drawn – whole-to-whole, part-to-part, whole-to-parts and parts-to-whole – and more specific subclasses can be identified within each.
According to the mathematical definition, complex relations exist between wholes: a whole can sit on any mereological level, and the relations between multiple levels must also be accounted for. Whole-to-whole relations become complex when multi-layer interactions are considered, and further relations can then be identified: juxtapose, overlap, contain, undercrossing, transitivity, partition, trans-boundary, intact juxtapose, compromised juxtapose.
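As a minimal illustration – an assumption of this sketch, not a method from the text – a few of these relations have direct set-theoretic analogues if each whole is modelled as a set of discrete cells (for instance plots on a city grid); the richer, multi-level relations would need a more elaborate model:

```python
# Hypothetical sketch: wholes as frozensets of grid cells. Only four of the
# relations named above (juxtapose, overlap, contain, partition) map cleanly
# onto set operations.

def classify(a: frozenset, b: frozenset) -> str:
    """Name the mereological relation between two wholes a and b."""
    if a == b:
        return "partition"   # the same region, differently divided
    if a.isdisjoint(b):
        return "juxtapose"   # side by side, sharing no cells
    if a <= b or b <= a:
        return "contain"     # one whole lies entirely inside the other
    return "overlap"         # shared cells, but neither contains the other

# Two Manhattan-style blocks sharing no cells simply juxtapose:
block_a = frozenset({(0, 0), (0, 1)})
block_b = frozenset({(1, 0), (1, 1)})
print(classify(block_a, block_b))  # -> juxtapose

# Sienese-style nesting: a courtyard contained in a palazzo footprint.
palazzo = frozenset({(0, 0), (0, 1), (1, 0), (1, 1)})
court = frozenset({(1, 1)})
print(classify(palazzo, court))    # -> contain
```

The point of the sketch is that the New York grid exercises only the first branch in practice, while a fabric like Siena’s exercises all of them at once, across several nesting levels.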
A first glance at New York gives the impression that the city is quite heterogeneous, but a city grid underlies that heterogeneity, and the relations displayed in the grid are rather simple: all wholes juxtapose with one another. In comparison, the urban space of Siena, an Italian city, is quite complex: the boundaries of all wholes negotiate with one another, the gaps in between are carefully treated, the nesting relations are extremely rich, and multiple relations from the diagram above can be found.
The whole-to-parts relation studies what the whole does to its parts – its top-down rules. The mathematical definition does not specify the situations in which a whole-part condition holds, and distinctions within individual contexts make a significant difference in clarifying an explicit relation. The situations for the whole can generally be classified into the following types: fuse, fit and combine.
Zaha Hadid’s Heydar Aliyev Centre exemplifies the fusing relation. The architecture is presented as a smooth, fluid volume; the distinction between elements disappears, and this dominating power even extends to the external landscape. In order to maintain a continuous whole, parts are fabricated into particular shapes with unique, unchangeable locations, and the continuous whole excessively overwhelms the parts. Yet not all parts are reshaped to fuse into the whole: where the parts are small enough in relation to the whole, the control exerted by the whole is weakened, and the parts instead fit into it.
The third type is combining. An example of this relation is Palladio’s Villa Rotonda. In this case, the parts are clearly legible, and the whole is a composition of the parts’ identities. However, the whole also holds a strong framework – a rigorous geometric rule that decides the positions and characters of the parts. The arrangement of parts is the embodiment of this framework.
The parts-to-whole relation studies what the parts do to the whole – the power of bottom-up relationships. The different situations of parts are likewise key parameters in validating a given relation. The classification of situations for parts is as follows: frame, intrinsic frame, extrinsic frame, bounded alliance, unbounded alliance.
Emil Kaufmann thoroughly investigated the innovative works of Claude-Nicolas Ledoux in Three Revolutionary Architects: Boullée, Ledoux and Lequeu (1952). According to Kaufmann’s study, Ledoux’s works developed new compositional relations of elements out of the Baroque. Baroque architecture is rich in the characteristics of its parts, but tends to regulate the identities of all the elementary parts and fuse them together to serve the harmony of the whole – an intrinsic framing. Ledoux’s work, by contrast, presents an extrinsic framing: the parts are relatively independent, each element maintains its own properties, and, while composing the whole, they can be replaced with other, identical components.
One of my projects in the discrete aggregation of elements presents an unbounded alliance relation: the aggregation as a whole shows a discretised form (Figure 12) and does not pass any top-down instructions to its parts.
Part-to-Part Without Whole – The Ultimate Parts
In part-to-part relations, local interactions are emphasised. Interactions occur at multiple levels of composition, so part-to-part relations are in some cases similar to those between wholes. They have the following classifications: juxtapose, interrelate, contain, partition, overlap, trans-juxtapose, over-juxtapose, over-partition, over-overlap.
Architects have been working on the possibility of removing the whole by studying part-to-part relations, and several approaches have been developed, mainly through computation. Neil Leach considers the city as a “swarm intelligence”, bringing forward the potential of developing urban form with computational methods. Leach encourages swarm intelligence for the interaction between agents (parts), which “offers behavioral translations of topology and geometry”, while fractals, L-systems and cellular automata are all constrained by some limitation. However, although swarm intelligence is based on the interaction of individual agents, the swarm is always treated as a whole; all cells of a cellular automaton are fixed in a background grid, which is also a whole; and although fractals and L-systems can be subdivided into infinite parts, the transcendent whole from which all parts grow still exists. In the mereological sense, none of these cases escapes the shadow of the whole – strictly speaking, they are part-to-whole relations. To discuss the part-to-part relation in more depth, the concept of the part itself needs further clarification.
In The Democracy of Objects (2011), Levi Bryant claims that objects constitute a larger object by establishing relations with others, but this does not alter the existence of those objects: as he says, “all objects equally exist, but not all objects exist equally.” In Bryant’s discourse, this independence suggests the dissolution of the whole. Bryant proposes the concept of “regimes of attraction”, which includes “endo-relations” and “exo-relations”. The endo-relation indicates that the proper being of an object consists of its “powers”, or what an object can do, not the “qualities” that emerge within an exo-relation. An object possesses “volcanic powers”, and the stabilisation of a regime of attraction actualises them into a specific state. The concept of the whole reduces objects to this state, which displays only a section of their proper being; the concept of regimes of attraction stands against this reduction.
The regime of attraction can be linked to Manuel DeLanda’s notion of “assemblage”, though there is a distinction between the two: assemblage holds only relations of exteriority, whereas the regime of attraction maintains relations of both interiority and exteriority. In Assemblage Theory (2016), DeLanda reassembled the concept of “assemblage”, which originated in the French agencement. Coined by Gilles Deleuze and Félix Guattari, the original term refers to both the “action of matching or fitting together a set of components” – the process – and the “result of such an action” – the product.
DeLanda emphasised two aspects: heterogeneity and relations. As he indicated, the “contrast between filiations and alliances” can be described in other words as intrinsic and extrinsic relations.
The nature of these relations influences the components differently. An intrinsic relation tends to define the identities of all the parts and fix them in exact locations, while an extrinsic relation connects the parts in exteriority, without interfering with their identities. DeLanda summarised four characteristics of assemblage: 1) individuality – an assemblage is an individual entity, whatever its scale or number of components; 2) heterogeneity – the components of an assemblage are always heterogeneous; 3) composability – assemblages can be composed into other assemblages; 4) bilateral interactivity – an assemblage emerges from the interactions of its parts, and in turn influences those parts.
DeLanda then moved on to the two parameters of assemblage. The first parameter is directed toward the whole – the “degree of territorialisation and deterritorialisation” – meaning how much the whole “homogenises” its component parts. The second is directed toward the parts – the “degree of coding and decoding” – meaning how much the identities of the parts are fixed by the rules of the whole. The concept of assemblage provides a new lens for investigating these mereological relations. With this model, the heterogeneity and particularity of parts are fully respected. Wholes become immanent, individual entities existing “alongside the parts in the same ontological plane”, while the parts in a whole are included in it without belonging to it – and in Bryant’s discourse, this absence of belonging dispels the existence of the whole.
From the study of the regime of attraction and assemblage, this essay proposes a new concept – “the ultimate parts” – in which a proper “part-to-part without whole” is embedded. A part (P) interacts horizontally with its neighbouring parts (Pn) and with the parts of neighbouring parts (Pnp), as well as downwardly with the parts that compose it (Pp) and upwardly with the wholes it constitutes, which are themselves also parts (Pw). This concept significantly increases the initiative of parts and decreases the limitations and reductions placed on them. It does not deny the utility of the whole, but considers the whole as another independent entity – another part. It is neither top-down nor bottom-up: it projects all relations from a hierarchical structure onto a comprehensively flattened one. The concept of ultimate parts thus provides a new perspective for observing relations between objects from a higher dimension.
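The flattened structure described above can be sketched as a graph in which wholes appear on the same plane as the parts they contain. This is a hypothetical illustration; the class and attribute names (Part, neighbours, sub_parts, wholes) are mine, not the essay’s:

```python
# Minimal sketch of the "ultimate parts" structure: every entity is a Part,
# and a whole is just another Part linked from below.

class Part:
    def __init__(self, name):
        self.name = name
        self.neighbours = []  # Pn: horizontally adjacent parts
        self.sub_parts = []   # Pp: parts composing this part (downward)
        self.wholes = []      # Pw: wholes this part constitutes - also just parts

    def neighbour_parts(self):
        """Pnp: parts of neighbouring parts, reached with no whole mediating."""
        return [p for n in self.neighbours for p in n.sub_parts]

# A room relates to the house it constitutes as to any other part:
house, room_a, room_b, wall = (Part(n) for n in ("house", "room_a", "room_b", "wall"))
house.sub_parts = [room_a, room_b]
room_a.wholes = [house]       # upward link, but the house is only another Part
room_a.neighbours = [room_b]  # horizontal link
room_b.sub_parts = [wall]

print([p.name for p in room_a.neighbour_parts()])  # -> ['wall']
```

Note that nothing in the structure is privileged: the house holds no special status over the wall, which is the sense in which the hierarchy is projected onto a flat plane.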
One application of this concept is TARSS (Tensegrity Adaptive Robotic Structure System), my research project in the MArch Architectural Design programme in B-Pro at The Bartlett School of Architecture in 2017–2018. The project utilises the rigidity, flexibility and light weight of tensegrity structures. The difference is that rather than fixing parts into a static posture and eliminating their movements, the project tries, on the contrary, to increase the freedom of the parts as much as possible. The tensile elements can adjust their lengths collaboratively to change the general shape of the aggregation, and reinforcement learning is employed to give the parts an awareness of objectives. The training sessions were set up toward multiple objectives related to architectural concerns, including pathfinding, transformation, balance-keeping, self-assembly and structural load distribution. The approach brings obvious benefits: architectural design in this sense is not only about an eventual result, but about the dynamic process of constantly responding to environmental, spatial or functional requirements. The premise is to treat parts as ultimate parts, retaining their objectivity while letting them actively interact at all mereological levels without limitation.
The concept of ultimate parts brings forward the new relation of “part-to-part without whole”. This new relation belongs to a higher dimension: the details and essence of objects are displayed simultaneously, without being obscured by a compositional structure – just as, by analogy with spatial dimensions, a 3-dimensional cube shows all its faces and its interior simultaneously in 4-dimensional space. The significance is that it opens vast new perspectives and operational methodologies in the realm of architectural design. Especially with the advancement of robotics and artificial intelligence, this new kind of relationship creates greater opportunities by regarding machines as characters with immense potential that work with us, instead of for us. The role of designers becomes very much like that of “breeders of virtual forms”, who do not rule the form but guide it toward the demands. This moves away from anthropocentric design by overcoming part-to-whole with part-to-part.
 H. Tursack, "The Problem With Shape", Log 41 (New York: Anyone Corporation, 2017), 53.
 Ibid, 50.
 L. Sullivan, "The Tall Office Building Artistically Considered", Lippincott's Magazine (1896), 403–409.
 T. Garcia, Form and Object: A Treatise on Things, trans. M. A. Ohm and J. Cogburn (Edinburgh: Edinburgh University Press, 2014), 19.
 Ibid, 48.
 Ibid, 77-78.
 Ibid, 145.
 E. Husserl, Logical Investigations (London: Routledge & K. Paul, 1970).
 Stanisław Leśniewski, O podstawach matematyki [trans. On the Foundations of Mathematics], I-V, 1927-1930, Przegląd Filozoficzny, 30 (1927), 164–206; 31 (1928), 261–291; 32 (1929), 60–101; 33 (1930), 77–105; 34 (1931), 142–170.
 R. Casati and A. C. Varzi, Parts and Places: The Structures of Spatial Representation (Cambridge, Massachusetts: MIT Press, 1999).
 L. Hilberseimer, The New City: Principles of Planning (P. Theobald, 1944), 74-75.
 D. Koehler, The Mereological City: A Reading of the Works of Ludwig Hilberseimer (Bielefeld: Transcript Verlag, 2016), 182.
 E. Kaufmann, Three Revolutionary Architects: Boullée, Ledoux, and Lequeu (Philadelphia: The American Philosophical Society, 1952).
 N. Leach, "Swarm Urbanism", Architectural Design, 79, 4 (2009), 56-63.
 L. Bryant, The Democracy of Objects (Open Humanities Press, 2011), 290.
 M. DeLanda, Assemblage Theory (Edinburgh: Edinburgh University Press, 2016), 2.
 Ibid, 19-21.
 Ibid, 12.
 L. Bryant, The Democracy of Objects (Open Humanities Press, 2011), 273.
 M. DeLanda, "Deleuze And The Use Of The Genetic Algorithm In Architecture" (2001), 3.
Object-oriented programming in blockchain has been a catalyst for philosophical research on the way blocks and their nesting are perceived. In a deeper investigation of the composition of blocks, as well as of the environments they are able to create, concepts like Jakob von Uexküll’s “Umwelt” and Timothy Morton’s “Hyperobject” can be synthesised into a new term: the “Hyperumwelt”. The Hyperumwelt is an object that is capable of creating its own environment. Upscaling this definition, this essay describes objects with unique and strong compositional characteristics that act as closed black boxes and are able to create large-scale effects through their distribution. Hyperobjects are able to create their own Umwelt; however, when they are nested and chained in big aggregations, the result is a new and unexpected environment: the Hyperumwelt.
In his book Umwelt und Innenwelt der Tiere (1921), Uexküll introduced the notion of subjective environments. With the term “Umwelt”, Uexküll defined a new perspective for the contextualisation of experiences: each organism perceives surrounding elements with its senses and reinterprets them into its own “Umwelt”, producing different results. An Umwelt thus requires two components: an individual and its abstracted perception of its surroundings. On the basis of this process and its parameters, notions of parthood and wholeness in spatial environments, and the relations they produce with interacting elements, become relevant.
Space as a Social Construction
For Bill Hillier and Julienne Hanson these two parameters relate to society and space: “society can only have lawful relations to space if society already possesses its own intrinsic spatial dimension; and likewise space can only be lawfully related to society if it can carry those social dimensions in its very form.” What Hillier and Hanson argue is that the relation between the formation of society and space is created by the interaction between differing social environments. Hillier and Hanson essentially use a mereological definition of the environment in which parts are independent of their whole – the way society is independent of its space – while societies nonetheless contain definitions of space. Space is therefore a deeply social construction.
As Hillier and Hanson outline, our understandings of space are revealed in the relations between “social structure” and “spatial structure”, or how society and space are shaped under each other’s influence. Space is a field of communication: within a network of continuously exchanged information, space can be altered as it interacts with the people in it. However, this approach can only produce limited results, as it creates environments shaped by only two parameters, humans and space. This is where Hillier and Hanson’s theory fails, as their way of understanding the environment relies only on additive information produced by interactions. If we were to expand this theory into the kind of autonomous learning mechanism that is mandatory for processing today’s computational complexity, we would end up with a slow, repetitive operation between these two components.
Hyperobjects to Hyperumwelt
Another perspective missing from Hillier and Hanson’s understanding of the environment is how social behaviour is shaped by spatial parameters. Timothy Morton’s object-oriented ontological theory contradicts this anthropocentric understanding of the world. In The Ecological Thought (2010), Morton presents the idea that we do not only produce the environment but are also a product of it. The creation of things is therefore not solely a human act in which non-human objects cannot partake, but an inherent feature of any existing object. For Morton, complexity is not only a component of society and space; it extends to an environment that has objects at its centre and thus cannot be completely understood. He calls these entities “Hyperobjects”.
Morton uses the term Hyperobject to describe objects, tangible or intangible, that are so “massively distributed in time and space as to transcend spatiotemporal specificity”. The term can be reinterpreted to describe an environment, rather than an object, that is neither understandable nor manageable. This environment – a Hyperumwelt – is the environment constructed by Hyperobjects, and it is beyond comprehension because of its complexity.
The term Hyperobject is insufficient here because it retains its own wholeness: the components inside a Hyperobject cannot be seen (it acts like a black box of information) but can only be estimated. Morton describes the Hyperobject as a whole without edges. This stems from Morton’s point of perception: he puts himself inside the object, a position that makes him unable to see its wholeness, leaving him adrift of its impact and unable to gain control over it. Here the discussion also opens onto authorship inside environments; what Morton suggests is that Hyperobjects have their own authority, and there is nothing that can alter them or specify their impact on the environment.
A Tree in a Forest
Yet there is also no need for the Hyperobjects to be clearly understandable; in terms of the Hyperumwelt, Hyperobjects can remain vast and uncomprehended. What is needed now is an account of the implications of distributing nested Hyperobjects, seen as black boxes, inside an environment. An Umwelt is an environment constantly altered by the information it perceives, which makes the Hyperumwelt a whole with porous edges, allowing the distribution, addition and subtraction of information. Another difference is the external position from which the Hyperumwelt is perceived: there is no need for the observer to be part of the environment. Since what matters is the distribution of the objects within the Hyperumwelt, a distant point of view is needed in order to detect the patterning of the distributed objects. While it remains difficult to decipher and discretise the components, the patterns they create can be seen.
While the Hyperobject is a closed whole of parts that cannot be altered, a Hyperumwelt is an open whole of wholes that uses objects as its parts. So, while the Hyperobject gives us no authority over its consequences, the Hyperumwelt bypasses this so that its wholeness can be controlled. What is important for the Hyperumwelt is not the impact of one object, but the impact of multiple objects within the environment. This synthesis and merging of objects and their relations produces a new reality which may or may not be close to the reality of the single objects. A Hyperobject is looking at a black box – say, a tree – and knowing there is a pattern, such as a forest; a Hyperumwelt is looking at the tree and knowing the impact it has on the forest, and the impact the forest creates in the environment.
 J. von Uexküll, Umwelt und Innenwelt der Tiere (Berlin: J. Springer, 1909), 13-200.
 T. Morton, Hyperobjects: Philosophy and Ecology After the End of the World (Minneapolis, Minnesota: University of Minnesota Press, 2013).
 J. von Uexküll, Umwelt und Innenwelt der Tiere (Berlin: J. Springer, 1909), 13-200.
 B. Hillier and J. Hanson, The Social Logic of Space (London: Cambridge University Press, 1984), 26.
 T. Morton, The Ecological Thought (Cambridge, Massachusetts: Harvard University Press, 2010).
 Ibid, 110.
 T. Morton, Hyperobjects: Philosophy and Ecology After the End of the World (Minneapolis, Minnesota: University of Minnesota Press, 2013).
 T. Morton, Being Ecological (Penguin Books Limited, 2018).
In mereology, the distinction between “dependent” and “independent” can be used to describe the relationship between parts and wholes. In a mereological description, individuals can be seen as self-determining entities, independently identified by themselves as a whole. The identities of collectives, on the other hand, are determined by the group members that participate in a whole. Based on parthood theory, an individual can therefore be defined as a self-determined “one in a whole”, while collectives can be seen as “parts within a whole”. Following this mereological logic, this essay surveys the new term “codividuality”, a word combining “collective” and “individuality”. Codividuality preserves the intermediate values of individualism and collectivism: it takes the notion of share-ability from collectivism and merges it with the idea of self-existence inspired by individualism. The characterisation of codividuality starts from individuals that share features and are grouped, merging with other groups to compose new clusters.
“Codividuals” could also be translated as “parts within parts”. Based on this part-to-part relation, codividuals in the compositional sense begin with existing individuals and then with collectives of self-identified parts. Parts are discrete, but also participating entities in an evolving, self-organising system. Unlike individuals’ self-determination, parts’ identities are contributed by participation, forming a strong correlation between parts while preserving their autonomy. In codividuality, each individualistic entity obtains the potential of transforming its state by sharing its identity with others; as such, all parts are able to translate one another and are irreducible to their in-between relationships. From an ontological perspective, a part comes into existence not by adding a new object but by sharing features to fuse itself into a new part. A new part contributes not by increasing an entity’s quantity but through a dynamic overlap transforming over time. Since the entities involved fuse into new collectives, the composite group simultaneously changes its form in correspondence with the shared features; as such, codividuality can be seen as an autonomous fusion.
Metabolism: As One in Whole
According to the definition of individualism, each individual has its own autonomous identity and the connectivity between individuals is loose. In architecture, social connectivity provides insight into the relationship of spatial sequences within cultural patterns. Metabolism, an experimental architectural movement in post-war Japan, emerged with a noticeably individualist approach, advocating individual mobility and liberty. Looking at the configurations and spatial characteristics of Metabolist architecture, it is easy to perceive the “unit” and the “megastructure” as the major architectural elements of the composition, showing an individualistic characterisation in spatial patterns. The megastructure, as an unchangeable large-scale infrastructure, conceptually served to establish a comprehensible community structure, while the unit, as a structural boundary, reinforced the identity of individuals within the whole community.
The Nakagin Capsule Tower (1970) by Kisho Kurokawa is a rare built example of Metabolism. It is a residential building consisting of two reinforced-concrete towers, with functional equipment integrated into the megastructure to form a system of core towers serving their ancillary spaces. The functional programmes required for the served spaces extend from the core where the structure and pipes are integrated. The identical, isolated units contain everything needed to meet basic human needs in daily life, expressing an idea of individualism in architecture aimed at large numbers of inhabitants. The independent individual capsules create a maximum amount of private space with little social connectivity to neighbours.
Constructivism: As Parts in Whole
Collectivism applies to a society in which individuals tie themselves together into a cohesion that obtains the attributes of dependence, sharing and collective benefit. This aligns with the principles of constructivism, which proposed a collective spatial order to encourage human interaction and generate collective consciousness. In contrast to the Metabolists, constructivist architecture underlined spatial arrangements for public space within compressed spatial functions that enable a collective identification.
The Narkomfin Building (1928–1932) by the OSA Group is one of the few realised constructivist projects. The building is a six-storey apartment building in a long block, designed as a “social condenser”. It consists of multiple social functions that correspond to specific functional and constructive norms for working and living space within the whole community. The main building is a mixed-use compound, with one part for individual space and another designed as collective space; the private and common spaces are linked by an exterior walkway and a communal rooftop garden. There are 54 living units, each containing only a bedroom and bathroom. Each flat could be divided in two: one part containing a playground and kitchen; the other a collective function area consisting of garden, library and gymnasium. The corridors linking the flats are wide and open, appearing as an urban street that encourages inhabitants to stop and communicate with their neighbours.
Compared with the Nakagin Capsule Tower, the concept behind the spatial arrangement of the Narkomfin Building is the collectivisation of all needed programmes. The large-scale collective was proposed as a means to replicate the concept of the village in the city, practically allowing a shrinking of the percentage of private space while stimulating social interaction within the collective living space. The concept of amplifying communal space aligns with the constructivist movement’s ambition of reinventing people’s daily life through new socialist experimental buildings, reinforcing the identity of collectives within the whole community.
Codividuality: As Parts in Parts
In architecture, the word “codividuality” originally emerged in the Japanese architectural exhibition House Vision (2019) to refer to collective living in terms of the sharing economy, delivering a social meaning: “creating a new response to shared-living in the age of post-individualism”. Economically speaking, codividuality expresses the notion of share-ability in the sense of sharing value and ownership. Moreover, it offers a participatory democracy of spatial use in relation to changing social structures and practices. The architectural applications of codividuality are not merely about combining private space with shared public facilities; they reveal a new reality that promotes accessibility and sustainability in multiple dimensions, including spatial use, economy and ecology.
Share House LT Josai (2013) is a collective-living project in Japan, offering an alternative for urban living in the twenty-first-century sharing economy. Responding to changing demographic structures and rapidly rising house prices, Naruse Inokuma Architects created an opportunity for unrelated people to share spaces on an ongoing basis, in an interactive living community within a two-and-a-half-storey house. The 7.2-square-metre individual rooms are arranged three-dimensionally across the two and a half levels. Between the bedrooms are the shared spaces, including a void area and an open-plan living platform and kitchen that extend toward the identical private rooms. The juxtaposition of private and communal spaces creates a new spatial configuration and an innovative living model for the sharing economy. Codividuality retains individuals’ autonomy while encouraging collective interaction: it is neither an opposition to individualism nor a replication of collectivism, but a merged concept that starts from individualism and juxtaposes it with the notion of collectivism.
Autonomy of Parts
In contemporary philosophy, "Object-Oriented Ontology" (OOO) proposes a non-human way of thinking, unshackling objects from the burden of dominant ideologies. Objects are withdrawn from human perception and thereby retain the autonomy and irreducibility of substance. This autonomy is based on the independence of the object itself: an individual object is not reliant on any other objects, including humans. Objects exist whether we are aware of them or not. They do not passively rely on human cognition to represent them, but stand self-evidently and equally in the world.
OOO enables a transition in architectural meaning from architecture as autonomous objects to interactive relationships between object and field, in which indirect relations between autonomous objects are observed. In an ecological sense, this shift can be understood as an irreducibility of the architectural relationship within the environment; in other words, an architectural object cannot be withdrawn from its relation to context. As Timothy Morton writes, "all the relations between objects and within them also count as objects", and David Ruy states in his recent essay that "the strange, withdrawn interaction between objects sometimes brings forth a new object." Ruy emphasises a relation between objects based on a dynamic composition of interacting individuals that is not a direct translation of nature.
In an object-oriented ontology, architecture is not merely an individual, complete object but fused parts. This can be translated into the mereological notion of shifting from wholeness to parts. As a starting point for a design methodology, extracting elements from buildings loosens the more rigid system found in a modernist framework by understanding architectural parts as autonomous and self-contained. Autonomous architectural elements are not reducible to mere components of a whole. This shift opens up an unprecedented territory in architectural discourse: autonomous architectural parts can now participate in a non-linear system with no single input or output, beginning or end, cause or result; architecture can be understood as part of a process.
Architecture in the Sharing Economy
The rise of the sharing economy in the past decade has provided alternatives to the traditional service economy, allowing people to share and monetise their private property and shifting thinking around privacy. In this context the following question arises: how could mereological architecture reveal new potentials beyond the inhabitation of buildings by engaging with the sharing economy? The question is made more pressing by the financialisation of the housing market and, simultaneously, the standardisation and declining quality of housing brought about by market deregulation. Furthermore, the bureaucracy of the planning system constrains the architectural design process by slowing development down and restricting innovation. In this context the reconfiguration of housing to emphasise collective space could be an alternative living model, alongside financial solutions such as shared ownership.
Decentralised Autonomous Organisation
The notion of a Decentralised Autonomous Organisation (DAO) seems fitting for furthering this discussion. In economic and technological terms, a DAO is a digital organisation based on blockchain technologies, offering a decentralised economic model. As an alternative to centralised economic structures within a capitalist system, a DAO benefits from blockchain technology as a digital tool for achieving a more transparent, accessible and sustainable economic infrastructure. This involves shifting decision-making away from centralised control and giving authority to the individual agents within the system.
In his Medium article "The Meaning of Decentralisation", Vitalik Buterin describes a decentralised system as a collective of individual entities that operate locally and self-organise, which supports diversity. Distribution enables a whole to be discretised into parts that interact in a dynamic computing system evaluating internal and external connectivity between parts. Through continuous interaction, autonomous discrete entities occasionally form chains of connectivity, and in this process the quantities of parts at junctions continuously change. Over time, patterns emerge according to how entities organise both locally and globally. Local patterns influence a collective internally, while global patterns operate between collectives – externally, in a field of patterns – similar to Stan Allen's notion of a "field condition". This creates global complexity while sustaining autonomy through local connectivity.
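This picture of autonomous entities linking through purely local decisions until global chains of connectivity emerge can be illustrated with a toy agent-based sketch. Everything here – the grid, the 0.3 linking probability, the union-find bookkeeping – is a hypothetical illustration of the general principle, not a system described in the text:

```python
import random

random.seed(7)  # reproducible run

# Discrete autonomous units on a 20 x 5 grid; no unit sees the global state.
W, H = 20, 5
N = W * H

parent = list(range(N))  # union-find: tracks emergent "chains of connectivity"

def find(i):
    # Find a unit's cluster representative, with path halving.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(i, j):
    # Merge the clusters of two linked units.
    parent[find(i)] = find(j)

def neighbours(i):
    # Purely local view: the four adjacent grid cells.
    x, y = i % W, i // W
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H:
            yield ny * W + nx

history = []
for step in range(5):
    # Each round, every unit locally and probabilistically decides
    # whether to connect to one randomly chosen neighbour.
    for i in range(N):
        j = random.choice(list(neighbours(i)))
        if random.random() < 0.3:
            union(i, j)
    clusters = len({find(i) for i in range(N)})
    history.append(clusters)
    print(f"round {step}: {clusters} clusters")
```

Because union-find only ever merges, the cluster count can only fall: local links accumulate into ever-larger collectives without any central coordination, which is the point of Buterin's description.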
Codividuality could be seen as a post-individualism in which a diverse, self-organising system withdraws power from capitalist authorities. The process of decentralisation characteristic of a DAO is key to codividuality, for it allows repeated patterns to form in a connected network. Architecturally, each codividual spatial unit consists of an open-ended programme and a self-contained structure, which means that architectural elements such as walls or slabs exist not for a specific function but serve a non-representational configuration.
Through computing codividual connectivity, autonomous spatial units start to overlap with other units, generating varying states of spatial use and non-linear circulation. What this distribution process offers is an expanded field of spatial iterations, using computation to respond to changes in the quantity or type of inhabitants. In this open-ended system, codividual parts give each spatial participant the capability to overcome the limitations of scalability through autonomous interconnection supported by a distributed database.
Unlike conventional planning in a modernist framework, codividual space does not aim for a modular system used to arrange programme, navigation or structure, but for a non-figurative, three-dimensional spatial sequence. The interconnections between parts and field enable scalability from the smaller scale of spatial layouts towards large-scale urban formations. This large-scale fusion of codividual space generates a more fragmented, heterogeneous and interconnected spatial order, balancing collective benefit and individual freedom. In this shift towards heterogeneity, codividuality opens a new paradigm of architecture in the age of the sharing economy.
H. C. Triandis, Individualism and Collectivism (Boulder: Westview Press, 1995).
"Mereological Thinking: Figuring Realities within Urban Form", Architectural Design, 89, 2 (2019), 30–37.
Z. Lin, Kenzo Tange and the Metabolist Movement (London: Routledge, 2010).
D. Udovicki-Selb, M. J. Ginzburg and I. F. Milinis, Narkomfin, Moscow 1928–1930 (Tübingen: Wasmuth Verlag, 2016).
House Vision (2019), http://house-vision.jp/, accessed 9 May 2019.
L. Bryant, The Democracy of Objects (Open Humanities Press, 2011).
T. Morton, The Ecological Thought (Cambridge: Harvard University Press, 2010).
D. Ruy, "Returning to (Strange) Objects", TARP Architecture Manual: Not Nature (Brooklyn, New York: Pratt Institute Graduate School of Architecture, 2015).
V. Buterin, "The Meaning of Decentralization" (2017), https://medium.com/@VitalikButerin/the-meaning-of-decentralization-a0c92b76a274, accessed 9 May 2019.
S. Allen and G. Valle, Field Conditions Revisited (Long Island City, NY: Stan Allen Architect, 2010).
This interview took place on April 11th, 2017 at the office of Herman Hertzberger in Amsterdam, with questions by Daniel Koehler.
Daniel Koehler: After all your years as a teacher, maybe it would be a good departure for conversation if you can tell us what is your favourite exercise for teaching architecture?
Herman Hertzberger: Well, my favourite exercise is making a housing environment where small children can live and play outside. This is an old-fashioned thing, but I am absolutely convinced that children should play in the streets in order to find out about the world and to learn about the good and the bad things that exist. I am afraid that in urbanism today you find high-rises, and the immense distance from the living unit to the street is a problem. Consequently, cities only have playgrounds with fences around them, where children are safe to play. But the world is not only about safety; the world is about finding out how far you can go in your life.
Can you tell us a little bit about how you began to communicate as a group during the beginning of structuralism?
We started a school! We had seminars for discussion, where everyone brought in something to discuss. And then we had the Forum editorial staff. There was Aldo van Eyck, Jacob Bakema and others who are less well known (Dick Apon, Joop Hardy, Jurriaan Schrofer and Gert Boon). There was an enormous amount of communication. Every Tuesday night we saw each other with no exception, where we discussed the next issue of the magazine.
And, what was the relevant media at the time you started to develop your ideas?
Magazines were very important. We used to have three or four Dutch magazines, two French, two English, one American, two Swiss, two Italian. They were all on the table. “Did you see that building? I think it is good.” And then we had a discussion. Today we still get some magazines, but today you get all the information from around the world in one click. That is fantastic, the possibilities today are just immense. It is more information, a lot of pieces of an enormous puzzle. But is it also possible to put it together? I hope the younger generation can.
The magazine Forum was for me a sort of postgraduate study. At that moment I started to see the work, the hands and eyes of real architects. That helped me to start thinking. And there were connections to many other architects from all over. There were conferences, and there you saw people. At Delft University, where I was teaching, we invited all the architects we were interested in. We are still doing this.
When one reads the current literature, one can draw two different issues. On the one hand a common critique of functionalism in architecture, and on the other hand, the influence of new ideas coming from sociology. Would you say that this enormous explosion of ideas and diversity of projects was a response to architectural problems or were these new concepts coming from other fields prescriptive to your projects?
First of all, there is nothing coming from sociology. I have little or no connection to sociology. Sociology is the science of human relations, but you do not need to go into this science as an architect. Architecture is a matter of using your eyes and ears to look into the world and see what needs to be done. But today, architecture is driven by algorithms and rules. All the rules say you should do this or that, or you are not supposed to do this. Architecture is then reduced to problem-solving. You must be aware of that mistake. Architecture is not problem-solving. Of course, you have to solve problems, but this is only one aspect of architecture. It starts to be architecture when it provides more freedom to people, opening the possibility that things get better than they were before.
Can you give an example?
A dwelling needs to have a balcony. Why? To let people go outside, and there are rules about the size of your balcony. Most architects think: well, I included a balcony. But they should base the form and dimensions of a balcony on the needs of daily life. Such as sitting in a corner without being seen by others or not being disturbed while reading your book yet with the possibility of having contact with your neighbour. Second, you may want to be able to have a meal with your family. You maybe want to have flowers and plants. In a way, this is culture. Make that list, and when you design a balcony, be sure that all the points you have listed have also been fulfilled. In this way you increase people’s freedom. Most balconies do not do that. On most balconies all you can do is sit. Most architects don’t think, they don’t look at what is going on. And then, of course, the developer says, “It should not cost more, so we have to make it small.” So I have rules independent of the developers. For me, it starts to become good when those rules are going to be met. This method works for every part of the building, from a dwelling, to a living unit, to a street, to the school. In a school, you can design where the black board in a classroom is going to be. And you have to think about what a school might be. I don’t need sociologists for that. Sometimes, sociologists can tell you some interesting things, but you have to think, and in the first place, look for yourself.
Your communal spaces are famous for their human scale, like the doorstep. I think that this down-scaling of the city to elements of a building enables you to design the building as an open system. For me, it seems you draw a difference in creating a building as a building and designing a building as a city.
For me, city, architecture, and building are very much related. Aldo van Eyck believed that making architecture is always making things more inside than before. Aldo van Eyck said, “Whatever you do, it is supposed to always increase the inside quality.” When you want to go outside, you go to the fields. There you have the horizon, you have the clouds and the openness. A city is for exchange – exchange of goods, of ideas. Cities are mostly based on trade, and on having a cinema, having shops, having communal things, being together.
Aldo van Eyck also claimed that the city should be a big house. I think that is a dangerous thing to say because the city is not the house where you are yourself, or where you are enclosed. The city should never be enclosed but always open, in connection with the whole world. It is the place where you see the airplanes flying above you. But it is an inside space in relation to the open field. And a building is, in fact, a small city. Make a building as a small city to have the emphasis lay on communication and exchange.
But most buildings are private territories with public corridors. How narrow can a public corridor be? It cannot be a centimetre larger, because this would cost money. Means of communication are considered extra. You can sell the dwellings but not the corridors. As a result, most buildings have very beautiful apartments and very small corridors. I am pleading for buildings where the corridors are streets. I try to put more emphasis on the communal spaces in a building.
When you consider a building as an open system, what role does the boundary between inside and outside have? Do you think that these open systems have an outside or do you think of them as endless? What is their relationship to the context and environment?
A city is not just buildings but the space in between the buildings as well. The edge of a building forms the space of the city. You have to conceive of the edge of a building not as an end where the outside starts; you must see it the other way around, as a wall in the interior space of the city. The idea of the building as city is to put buildings in such a relation that the space in between them is just as important. This is something that is completely lost. It is also considered nostalgic. But look at New York. In New York, you have these high skyscrapers, but you also have very nice streets. When I am in Manhattan, I feel quite enclosed. That is because of the very strict system of the grid, and the building lines by which the streets are defined, and the blocks in between are open.
In one of your articles [Open City, 2011], you rightly point out that most of today's housing projects consciously exclude communal spaces, focusing only on the assembly of private areas without any spatial linkages between them. Private areas are protected from one another rather than connected. A common – and I think dangerous – justification for such a design refers to changed economic circumstances and, most cynically, to the death of the welfare state. Would you say architecture is so dependent on economics?
Every square meter is supposed to generate a fee, so public space will be reduced to a minimum. Architecture has become business. And that makes the position of the architects to contribute to better spaces and towns very difficult.
But then architects are even more important.
Important as long as you are able to be aware of what sort of culture you are living in. I cannot give you the answer what to do. You have to explain and fight. But you need clients who believe in the architect. Things are very materialistic today. But there are also very interesting initiatives. For example, in Rotterdam, you have these old industrial halls which could be reused without high costs. Add a little paint, and it works. There are ways today that are contradicting this idea of architecture as economics. There is a lot for you to invent.
When I told a friend that we will visit you as one of the main protagonists of structuralism, his response was: ‘Wait a moment! Herman Hertzberger is not a structuralist; he is a humanist.’
Can you not be a structuralist and a humanist at the same time? Is this contradictory?
I think what my friend was pointing at is that there is a difference between structuralism as a style and structuralism as modus operandi, as a form of organisation and composition.
Style has to do with aesthetics, but aesthetics is a pitfall. Most architects think making something beautiful is all that architecture is about. But you can't make something beautiful; it is impossible. That doesn't work. What you can do is make a painting which is striking, which shows you something you never saw before, that makes you happy or fall in love with the painting – and then we decide this is a beautiful painting. But in architecture, don't spend energy on trying to make something beautiful. Make it work. Then you may hope that someone says it is beautiful. For instance, the composer Arnold Schönberg said, "Do not do what others consider beautiful, but just what is necessary to you." I like a building because it works. When someone asks if I think it is beautiful, I say: when you are in love, it is going to be beautiful. Beauty comes as a result. But you cannot say, now I am going to make it beautiful. Beauty is a pitfall for architects.
Structuralism means there are simple rules that enlarge the amount of free space that you can achieve. I took the grid as an example earlier. The very rigid system of the grid allows you to be more free in the blocks in between. All of the blocks can be different; some high, some low. It is an enormous mosaic of possibilities that is held together by the grid. When you know what rules you have to use, you can be creative. It is a misunderstanding that the one contradicts the other.
It is interesting that you describe a rule as a form of enclosure, as a form of an inside.
If something is not limited you create chaos. Rules prevent you from chaos, and within rules you can be creative. Noam Chomsky [the linguist] uses the words competence and performance. The structure of language is its competence, it is its capacity to express. And performance is what you are actually expressing with it. In language you have grammar, but every individual talks in his or her personal way using the same rules.
Would you say that you have a grammar and vocabulary then? Do you have certain elements that you use frequently? You were talking earlier about balconies and streets. In your work, are there elements that repeat structurally, which can re-emerge in different styles but with similar performance? Or do you begin each project with a new grammar?
I do not use the same grammar for every building. I could, but I want to try different things. There are many people who thought housing should be produced in a factory, like cars. It is such a simple idea. But it doesn't work, because every location has its own needs, whereas a car is the same everywhere. So you cannot use the same grammar. I use one grammar for a school and another for housing, for instance. Some things have a similar grammar, like how you make a door, which works in most cases.
Do you have a particular vocabulary of elements that reappear during your career in different articulations and styles?
Architecture should accommodate people and the things people are concerned with. I use this everywhere. To give you a simple example: when I make a column, most of the time I design it with a base for people to be able to sit on. This is, for me, an accommodating device. It always works. This sort of thing is universal in my opinion: the idea of accommodation. Another example is the handrail of a stair. I always make a handrail that guides you where to go, shaping the end of it in such a way that even without looking you have the feeling that this is the end of the stairs. Everything I do tries to consider how it works for people. However, the point is that it should be friendly to people, but not soft.
Friendly architecture! This is a wonderful conclusion. Thank you, Herman Hertzberger for sharing your time and thoughts with us.