Welcome to Prospectives.
Prospectives is an open-access online journal dedicated to the promotion of innovative historical, theoretical and design research around architectural computation, automation and fabrication technologies, published by B–Pro at The Bartlett School of Architecture, UCL. It brings the most exciting, cutting-edge exploration and research in this area onto a global stage. It also aims to generate cross-industry and cross-disciplinary dialogue, exchange and debate about the future of computational architectural design and theoretical research, linking academic research with practice and industry.
Featuring emerging talent and established scholars, as well as making all content free to read online, with very low and accessible prices for purchasing issues, Prospectives aims to unravel the traditional hierarchies and boundaries of architectural publishing. The Bartlett supports a rich stream of theoretical and applied research in computational design, theory and fabrication. We are proud to be leading this initiative via an innovative, flexible and agile website. Computation has changed the way we practice, and the theoretical constructs we use – as well as the way we publish.
Prospectives has been designed to be a part-automated, part-human, multiplicitous platform. You may come across things when using it that do not feel, well, quite human. You may not realise at first that you are looking at something produced by automation. And because every issue is unique yet sitting within a generative framework this may mean you see the automation behind Prospectives do things that humans may not do.
Furthermore, how you engage with Prospectives is largely left up to the reader. You can read our guest-curated issue, use the tags to generate your own unique issue – an ‘issue within an issue’ – or read individual articles. You can also suggest new tags to be adopted by articles. We hope this provokes new ways of thinking about the role that participation, digitisation and automation can play in architectural publishing. Prospectives is a work-in-progress, and its launch is the first step towards fulfilling a vision for new kinds of publishing platforms for architecture that play with, and provoke, the discourse on computation and automation in architectural design and theory research.
Issue 01: Mereologies
“Mereologies”, or the plural form of being ‘partly’, drives the explorations bundled in the first issue of Prospectives, guest curated by Daniel Koehler, Assistant Professor at University of Texas at Austin, previously a Teaching Fellow at The Bartlett School of Architecture from 2016 to 2019.
Today, architects can design directly with the plurality of parts that a building is made of, thanks to increased computational power. What are the opportunities when built space is computed part-to-part? Partly philosophy, computation, sociology and ecology, and partly architecture, each text – or “mereology” – contributes a particular insight on part relations, linking mereology to peer-to-peer approaches in computation, cultural thought and built space. First substantiated in his PhD at the University of Innsbruck, published in 2016 as The Mereological City: A Reading of the Works of Ludwig Hilberseimer (transcript), Daniel’s work on mereology and part-hood – as a nuanced interplay and blurring between theory and design – has been pivotal in preparing the ground for an emerging generation of architects interested in pursuing a new ethical and social project for the digital in architecture. The collection of writings curated here includes postgraduate architecture and urban design students (both his own and others), architecture theorists, designers, philosophers, computer scientists and sociologists. The interdisciplinary nature of this issue demonstrates how mereology as a subject area can further broaden the field of architecture’s boundaries. It also serves as a means of encapsulating a contemporary cultural moment by embedding that expanding field in core disciplinary concerns.
The contributions were informed by research and discussions in the Bartlett Prospectives (B-Pro) at The Bartlett School of Architecture, UCL London, from 2016 to 2019, culminating in an Open Seminar on mereologies, which took place on 24 April 2019 as part of the Prospectives Lecture Series in B-Pro. Contributors to this issue include: Jordi Vivaldi, Daniel Koehler, Giorgio Lando, Herman Hertzberger, Anna Galika, Hao Chen Huang, Sheghaf Abo Saleh, David Rozas, Anthony Alvidrez, Shivang Bansal and Ziming He.
Prospectives has been a work-in-progress for almost 10 years. The dream of Professor Frédéric Migayrou (Chair of School and Director of B–Pro at The Bartlett School of Architecture) since his arrival at The Bartlett in 2011, it is a project I became involved in when I joined the School a year later. It has been a labour of love and perseverance since. It is due to the fervent and ardent support of Frédéric, Professor Bob Sheil (Director of School) and Andrew Porter (Deputy Director of B–Pro) that this project received funding in 2018 to formalise the development of Prospectives. To the B–Pro Programme Directors Professor Mario Carpo, Professor Marcos Cruz, Roberto Bottazzi, Gilles Retsin and Manuel Jimenez: I am thankful for your guidance, advice and friendship, which have been paramount to this project. Colleagues such as Barbara Penner, Yeoryia Manolopoulou, Barbara Campbell-Lange, Matthew Butcher, Jane Rendell, Claire McAndrew, Clara Jaschke and Sara Shafei have all given me an ear (or a talking to!) at various stages when this project most needed it.
Finally, it is important to say that schools of architecture like The Bartlett have cross-departmental and cross-faculty teams who are often the ones who prepare the ground for projects such as Prospectives to be possible. The research, expertise and support of Laura Cherry, Ruth Evison, Therese Johns, Professor Penelope Haralambidou, Manpreet Dhesi, Professor Laura Allen, Andy O’Reilly, Gill Peacock, Sian Lunt and Emer Girling has been vital – thank you.
“One must turn the task of thinking into a mode of education in how to think.”
These words from the philosopher Martin Heidegger point towards new modes of thinking. As architects, we might recall Mario Carpo’s remark that huge amounts of data are available to everyone nowadays, yet most of it is underused. As this essay will argue, this new condition of Big Data, and the digital tools used to comprehend and utilise it, can trigger an entirely new way of thinking about architecture. It is both a way to open doors for testing, and an opportunity to look back into history and re-evaluate certain moments in new ways. As an example one can take Brutalism, which emerged as a post-war solution in the 1950s. It was a new mode of thinking about architecture, influenced by Le Corbusier’s Unité d’Habitation de Grandeur Conforme, Marseilles (1948–54), the Industrial Revolution and the age of the mechanical machine. Brutalism can be read as the representation of the building as a machine at that time. Luciana Parisi has expanded on this idea, writing that Brutalism can be considered as the start of thinking about architecture as a digital machine, having removed any notion of emotion from the architectural product, leaving a rough mass of materials and inhabitable structures. In Parisi’s sense, brutal architecture can then be read as a discrete system of autonomous architectural parts brought together with a set of rules: symmetry, asymmetry, scales, proportions, harmony, etc. These rules, materials and structures act autonomously, using collective behaviours to produce data. The data can then be translated into concrete compositional elements which form a building, a city or a whole territory. The adjacencies between each discrete compositional element create the relations between those parts.
The Building Thinking Machine
The building as a machine departs from Le Corbusier’s claim for a functional architecture. Today, the use of machine learning and artificial intelligence means that machines are no longer used only for making. They are thinking machines. This allows a new translation of Le Corbusier’s understanding of function, asking the questions: what if architecture acts as a mode of thinking? How would a building as a thinking machine perform?
The generation following Le Corbusier progressed the building machine. Reyner Banham linked the building machine to comfort and the environment, seeing the building as a kit of tools that provide comfort. In other studies, Banham proposed the building as a package which is totally enclosed and isolated from the external environment, referring to this as “the environmental bubble”. He proposed that surrounding the building with one thick layer that protected the internal space was the best solution to provide a well-tempered environment. Yet Banham presents a clear separation between the interior and exterior spaces which no longer matches the complexity of interior-exterior relationships at both urban and architectural scales.
Mereological Reading of Architectural Precedents
The different types of systems that provide a well-tempered environment inside the building cast the difference between inside and outside as the difference between a tempered and a non-tempered environment. Mereology, or the study of part-relations, can be used as a methodology to read a building in terms of its compositional aspects.
One historical example is the Rasoulian House (1904), which was designed to provide a state of comfort for its users throughout the year. A basic architectural element known as the wind catcher tower, or Malqaf, provided the building with a breeze. As Sarinaz Suleiman described, the Malqaf is a composition of architectural elements that work together to create air flow. These elements include walls, doors and rooms, as well as the basement and the courtyard, organised in a specific order, with proportions and orientation that create specific relationships between the inside and the outside.
The Malqaf is the first point at which air flow enters the building. It then travels down a shaft, which is the first interior space that the wind interacts with. The air continues to a second interior space through a window-like opening into a room, and is then moved through an opening in the room’s floor to a cellar space under the building. This third interior space is the coolest space in the building. The cellar is connected to the courtyard through an opening that facilitates air circulation and absorbs wind. For this to happen, two kinds of relationships need to exist: an exterior relation formed by the geometry of two elements, e.g. the height of the Malqaf and the width of the courtyard, which helps to create a large difference in air pressure; and an internal relation, which is controlled by the openings between the interior spaces and between interior and exterior spaces as well. Ventilation is not only a void space, but another level of interiority inside the building.
Another example of a complex ventilation system is a data centre building. Data centres usually produce vast thermal exhaust which requires constant air movement, requiring large depths to ceilings and floors which may be as big as the building itself. Servers are positioned in the room at a certain distance from each other. This distance is related to the temperature and the speed of air circulation. Higher temperatures inside the room are used to decrease air pressure and create a pressure difference that enables air to circulate naturally in the room. The path that the air travels gives the air the time it needs to cool down naturally.
Two thousand years ago, Vitruvius described wind, saying that “wind is a flowing wave of air with an excess of irregular movements. It is produced when heat collides with moisture, and the shock of the crash expels its force in a gust of air.” Vitruvius’ definition can be deconstructed into two parts, the first of which deals with the dominant wind direction and its relation to the outer envelope of the building. This concept was emphasised by Vitruvius’ example of the Octagon Marble Tower (1st century BC). The second part relates to the process of creating wind flow in nature. Vitruvius explains that air circulation occurs when two different air pressures encounter each other. The difference in air pressure is always the result of changes in temperature and moisture: high temperature heats up the air, causing low density and consequently a low-pressure area, while lower temperature creates a high-pressure area. This is the logic that has been followed in all passive ventilation systems throughout history. These systems tend to create two points with a large difference in pressure, connected by a path through the spaces that need to be ventilated. Air then accelerates along this path through the building, moving from the high-pressure area to the low-pressure area and creating air flow inside the building.
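The pressure logic Vitruvius describes can be sketched numerically. Below is a minimal model of the stack effect, assuming dry air behaving as an ideal gas at atmospheric pressure; the temperatures and shaft height are illustrative assumptions, not measurements from any building discussed in this text.

```python
# Warmer air is less dense, so a column of warm air exerts less pressure than
# a cool one of the same height; air then flows from the high-pressure (cool)
# side to the low-pressure (warm) side. All numbers are illustrative.

G = 9.81        # gravitational acceleration, m/s^2
P_ATM = 101325  # reference atmospheric pressure, Pa
R_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def air_density(temp_c: float) -> float:
    """Density of dry air at atmospheric pressure (ideal gas law)."""
    return P_ATM / (R_AIR * (temp_c + 273.15))

def stack_pressure_difference(temp_cool_c: float, temp_warm_c: float,
                              height_m: float) -> float:
    """Driving pressure across a column of height_m between a cool and a
    warm air mass; positive means flow from the cool to the warm side."""
    return (air_density(temp_cool_c) - air_density(temp_warm_c)) * G * height_m

# A shaded courtyard at 24 C against a sun-heated side at 38 C, over an 8 m shaft
dp = stack_pressure_difference(24.0, 38.0, 8.0)
print(f"driving pressure: {dp:.2f} Pa")  # positive: air moves toward the warm side
```

Even this toy model shows why both temperature contrast and height matter: doubling either roughly doubles the driving pressure, which is why wind catcher towers are tall.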
A traditional building from the Middle East can be taken as a case study for applying thermodynamic logic to create natural air circulation in a building. In the previous example of the Rasoulian House, the side that is exposed to the sun heats up and, consequently, air pressure there decreases. The geometry that is exposed to the sun creates shadowed areas inside and outside of the building. These shadowed areas are much colder and have high air pressure. Air circulates from the high-pressure to the low-pressure areas. That means air can move from a cooler courtyard to an upper space located above it. This air movement draws air from inside the building to fill the void in the courtyard that the high-pressure air leaves behind as it moves upwards. Due to the opening at the top of the shaft, air will enter the building to fill the void that the inner air has left behind as well. This air replacement creates the air circulation inside the building. The creation of wind is dependent on the design of the inner space and its relation to the outer space through openings. This means that, by closing and opening apertures, wind can be created or stopped; by changing some openings, the wind flow path can be changed; and wind speed can increase or decrease. This follows a logic of discrete, combinatorial air flow.
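This discrete, combinatorial reading can be sketched as a chain of spaces whose connecting openings act as on/off switches: a flow path exists only if every opening along the sequence is open. The space names and their order below are a simplified assumption based on the description of the Rasoulian House, not a survey of the actual building.

```python
# Air circulates only when the whole Malqaf-to-courtyard sequence is open;
# closing any one opening breaks the path and stops the wind. The sequence
# is a hypothetical simplification for illustration.

FLOW_PATH = ["malqaf", "shaft", "room", "cellar", "courtyard"]

def air_flows(open_openings: set) -> bool:
    """True only if every opening between consecutive spaces is open."""
    needed = {f"{a}->{b}" for a, b in zip(FLOW_PATH, FLOW_PATH[1:])}
    return needed <= open_openings  # subset test: all required openings open

all_open = {"malqaf->shaft", "shaft->room", "room->cellar", "cellar->courtyard"}
print(air_flows(all_open))                     # True: the wind "switch" is on
print(air_flows(all_open - {"room->cellar"}))  # False: one closed opening stops the flow
```

The point of the sketch is that the building computes with discrete states: each opening is a bit, and the wind is the output of their conjunction.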
Computation Ventilation on the Urban and Architectural Scale
The building can be seen as a machine for creating an environmental condition through compositional thinking. This way of thinking turns the building, in the case of the Malqaf, into a switch that can turn the air flow on and off. In this instance, the creation of wind is entirely dependent on a series of elements that are well-organised and ordered. From this combinatorial thinking, wind can be read as a form of pre-digital computation, considering the inside-outside sequences as what causes the air flow.
The order of inside-outside also plays an important role in disrupting air flow. A single element that has been extracted from a building can serve as an example. It is a corridor, but at the same time this element plays a crucial role in creating wind. The way that the walls are arranged produces a contrast between the inside and outside spaces. Moreover, the design and arrangement of the openings turns the corridor into a path for air. Taking this element as a discrete part, and rearranging its parts within the same local rules that have been set over the ventilation logic, another version of the element emerges. Following this same logic would give different versions of different elements. Furthermore, each version of each element retains its discreteness and can be upscaled. With this upscaling strategy, more complex interiors emerge.
By integrating an environmental aspect within the design process, a new type of building that embraces another wind geometry can be created. This provides an opportunity to design highly dense architectural forms that can ensure the qualities of the internal space. By nesting interiors, one can create different low- and high-pressure areas over inside-outside sequences.
This allows a rethinking of the inside-outside arrangement in the city according to what positive or negative sequences are created. For example, for more similar interiors, less contrast in air pressure needs to be produced; for more variation between the interiors, the contrast in air pressure needs to increase, and more air will flow. An air circulation concept can thus be used as a means to arrange both interior and exterior spaces in the building and in the city.
Achieving Banham’s Campfire
At an architectural scale the interior-exterior relation can also be managed by the building façade. The façade tends to be used to provide separation between indoor and outdoor spaces, as well as between a tempered and non-tempered environment, in order to achieve comfort. However, a new understanding of wind circulation can provide a well-tempered environment regardless of the façade. In other words, the façade here can be seen as the set of tools or elements that provide comfort and facilitate air circulation inside the building.
A façade needs to meet specific criteria in order to generate a difference in air pressure, just like the inside-outside arrangement at the city scale. Three design parameters can support this: the orientation of the elevation in relation to the sun; the number of layers needed to create more or less tempered areas; and the degree of translucency of the façade, which blocks or admits sunlight and in turn helps to reach the preferred temperature. The façade is no longer the envelope of the building; it is the set of layers responsible for providing comfort inside the building.
Indeed, thinking about architecture through architecture’s interiors can expose a low-tech computation that starts from a thermodynamic discreteness. This enables an understanding of spatial sequence that can support different levels of space in a building, and the notion of layers of building-in-buildings. If this concept is upscaled to the scale of the city, it could be an opportunity to study the kinds of patterns that mereology can create utilising environmental thinking. This means that a building, or even a city, could become an example of the campfire that Banham invoked many years ago.
 M. Heidegger, The End of Philosophy, trans. Joan Stambaugh (Cambridge University Press, 2003).
 M. Carpo, The Second Digital Turn: Design Beyond Intelligence, (Cambridge, Massachusetts: MIT Press, 2017).
 L. Parisi, “Reprogramming Decisionism,” e-flux, 85 (2017).
 Le Corbusier, “Eyes That Do Not See” in Towards a New Architecture, (London: The Architectural Press, 1927), 107.
 M. Carpo, “Excessive Resolution: Artificial Intelligence and Machine Learning in Architectural Design,” Architectural Record (2018), https://www.architecturalrecord.com/articles/13465-excessive-resolution-artificial-intelligence-and-machine-learning-in-architectural-design, last accessed 3 May 2019.
 R. Banham, “Machines A habiter,” The Architecture of the Well-tempered Environment (Chicago: University of Chicago Press, 1969).
 A. Varzi, “Mereology Then and Now”, Journal of Logic and Logical Philosophy, 24 (2015), 409-427.
 S. Suleiman, “Direct comfort ventilation: Wisdom of the past and technology of the future (wind-catcher),” Journal of Sustainable Cities and Society, 5, 1 (2012), 8-15.
 M. de Jong, “Air Circulation in Data Centres: rethinking your design” , Data Centre Knowledge, (2014), http://www.datacenterknowledge.com/archives/2014/11/27/air-circulation-in-data-centers-rethinking-your-design, last accessed 5 May 2019.
 M. P. Vitruvius, “First Principles and The Layout of Cities,” Ten Books on Architecture, ed. Ingrid D. Rowland (Cambridge: Cambridge University Press, 1999), 21-32.
 R. Banham, “The kit of parts: heat and light,” The Architecture of the Well-tempered Environment (Chicago: University of Chicago Press, 1969).
The design research presented here aims to develop a design methodology that can compute an architecture that participates within the new digital economy. As technology advances, the world needs to adapt quickly to each new advancement. Since the turn of the last century, technology has integrated itself within our everyday lives and deeply impacted the way in which we live. This relationship has been defined by T. M. Tsai et al. as “Online to Offline”, or “O2O” for short. O2O means defining transactions virtually while executing them physically, as platform-based companies like Uber, Airbnb and Groupon do. O2O allows impact on, or disruption of, the physical world to be made within the digital world. This has significantly affected economies around the world.
Paul Mason outlined in Postcapitalism: A Guide to Our Future (2015) that developments in technology and the rise of the internet have created a decline in capitalism, which is being replaced by a new socio-economic system called “Postcapitalism”. As Mason describes, the “technologies we’ve created are not compatible with capitalism […] once capitalism can no longer adapt to technological change”. Traditional capitalism is being replaced by the digital economy, changing the way products are produced, sold and purchased. There is a new type of good which can be bought or sold: the digital product. Digital products can be copied, downloaded and moved an infinite number of times. Mason states that it is almost impossible to produce a digital product through a capitalist economy due to the nature of the digital product. An example he uses is a program or software that can be changed throughout time and copied at little to no cost. The original producer of the product cannot recoup their cost as one can with a physical good, leading to traditional manufacturers losing income from digital products. With the increase in digital products, the economy must adapt.
In The Second Digital Turn (2017) Mario Carpo describes this phenomenon, stating that digital technologies are creating a new economy where production and transactions are done entirely algorithmically, and as a result are no longer time-consuming, labour intensive or costly. This leads to an economy which is constantly changing and adapting to the current status of the context in which it is in. Carpo describes the benefits of the digital economy as the following: “[…] it would appear that digital tools may help us to recreate some degree of the organic, spontaneous adaptivity that allowed traditional societies to function, albeit messily by our standards, before the rise of modern task specialisation.”
It is useful to look at the work of Kurt Gödel and his theorems for mathematical logic, which are the basis for computational logic. His first theorem concerns “axioms”: statements that a formal system assumes to be true without proof. The theorem states that “If axioms do not contradict each other and are ‘listable’ some statements are true but cannot be proved.” This means that any system based on mathematical statements, or axioms, cannot prove everything unless additional axioms are added to the list. From this Gödel derived his second theorem: a consistent system of axioms cannot prove its own consistency. To relate this to programming, axioms can be seen as similar to code, yet not everything can be proven from within a single system of code.
Alan Turing’s work on computable numbers is a result of these two theorems by Gödel. Turing was designing a rigorous notion of effective computability based on the “Turing Machine”. The Turing Machine was to process any given information based on a set of rules, or a programme the machine follows, provided by the user for a specified intention. The machine is fed with an infinitely long tape, divided into squares, which contains a sequence of information. The machine would “scan” a symbol, “read” the given rules, “write” an output symbol, and then move to the next symbol. As Turing described, the “read” process refers back to the rule set provided: the machine would look through the rules, find the scanned symbol, then proceed to follow the instructions for that symbol. The machine then writes a new symbol and moves to a new location, repeating the process over and over until the rule set tells it to halt and deliver an output. Turing’s theories laid down the foundation for the idea of a programmable machine able to interpret given information based on a given programme.
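The scan/read/write/move cycle described above can be sketched in a few lines. The rule table below is an illustrative example of my own (a machine that inverts a binary string), not one of Turing's original machines.

```python
# A minimal Turing machine: rules map (state, scanned symbol) to
# (symbol to write, head movement, next state). "_" is the blank symbol.

def run_turing_machine(rules, tape, state="start"):
    """Run until the machine enters the "halt" state; return the tape."""
    tape = dict(enumerate(tape))  # sparse tape over an unbounded index
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")                 # scan the current square
        write, move, state = rules[(state, symbol)]  # read the rule table
        tape[head] = write                           # write the output symbol
        head += move                                 # move to the next square
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example programme: invert every bit, halt on the blank symbol
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_turing_machine(invert, "10110"))  # -> 01001
```

The separation between the fixed mechanism (`run_turing_machine`) and the interchangeable rule table (`invert`) is exactly the point of Turing's design: one machine, many programmes.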
When applying computational thinking to architecture, it becomes evident that a problem based in the physical requires a type of physical computation. By examining the work of John von Neumann in comparison with Lionel Sharples Penrose, the difference between the idea of a physical computational machine and a traditional automata computation can be explored. In Arthur W. Burks’s essay “Von Neumann’s Self-Reproducing Automata” (1969) he describes von Neumann’s idea of automata, or the way in which computers think and the logic by which they process data. Von Neumann developed simple computer automata that functioned on the elementary switches “and”, “or” and “not”, in order to explore how automata could be created that are similar to natural automata, like cells and a cellular nervous system, making the process highly organic and giving it the ability to compute using physical elements and physical data. Von Neumann theorised a kinetic computational machine that would contain more elements than the standard automata, functioning in a simulated environment. As Burks describes, the elements are “floating on the surface, […] moving back and forth in random motion, after the manner of molecules of a gas.” As Burks states, von Neumann utilised this for “the control, organisational, programming, and logical aspects of both man-made automata […] and natural systems.”
However this poses issues around difficulty of control, as the set of rules is simple but incomplete. To address this, von Neumann experimented with the idea of cellular automata. Within cellular automata he constructed a series of grids that act as a framework for events to take place, with a finite list of states in which each cell can be. Each cell’s state has a relation to its neighbours; as states change in each cell, this affects the states of each cell’s neighbours. This form of automata constructs itself entirely on a gridded and highly strict logical system.
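Von Neumann's own cellular automaton used a grid of 29-state cells; as a minimal illustration of the same grid-and-neighbour logic, here is a one-dimensional, two-state automaton (Wolfram's elementary Rule 90), chosen by me as a compact stand-in rather than taken from von Neumann's work.

```python
# Each cell's next state is a function of its neighbours' current states:
# here, the XOR of the left and right neighbours (Rule 90, wrap-around edges).

def step(cells):
    """Compute the next generation of a 1D two-state cellular automaton."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0] * 15
row[7] = 1  # a single live cell in the middle of the grid
for _ in range(6):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Even this tiny system shows the key property von Neumann relied on: complex global patterns (here a branching triangle) emerge purely from local neighbour rules on a strict grid.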
Von Neumann’s concept for kinetic computation was modelled on experiments done by Lionel Sharples Penrose in 1957. Penrose experimented with the intention of understanding how DNA and cells self-replicate. He built physical machines that connected using hooks, slots and notches. Once connected the machines would act as a single entity, moving together, forming more connections and creating a larger whole. Penrose experimented with multiple types of designs for these machines. He began by creating a single shape from wood, with notches at both ends and an angled base, allowing the object to rock on each side. He placed these objects along a rail, and by moving the rail forwards and backwards the objects interacted and, at certain moments, connected. He designed another object with two identical hooks facing in opposite directions on a hinge. As one object would move into another, the hook would move up and interlock with a notch in the other element. This also allowed for the objects to be separated. If three of these objects were joined, and a fourth interlocked at the end, the objects would split into two equal parts. This enabled Penrose to create a machine which would self-assemble, then, when it was too large, divide, replicating the behaviours of cellular mitosis. These early physical computing machines would operate entirely on kinetic behaviour, encoding behaviours within the design of the machine itself, transmitting data physically.
Experimenting with Penrose: Physical Computation
The images included here document design research that takes Penrose’s objects into a physics engine and tests them at a larger scale. By modifying the elements to work within multiple dimensions, certain patterns and groupings can be achieved which were not accessible to Penrose. Small changes to an element, as well as to other elements in the field, affect how they connect and form different types of clusters.
In Figure X, there is a spiralling hook. Within the simulations the element can grow in size, occupying more area. It is also given a positive or negative rotation. The size of the growth represents larger architectural elements, which thus take more of the given space within the field. This leads to a higher density of clustering elements. The rotation of the spin provides control over which particular elements will hook together. Positive and positive rotations will hook, as will negative and negative ones, but opposite spins will repel each other as they rotate.
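The pairing rule described for the spiralling-hook element can be sketched as follows. The class, field and rule below are a hypothetical reconstruction for illustration, not code from the project itself.

```python
# Matching spin directions hook together; opposite spins repel.
from dataclasses import dataclass

@dataclass
class HookElement:
    size: float  # growth: larger elements occupy more of the field
    spin: int    # +1 or -1 rotation direction

def can_hook(a: HookElement, b: HookElement) -> bool:
    """Positive/positive and negative/negative rotations hook; opposites repel."""
    return a.spin == b.spin

# A small field of elements; only matching spins can form a cluster
field = [HookElement(1.0, +1), HookElement(2.5, +1), HookElement(1.2, -1)]
pairs = [(i, j) for i in range(len(field)) for j in range(i + 1, len(field))
         if can_hook(field[i], field[j])]
print(pairs)  # only the two positive-spin elements pair: [(0, 1)]
```

Sweeping `size` and `spin` across a larger field would then reproduce, in principle, the differing cluster densities the simulations explore.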
Through testing different scenarios, formations begin to emerge, continuously adapting as each object is moving. At a larger scale, how the elements will interact with each other can be planned for spatially. In larger simulations certain groupings can be combined together to create larger formations of elements connected through strings of hooked elements. This experimentation leads towards a new form of architecture referred to as “codividual architecture”, or a computable architectural space created using the interaction and continuous adaptation of spatial elements. The computation of space occurs when individual spaces fuse together, therefore becoming one new space indistinguishable from the original parts. This process continues, allowing codividual architecture of constant change and adaptability.
Codividual spaces can be further supported by utilising machine learning, which computes parts at the moment they fuse with other parts: the connection of spaces, the spaces that change, and how parts act as a single element once fused together. This leads to almost scaleless spatial types of infinite variation. Architectural elements move in a given field and act through encoded functions – connect, move, change and fuse. In contrast to what von Neumann was proposing, where the elements move randomly like gas molecules, these elements can move and join based on an encoded set of rules.
Within this type of system that merges together principles of von Neumann’s automata with codividuality, traditional automata and state machines can be radically rethought by giving architectural elements the capacity for decision making by using machine learning. The elements follow a set of given instructions but also have additional knowledge allowing them to assess the environment in which they are placed. Early experiments, shown here in images of the thesis project COMATA, consisted of orthogonal elements that varied in scale, creating larger programmatic spaces that were designed to create overlaps, and interlock, with the movement of the element. The design allowed for the elements to create a higher density of clustering when they would interlock in comparison to a linear, end-to-end connection.
This approach offers a design methodology which takes into consideration not only the internal programme, structure and navigation of elements, but also the environmental factors of where they are placed. Scale is undefined and unbounded: each part can be added to create new parts, with each new part created as the scale grows. Systems adapt to the contexts in which they are placed, creating a continuous changing of space and allowing for an understanding of the digital economics of space in real time.
 T. M. Tsai, P. C. Yang, W. N. Wang, “Pilot Study toward Realizing Social Effect in O2O Commerce Services,” eds. Jatowt A. et al., Social Informatics, 8238 (2013).
 P. Mason, Postcapitalism: A Guide to Our Future, (Penguin Books, 2016), xiii.
 Ibid, 163.
 M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, Massachusetts: MIT Press, 2017), 154.
 P. Millican, Hilbert, Gödel, and Turing [Online] (2019), http://www.philocomp.net/computing/hilbert.htm, last accessed May 2 2019.
 A. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 1, 2-42, (1937), 231-232.
 A. W. Burks, Von Neumann's Self-Reproducing Automata: Technical Report (Ann Arbor: The University of Michigan, 1969), 1.
 A. W. Burks, Essay on Cellular Automata, Technical Report (Urbana: The University of Illinois Press, 1970), 5.
 A. W. Burks, Essay on Cellular Automata, Technical Report (Urbana: The University of Illinois Press, 1970), 7-8.
 L. S. Penrose, “Self-Reproducing Machines,” Scientific American, 200 (1959), 105-114.
Parts, chunks, stacks and aggregates are the bits of computational architecture today. Why do mereologies – or buildings designed from part-to-whole – matter? All too classical, the roughness of parts seems nostalgic for a project of the digital that aims to dissolve building parts into a virtual whole. Yet if parts shrink down to computable particles and matter, and there exists a hyper-resolution of a close-to-infinite number of building parts, architecture would dissolve its boundaries and its capacity to frame social encounters. Within fluidity, and without the capacity to separate, architecture would not be an instrument of control. Ultimately, freed from matter, the virtual would transcend the real and form would finally be dead. Therein lies the prospect of a fluid, virtual whole.
The Claustrophobia of a City that Transcends its Architecture
In the acceleration from data to Big Data, cities have become more and more virtual. Massive databases have liquefied urban form. Virtual communication today plays freely across the material boundaries of our cities. In its most rudimentary form, virtuality lies within the digital transactions of numbers, interests and rents. Until a few years ago, financial investments in architectural form were equatable according to size and audience, e.g. as owner-occupied flats, as privately rented houses or as leaseholds. Today capital flows scatter freely across the city at the scale of the single luxury apartment. Beyond a certain threshold of computational access, data becomes big. By computing aggregated phone-signal patterns or geotagged posts, virtual cities can emerge from the traces of individuals. These hyper-local patterns are more representative of a city than its physical twin. Until recently, architecture staged the urban through shared physical forms: the sidewalk, lane or boulevard. Adjacent to cars, walkable for pedestrians or together as citizens, each form of being urban carried an ideology of a commons and, grounded in that, particular modes of encounter.
In contrast, a hyper-local urban transcends lanes and sidewalks. Detached from the architecture of the city, with no belonging left, urban speculation has withdrawn into the private sphere. Today, urban value is estimated by counting private belongings only, with claustrophobic consequences. An apartment held as a speculative investment displaces residents. The housing shortage in the big cities today is not so much a problem of a lack of housing as of vacant space, accessible not to residents but to the interests they hold in the hyper-urban. The profit from rent and the use of space itself is marginal compared to the profit an embodied urban speculation adds to the property. The possibility of mapping every single home as data not only adds interest to a home, like a pension, but literally turns a home into a pension – though not for its residents, but for those with access to resources. Currently, computing Big Data expands and optimises stakeholders’ portfolios by identifying undervalued building assets. However, the notion of ‘undervalued’ is not an accurate representation of assets.
Hyper-localities increase real estate’s value in terms of how their inhabitants thrive in a neighbourhood through their encounters with one another and their surrounding architecture. The residents themselves then unknowingly produce extra value. The undervaluing of an asset is the product of its residents, and like housework, is unpaid labour. In terms of the exchange of capital, additional revenue from a property is usually paid out as a return to the shareholders who invested in its value. Putting big data-driven real estate into that equation would then mean that they would have to pay revenues to their residents. If properties create surplus value from the data generated by their residents, then property without its residents has less worth and is indeed over-, but not under-, valued.
The city creates public revenue through vehicles such as governing the width of a street’s section or the height of a building; architecture’s role was to provide a stage on which that revenue could be created. For example, the Seagram Building (Mies van der Rohe and Philip Johnson, 1958) created a “public” plaza by setting back its envelope in exchange for a little extra height. By limiting form, architecture could create space for not only one voice, but many voices. Today, however, the city’s new parameters, hidden in the fluidity of digital traces, cannot be governed by the boundaries of architecture anymore. Already forty years ago, when the personal computer became available, Gilles Deleuze forecast that “Man is no longer man enclosed”. At that time, written as a “Postscript on the Societies of Control”, the fluid modulation of space seemed a desirable proposition. By liquefying enclosures, the framework of the disciplinary societies of Foucault’s writings would disappear. In modern industrial societies, Deleuze writes, enclosures were moulds for casting distinct environments, and in these vessels individuals became the masses of mass society. For example, inside a factory individuals were cast as workers, inside schools as students. Man without a cast and without an enclosure seemed freed from class and struggle. The freedom of an individual was interlinked with their transcendence of physical enclosures.
During the last forty years, framed by the relation between the single individual and the interior, architecture rightly aimed to dissolve the institutional forms of enclosure that represented social exclusion at their exterior. Yet in this ambition, alternative forms for the plural condition of what it means to be part of a city were not developed. Reading Deleuze further, a state without enclosures does not put an end to history either. The enclosures of control dissolve only to be replaced. Capitalism would shift to another mode of production: where industrial exchange bought raw materials and sold finished products, it would now buy the finished products and profit from the assembly of those parts. The enclosure is then exchanged for codes that mark access to information. Individuals would not be moulded into masses but considered as individuals: accessed as data, divided into proper parts for markets, “counted by a computer that tracks each person’s position enabling universal modulation.” Forty years on, Deleuze’s postscript has become the screenplay for today’s reality.
Hyper-Parts: Spatial Practices of Representations
A house is no longer just a neutral space, an enclosing interior where value is created, realised and shared. A home is the product of social labour; it is itself the object of production and, consequently, the creation of surplus value. By shifting from enclosure to asset, the big data-driven economy has also replaced the project behind modernism: humanism. Architecture today is post-human. As Rosi Braidotti writes, “what constitutes capital value today is the informational power of living matter itself”. The human being as a whole is displaced from the centre of architecture. Only parts of it, such as its “immanent capacities to form surplus-value”, are parts of a larger aggregation of architecture. Beyond the human, the hyper-city transcends the humane. A virtual city is freed from its institutions and constituent forms of governance. Economists such as Thomas Piketty describe in painstaking detail how data-driven financial flows undermine common processes of governance, whether urban, regional or national, in both speed and scale. Their analysis shows that property transactions shelled in virtual value-creation bonds are opaque to taxation. Transcending regulatory forms of governance, one can observe the increase of inequalities on a global scale. Comparing it to the extreme wealth accumulation at the end of the nineteenth century, Piketty identifies similar neo-proprietarian conditions today, seeing the economy shifting into a new state he coins “hypercapitalism”. From Timothy Morton’s “hyper-objects” to hypercapitalism, hyper replaces the Kantian notion of transcendence. It expresses not the absorption of objects into humanism, but their withdrawal. In contrast to transcendence, which subordinates things to man’s will, the hyper accentuates the despair of the partial worlds of parts – in the case of Morton in a given object, and in the case of Piketty in a constructed ecology.
When a fully automated architecture emerged, objects oriented towards themselves, and non-human programs began to refuse the organs of the human body. Just as the proportions of a data center are no longer walkable, the human eye can no longer look out of a plus-energy window, because it tempers the house, but not its user. These moments are hyper-parts: when objects no longer transcend into the virtual but despair in physical space. More and more, with increasing computational performance, following the acronym O2O (from online to offline), virtual value machines articulate physical space. Hyper-parts place spatial requirements. A prominent example is Katerra, the unicorn start-up promising to take over building construction using full automation. In its first year of running factories, Katerra advertises that it will build 125,000 mid-rise units in the United States alone. If this occurred, Katerra would take around 30% of the mid-rise construction market in the company’s local area. Yet its building platform consists of only twelve apartment types. Katerra may see the physical homogeneity as an enormous advantage as it increases the sustainability of its projects. This choice facilitates financial speculation, as the repetition of similar flats reduces the number of factors in the valuing of apartments and allows quicker monetary exchange, freed from many variables. Sustainability refers not to any materiality but to the predictability of its investments. Variability is still desired, but oriented towards finance and not to inhabitants. Beyond the financialisation of a home, digital value machines create their own realities purely through the practice of virtual operations.
Here one encounters a new type of spatial production: the spatial practice of representations. At the beginning of what was referred to as “late capitalism”, the sociologist and philosopher Henri Lefebvre proposed three spatialities that described modes of exchange through capitalism. The first mode, spatial practice, referred to a premodern condition which, through the use of analogies, interlinked objects without any form of representation. The second, representations of space, linked directly to production: the organic schemes of modernism. The third, representational spaces, expressed the conscious trade with representations, the politics of postmodernism, and their interest in virtual ideas above the pure value of production. Though not limited to three only, Lefebvre’s intention was to describe capitalism as “an indefinite multitude of spaces, each one piled upon, or perhaps contained within, the next”. Lefebvre differentiated the stages in terms of their spatial abstraction. Incrementally, virtual practices transcended from real-to-real to virtual-to-real to virtual-to-virtual. But today, decoupled from the real, a virtual economy computes physically within spatial practices of representations. Closing the loop, the real-virtual-real, or new hyper-parts, do not subordinate the physical into a virtual representation; instead, the virtual representation itself acts in physical space.
This reverses the intention of modernism, which was oriented towards an organic architecture representing the organic relationships of nature in geometric thought. The organicism of today’s hypercomputation projects geometric axioms at an organic resolution. What was once a representation, a geometry distant from human activity, now controls the preservation of financial predictability.
The Inequalities Between the Parts of the Virtual and the Parts of the Real
Beyond the human body, this new spatial practice of virtual parts today transcends the digital project that was limited to a sensorial interaction with space. This earlier understanding of the digital project reduced human activity to organic reflexes only, thus depriving architecture of the possibility of higher forms of reflection, thought and criticism. Often argued through links to phenomenology and Gestalt theory, the simplification of architectural form to sensual perception has little to do with phenomenology itself. Edmund Husserl, arguably the first phenomenologist, begins his work by considering the perception of objects, not as an end, but as a means to examine the modes of human thinking. In the Logical Investigations, Husserl shows that thought can build a relation to an object only after having classified it, and therefore partitioned it. By observing an object before considering its meaning, one classifies it, which means identifying it as a whole. Closer observations recursively partition objects into further parts, which again can be classified as different wholes. Husserl places parts before both thought and meaning.
Derived from aesthetic observations, Husserl’s mereology was the basis of his ethics, and therefore culminated in societal conceptions. In his later work, Husserl’s analysis is an early critique of the modern sciences. For Husserl, in their efforts to grasp the world objectively, the sciences have lost their role in enquiring into the meaning of life. In a double tragedy, the sciences also alienated human beings from the world. Husserl thus urged the sciences to recall that they ground their origins in the human condition, as for Husserl humanism was ultimately trapped in distancing itself further from reality.
One hundred years later, Husserl’s projections resonate in “speculative realism”. In what Levi Bryant coined a “strange mereology”, objects, their belongings and inclusions are increasingly strange to us. The term “strange” stages the surprise that one is only left with speculative access. However, ten years on, speculation is not distant anymore. That which transcends does not only lurk beyond the physical realm. Hyper-parts figure at ordinary scales today, namely housing, and by this transcend the human(e) occupation.
Virtual and physical space appear compositionally comparable: both consist of parts. Yet their parts do not behave alike. If physical elements belong to a whole, then they are also part of that to which their whole belongs. In less abstract terms, if a room is part of an apartment, the room is also part of the building to which the apartment belongs. Materially bound part-relationships are always transitive, hierarchically nested within each other. In virtual space, and in the mathematical models with which computers are structured today, elements can be included within several independent entities. A room can be part of an apartment, but it can also be part of a rental contract for an embassy. A room is then also part of a house in the country in which the house is located. But as part of an embassy, the room is at the same time part of a geographically different country, on an entirely different continent than the building that houses the embassy. Thus, for example, Julian Assange, rather than boarding a plane, only needed to enter a door on a street in London to land in Ecuador. With just a little set theory, in the virtual space of law, one can override the theory of relativity with ease.
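The asymmetry can be made concrete with a little set theory, following the embassy example. This is a minimal sketch with illustrative names, nothing more:

```python
# Physical parthood is transitive: a part belongs to every whole that
# contains its whole, forming a single chain of containment. Virtual
# inclusion is not so constrained: one element may be included in
# several independent wholes at once. All names are illustrative.

physical = {                       # child -> its one enclosing whole
    "room": "apartment",
    "apartment": "building",
    "building": "london",
}

def physical_wholes(part):
    """Walk the single chain of physical containment."""
    wholes = []
    while part in physical:
        part = physical[part]
        wholes.append(part)
    return wholes

virtual = {                        # element -> any number of wholes
    "room": {"apartment", "embassy_lease", "ecuador"},
}

print(physical_wholes("room"))     # ['apartment', 'building', 'london']
print(virtual["room"])             # includes 'ecuador' alongside the lease
```

The physical mapping admits exactly one chain of wholes per part; the virtual mapping is a free many-to-many relation, which is why a room in London can simultaneously be part of Ecuador.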
Parts are not equal. Physical parts belong to their physical wholes, whereas virtual parts can be included in physical parts but do not necessarily belong to their wholes. Far more parts can be included in a virtual whole than can belong to a real whole. When the philosopher Timothy Morton says that “the whole is always less than the sum of its parts”, he reflects the cultural awareness that reality breaks due to asymmetries between the virtual and the real. A science that sets out to imitate the world is constructing its own. The distance which Husserl spoke of is not a relative distance between a strange object and its observer, but a mereological distance, when two wholes distance each other because they consist of different parts. In its effort to reconstruct the world in ever higher resolution, modernism, and in its extension the digital project, has overlooked the fact that the relationship between the virtual and the real is not a dialogue. In a play of dialectics between thought and built environment, modernism understood design as a dialogue. Extending modern thought, the digital project has sought to fulfil the promise of performance: that a safe future could be calculated and pre-simulated in a parallel, parametric space. Parametricism, and more generally what is understood as digital architecture, stands not only for algorithms, bits and RAM but for the far more fundamental belief that in a virtual space one can rebuild reality. However, the higher the resolution at which science mimics the world, the more parts it adds to it.
The Poiesis of a Virtual Whole
The asymmetry between physical and virtual parts is rooted in Western classicism. In early classical sciences, Aristotle divided thinking into the trinity of practical action, observational theory and designing poiesis. Since the division in Aristotle’s Nicomachean Ethics, design is a part of thought and not part of objects. Design is thus a knowledge, literally something that must first be thought. Extending this contradiction to the real object, design is not even concerned with practice, with the actions of making or using, but with the metalogic of these actions, the in-between between the actions themselves, or the art of dividing an object into a chain of steps with which it can be created. In this definition, design does not mean to anticipate activities through the properties of an object (function), nor to observe its properties (materiality), but through the art of partitioning, structuring and organising an object in such a way that it can be manufactured, reproduced and traded.
To illustrate poiesis, Aristotle made use of architecture. No other discipline exposes so great a poetic gap between theory, activity and making. Architecture first deals with the coordination of the construction of buildings. As the architecture historian Mario Carpo outlines in detail, revived interest in classicism and the humanistic discourse on architecture began in the Renaissance with Alberti’s treatise: a manual that defines built space, and ideas about it, solely through words. Once thought and coded into words, the alphabet enabled the architect to physically distance themselves from the building site and the built object. Architecture as a discipline then does not start with buildings, but with the first instructions written by architects to delegate the building.
A building is then anticipated by a virtual whole that enables one to subordinate its parts. This is what we usually refer to as architecture: a set of ideas that pre-empt the buildings they comprehend. The role of the architect is to imagine a virtual whole drawn as a diagram, sketch, structure, model or any kind of representation that connotes the axes of symmetry and the transformations necessary to derive a sufficient number of parts from it. Architectural skill is then valued by the coherence between the virtual and the real, the whole and its parts, the intention and the executed building. Today’s discourse on architecture is the surplus of an idea. You might call it the autopoiesis of architecture – or merely a virtual reality. Discourse on architecture is a commentary on the real.
From the very outset, architecture distanced itself from the building, yet also aimed to represent reality. Virtual codes were never autonomous from instruments of production. The alphabet and the technology of the printing press allowed Alberti to describe a whole ensemble distinct from a real building. Coded in writing, printing allowed for theoretically infinite copies of an original design. Over time, the matrices of letters became the moulds of the modern production lines. However, as Mario Carpo points out, the principle remained the same. Any medium that incorporates and duplicates an original idea is more architecture than the built environment itself. Bound to a mould, innovation in architectural research could be valued in two ways: quantitatively, in its capacity to partition a building at increasing resolution; qualitatively, in its capacity to represent a variety of contents with the same form. By this, architecture faced the dilemma of having to design a reproducible standard that could partition as many different forms as possible in order to build non-standard figurations.
The dilemma of the non-standard standard mould is found in Sebastiano Serlio’s transcription of Alberti’s codes into drawings. In the first book of his treatise, Serlio introduces a descriptive geometry to reproduce any contour and shape of a given object through a sequence of rectangles. For Serlio, the skill of the architect is to simplify the given world of shapes further until rectangles become squares. This reduction finally enables the representation of physical reality in architectural space using an additive assembly of either empty or full cubes. By building a parallel space of cubes, architecture can be partitioned into a reproducible code. In Serlio’s case, architecture could be coded through a set of proportional ratios. From that moment on, however, stairs no longer consist only of steps; they must be built of invisible squares and cubes too.
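Serlio’s additive assembly of empty or full squares is, in effect, what would now be called an occupancy grid. A hypothetical sketch of such an encoding, here rasterising a circular contour (the circle and grid size are arbitrary choices for illustration):

```python
# A shape encoded Serlio-style: a contour approximated by a grid of
# empty (0) or full (1) squares. The grid resolution stands in for
# the architect's chosen square size.

def rasterise_circle(radius, size):
    """Mark each grid square whose centre falls inside the circle."""
    grid = []
    for j in range(size):
        row = []
        for i in range(size):
            # square centres, with the circle centred on the grid
            x = i - (size - 1) / 2
            y = j - (size - 1) / 2
            row.append(1 if x * x + y * y <= radius * radius else 0)
        grid.append(row)
    return grid

grid = rasterise_circle(radius=3.2, size=9)
for row in grid:
    print("".join("#" if c else "." for c in row))
```

The finer the grid, the closer the assembly of full squares approaches the contour – the same logic that, pushed to the resolution of dust, underlies the 3D-printed sand discussed below.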
Today, Serlio’s architectural cubes are rendered obsolete by 3D printed sand. By shrinking parts to the size of a particle of dust, any imaginable shape can be approximated by adding one kind of part only. 3D printing offers a non-standard standard, and with this, five hundred years of architectural development comes to an end.
Replicating: A Spatial Practice of Representations
3D printing dissolved existing partitioning parts into particles and dust. A 3D printer can not only print any shape but can also print at any place, at any time. The development of 3D printing was mainly driven by DIY hobbyists in the open-source community. One of the pioneering projects here is the RepRap project, initiated by Adrian Bowyer. RepRap is short for replicating rapid prototyper. The idea behind it is that if you can print any kind of object, you can also print the parts of the machine itself. This breaks with the production methods of the modern age. Since the Renaissance, designers have crafted originals and built moulds from them in order to print as many copies as possible. This also explains the economic valuation of the original and why authorship is so vehemently protected in legal terms. Since Alberti’s renunciation of drawings for a more accurate reproduction of his original idea through textual encoding, the value of an architectural work consisted primarily in the coherence of a representation with a building: a play of virtual and real. Consequently, an original representation that cast a building was valued more than its physical presentation. Architectural design was oriented towards reducing the amount of information needed to cast. This top-down compositional thinking of original and copy becomes obsolete with the idea of replication.
Since the invention of the printing press, the framework of how things are produced has not changed significantly. However, with a book press, you can press a book, but with a book, you can’t press a book. Yet with a 3D printer, you can print a printer. A 3D printer does not print copies of an original, not even in endless variations, but replicates objects. The produced objects are not duplicates because they are not imprints that would be of lower quality. Printed objects are replicas, objects with the same, similar, or even additional characteristics as their replicator.
A 3D printer is a groundbreaking digital object because it manifests the foundational principle of the digital – replication – at the scale of architecture. The autonomy of the digital is based not only on the difference between 0 and 1 but on the differences in their sequencing. In the mathematics of the 1930s, the modernist project of a formal mimicry of reality collapsed with Gödel’s proof of the necessary incompleteness of all formal systems. Mathematicians then understood that perhaps far more precious knowledge could be gained if one learned to distance oneself from its production. The circle of scientists around John von Neumann, who developed the basis of today’s computation, departed from one of the smallest capabilities in biology: to reproduce. Bits, as a concatenation of simple building blocks with the integrated possibility of replication, made it possible, just by sequencing links, to build first logical operations, then programs, and to connect those programs into today’s artificial networks. Artificial intelligence is artificial, but it is also alive intelligence.
To this day, computerisation, not computation, is at work in architecture. By pursuing the modern project of reconstructing the world as completely as possible, the digital project computerised a projective cast in high resolution. Yet this was done without transferring the fundamental principles of interlinking and replication to the dimensions of built space.
From Partitioning to Partaking
The printing press depends on a mould to duplicate objects. The original mould was far more expensive to manufacture than its copies, so the casting of objects had to bundle available resources. This required high investment to start production, leading to an increasing centralisation of resources to scale the mass fabrication of standard objects on an assembly line. Contrarily, digital objects do not need a mould. Self-replication, as provided by 3D printing, means that resources do not have to be centralised. In this, digital production shifts to distributed manufacturing.
Independent of any mould, digital objects as programs reproduce themselves seamlessly at zero marginal cost. As computation progresses, a copy has less and less value. Books, music and films fill fewer and fewer shelves because owning a copy no longer has value when they are ubiquitously available online. And the internet does not copy; it links. Although not yet fully integrated into its current TCP/IP protocol, the basic premise of hyperlinking is that linked data adds value. Links refer to new content, further readings, etc. With a close-to-infinite possibility to self-reproduce, the number of objects that can be delegated and repeated becomes meaningless. What then counts is the hyper-: the difference in kind between data, programs and, eventually, building parts. In his identification of the formal foundations of computation, the mathematician Nelson Goodman pointed out that beyond a specific performance of computation, difference, and thus value, can only be generated when a new part is added to the fusion of parts. What is essential for machine intelligence is the dimensionality of its models, e.g. the number of its parts. Big Data refers less to the amount of data than to the number of dimensions of data.
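Goodman’s observation – that difference arises only when a part new in kind joins the fusion – can be illustrated with sets. This is a loose reading for illustration, not Goodman’s own calculus of individuals, and the part names are invented:

```python
# In a fusion of parts modelled as a set, repeating an existing part
# adds nothing; only a part new in kind changes the whole.

fusion = {"wall", "slab", "column"}

same = fusion | {"column"}         # repeat an existing part
new = fusion | {"sensor"}          # add a part new in kind

print(same == fusion)              # True: no difference, no new value
print(new == fusion)               # False: a new dimension is added
```

On this reading, copying contributes nothing to the whole, while linking in a different kind of part expands its dimensionality – the sense in which Big Data counts dimensions rather than volume.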
With increasing computation, architecture shifted from an aesthetic of smoothness, which celebrated the mastership of an infinite number of building parts, to roughness. Roughness demands to be thought (brute). The architecture historian Mario Carpo is right to frame this as nostalgic, as “digital brutalism”. Like brutalism, which wanted to stimulate thought, digital roughness aims to extend spatial computability, the capability to extend thinking, and the architecture of a computational hyper-dimensionality. Automated intelligent machines can accomplish singular goals but are alien to common reasoning. Limited to a ratio of a reality, a dimension, a filter or a perspective, machines obtain partial realities only. Taking them as the whole excludes those who are not yet included and that which cannot be divided: the absolute of being human(e).
A whole economy evolved from the partial particularity of automated assets ahead of the architectural discipline. It would be a mistake to understand the ‘sharing’ of the sharing economy as having something “in common”. On the contrary, computational “sharing” does not partition a common use, but enables access to multiple, complementary value systems in parallel.
Cities now behave more and more like computers. Buildings are increasingly automated. They use fewer materials and can be built in a shorter time, at lower costs. More buildings are being built than ever before, but fewer people can afford to live in them. The current housing crisis has unveiled that buildings no longer necessarily need to house humans or objects. Smart homes can optimise material, airflow, temperature or profit, but they are blind to the trivial.
It is a mistake to compute buildings as though they are repositories or enclosures, no matter how fine-grained their resolution. The value of a building is no longer derived only from the amount of rent for a slot of space, but from its capacities to partake. By this, the core function of a building changes from inhabitation to participation. Buildings no longer frame and contain: they bind, blend, bond, brace, catch, chain, chunk, clamp, clasp, cleave, clench, clinch, clutch, cohere, combine, compose, connect, embrace, fasten, federate, fix, flap, fuse, glue, grip, gum, handle, hold, hook, hug, integrate, interlace, interlock, intermingle, interweave, involve, jam, join, keep, kink, lap, lock, mat, merge, mesh, mingle, overlay, palm, perplex, shingle, stick, stitch, tangle, tie, unite, weld, wield and wring.
In daily practice, BIM models do not highlight resolution but linkages, integration and collaboration. With further computation, distributed manufacturing, automated design, smart contracts and distributed ledgers, building parts will literally compute the Internet of Things and eventually our built environment, peer-to-peer, or better, part-to-part – via the distributive relationships between their parts. For the Internet of Things, what else should be its hubs besides buildings? Part-to-part habitats can shape values through an ecology of linkages, through a forest of participatory capacities. So, what if we can participate in the capacities of a house? What if we no longer have to place every brick, if we no longer have to delegate structures, but rather let parts follow their paths and take their own decisions, and let them participate amongst us together in architecture?
 S. Kostof, The City Assembled: The Elements of Urban Form Through History (Boston: Little, Brown and Company, 1992).
 J. Aspen, “Oslo – the Triumph of Zombie Urbanism,” in E. Robbins, ed., Shaping the City (New York: Routledge, 2004).
 The World Bank actively promotes housing as an investment opportunity for pension funds, see: The World Bank Group, Housing finance: Investment opportunities for pension funds (Washington: The World Bank Group, 2018).
 G. M. Asaftei, S. Doshi, J. Means, S. Aditya, “Getting ahead of the market: How big data is transforming real estate”, McKinsey and Company (2018).
 G. Deleuze, “Postscript on the societies of control,” October, 59: 3–7 (1992), 6.
 Ibid, 4.
 Ibid, 6.
 R. Braidotti, Posthuman Knowledge (Medford, Mass: Polity, 2019).
 T. Piketty, Capital and Ideology (Cambridge, Mass: Harvard University Press, 2020).
 A. McAfee, E. Brynjolfsson, Machine, platform, crowd: Harnessing our digital future (New York: W.W. Norton & Company, 2017).
 H. Lefebvre, The Production of Space (Oxford: Basil Blackwell, 1991), 33.
 Ibid, 8.
 E. Husserl, Logische Untersuchungen: Zweiter Teil. Untersuchungen zur Phänomenologie und Theorie der Erkenntnis, trans. "Logical Investigations: Part Two, Investigations into the Phenomenology and Theory of Knowledge" (Halle an der Saale: Max Niemeyer, 1901).
 E. Husserl, Cartesianische Meditationen und Pariser Vortraege. trans. "Cartesian meditations and Parisian lectures" (Haag: Martinus Nijhoff, Husserliana edition, 1950).
 L. Bryant, The Democracy of Objects (Ann Arbor: University of Michigan Library, 2011).
 T. Morton, Being Ecological (London: Penguin Books Limited, 2018), 93.
 Aristotle, Nicomachean Ethics 14, 1139 a 5-10.
 M. Carpo, Architecture in the Age of Printing (Cambridge, Mass: MIT Press, 2001).
 M. Carpo, The Alphabet and the Algorithm (Cambridge, Mass: MIT Press, 2011).
 F. Migayrou, Architectures non standard (Paris: Editions du Centre Pompidou, 2003).
 S. Serlio, V. Hart, P. Hicks, Sebastiano Serlio on architecture (New Haven and London: Yale University Press, 1996).
 R. Jones, P. Haufe, E. Sells, P. Iravani, V. Olliver, C. Palmer, A. Bowyer, "RepRap – the Replicating Rapid Prototyper," Robotica 29, 1 (2011), 177–91.
 A. W. Burks, Von Neumann's self-reproducing automata: Technical Report (Ann Arbor: The University of Michigan, 1969).
 R. Evans, The Projective Cast: Architecture and Its Three Geometries (Cambridge, Massachusetts: MIT Press, 1995).
 N. Gershenfeld, “How to make almost anything: The digital fabrication revolution,” Foreign Affairs, 91 (2012), 43–57.
 J. Rifkin. The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (New York: Palgrave Macmillan, 2014).
 B. Bratton, The Stack: On Software and Sovereignty (Cambridge, Massachusetts: MIT Press, 2016).
 J. Lanier, Who Owns the Future? (New York: Simon and Schuster, 2013).
 H. S. Leonard, N. Goodman, "The calculus of individuals and its uses," The Journal of Symbolic Logic, 5, 2 (1940), 45–55.
 P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (London: Penguin Books, 2015).
 M. Carpo, “Rise of the Machines,” Artforum, 3 (2020).
“…the rigour of the architecture is concealed beneath the cunning arrangement of the disordered violences…”
This essay investigates the potential of codividual sympoiesis as a mode of thinking that overlaps ecological concepts with economics, contemporary philosophy and advanced research in computation and digital architecture. By extending Donna Haraway’s argument of “tentacular thinking” into architecture, it lays emphasis on a self-organising and sympoietic approach to architecture. Shifting focus from object-oriented thinking to parts, it uses mereology, the study of part-hoods and compositions, as a methodology to understand a building as being composed of parts.
It examines the limits of autopoiesis as a system and conceptualises a new architectural computing system that embraces spatial codividuality and sympoiesis as necessities for an adaptive and networked existence sustained through continued complex interactions among its components. It propagates codividual sympoiesis as a model for continuous discrete computation and automata, relevant in the present times of distributed and shared economies.
A notion of fusing parts is established to scale up the concept and to analyse the assemblages created over a steady sympoietic computational process, guided by mereology and the discrete model. This gives rise to new conceptions of space, with a multitude of situations offered by the system at any given instant. These sympoietic inter-relations between the parts can be used to steadily produce new relations and spatial knottings, going beyond the most limiting aspect of autopoiesis: its tendency only to reproduce similar patterns of relations.
This essay extends the conceptual idea of tentacular thinking, propagated by Donna Haraway, into architecture. Tentacular thinking, as Haraway explains, is an ecological concept, a metaphorical account of a nonlinear, multiple, networked existence. It elaborates on the biological idea that “we are not singular beings, but limbs in a complex, multi-species network of entwined ways of existing.” Haraway, being an ecological thinker, leads this notion of tentacular thinking to the idea of poiesis, meaning the process of growth or creation, and brings into discussion several ecological organisational concepts based on self-organisation and collective organisation, namely autopoiesis and sympoiesis. The essay propagates the notion that architecture can evolve and change within itself: be more sympoietic rather than autopoietic, more connected and intertwined.
With the advent of distributed and participatory technologies, tentacularity offers a completely new formal thinking, one in which there is a shift from the object towards the autonomy of parts. This shift towards part-thinking raises the question of how a building can be understood not as a whole, but on the basis of the inter-relationships between its composing parts. It can be understood as a mereological shift from global compositions to part-hoods and fusions triggering compositions.
A departure from simplified whole-oriented thinking, tentacular thinking comes about as a new perspective, an alternative to traditional ideologies and thinking processes. In the present economic and societal context, within a decentralised, autonomous and more transparent organisational framework, stakeholders function like multiple players forming a cat’s cradle, a phenomenon which could be understood as sympoietic. With the increase of direct exchange in architecture, especially with the rise of blockchain and of distributed platforms such as Airbnb and Uber, such participatory concepts push for new typologies and real estate models such as co-living and co-working spaces.
Fusion of Parts: Codividuality
In considering share-abilities and cooperative interactions between parts, the notions of a fusing part and a fused part emerge, giving rise to a multitude of spatial possibilities. Fusing parts fuse together to form a fused part which, in turn, behaves as another fusing part, performing further fusions with other fusing parts to form larger fused parts. The overlaps and the various assemblages of these parts gain relevance here, and this is what codividuality is all about.
As Haraway explains, it begins to matter “what relations relate relations.” Codividual comes about as a spatial condition that offers cooperative, co-living, co-working, co-existing living conditions. In the mereological sense, codividuality is about how fusing parts can combine to form a fused part, which in turn, can combine to form a larger fused part and so on. Conceptually, it can be understood that codividuality looks into an alternative method for the forming and fusing of spatial parts, thereby evolving a fusion of collectivist and individualist ideologies. It evolves as a form of architecture that is created from the interactions and fusion of different types of spaces to create a more connected and integrated environment. It offers the opportunity to develop new computing systems within architecture, allowing architectural systems to organise with automaton logic and behave as a sympoietic system. It calls for a rethinking of automata and computation.
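The recursive fusion described above can be sketched as a small data structure. The following is our own minimal, hypothetical illustration in Python, not a model from the essay: a part is either atomic or a fusion of parts, and every fused part remains open to further fusion.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Part:
    """A part is either atomic or a fusion of other parts."""
    name: str
    parts: tuple = field(default_factory=tuple)

    def fuse(self, other: "Part") -> "Part":
        # The fused part is itself a part, open to further fusion.
        return Part(f"({self.name}+{other.name})", (self, other))

    def leaves(self):
        """All atomic parts contained in this (possibly fused) part."""
        if not self.parts:
            return [self]
        return [leaf for p in self.parts for leaf in p.leaves()]

a, b, c = Part("a"), Part("b"), Part("c")
ab = a.fuse(b)      # a fused part...
abc = ab.fuse(c)    # ...which acts as a fusing part again
print(abc.name)                         # ((a+b)+c)
print([p.name for p in abc.leaves()])   # ['a', 'b', 'c']
```

The fused part `abc` is no longer understood as two parts but as one, which is exactly the inseparability the codividual condition describes.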
Codividual can be perceived as a spatial condition allowing for spatial connectivities and, in the mereological sense, as a part composed of parts; a part and its parts. What is crucial is the nature of the organisation of these parts. An understanding of the meaning and history of the organisational concepts of autopoiesis and sympoiesis brings out this nature.
Autopoiesis: Towards Assemblages of Parts
The concept of autopoiesis stems from biology. A neologism introduced by Humberto Maturana and Francisco Varela in 1980, autopoiesis highlights the self-producing nature of living systems. Maturana and Varela defined an autopoietic system as one that “continuously generates and specifies its own organisation through its operation as a system of production of its own components.” A union of the Greek terms – autos, meaning “self”, and poiesis, meaning “creation” or “production” – autopoiesis came about as an answer to questions arising in the biological sciences pertaining to the organisation of living organisms. Autopoiesis was an attempt to resolve the confusion between biological processes that depend on history, such as evolution and ontogenesis, and those that are independent of history, like individual organisation. It questioned the organisation of living systems that makes them a whole.
Varela et al. identified autonomy as the characteristic phenomenon arising from an autopoietic organisation, the product of a recursive operation. They described an autopoietic organisation as a unity: a system with an inherently invariant organisation. Autopoietic organisation can be understood as a circular organisation, a system that is self-referential and closed. Jerome McGann, on the basis of his interpretation of Varela et al., described an autopoietic system as a “closed topological space, continuously generating and specifying its own organisation through its operation as a system of production of its own components, doing it in an endless turnover of components”.
What must be noted here is that the computational concept of self-reproducing automata is classically based on an understanding of a cell and its relation to the environment. This is akin to the conceptual premise of autopoiesis: the recursive interaction between the structure and its environment, which forms the system. Both concepts start with a biological understanding of systems and then extend it. A direct link can be observed between the work of von Neumann and that of Maturana and Varela. Automata, therefore, can be seen as autopoietic systems.
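As a hedged aside, the closure being described can be made concrete with the simplest member of the automata family, an elementary cellular automaton (this is an illustration of the general principle, not von Neumann's actual self-reproducing construction): each generation is produced entirely from the previous one by a fixed internal rule, with no external input.

```python
def step(cells, rule=110):
    """One generation of an elementary cellular automaton.
    Each cell's next state depends only on itself and its two
    neighbours -- the system recursively produces its own next
    configuration from a closed rule table."""
    n = len(cells)
    out = []
    for i in range(n):
        # wrap around: the system is closed, with no outside input
        nbhd = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> nbhd) & 1)
    return out

cells = [0] * 15
cells[7] = 1                     # a single live cell
for _ in range(5):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Rule 110 is chosen only because it is a well-known rule capable of complex behaviour; any rule number 0–255 defines an equally closed, self-referential system.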
The sociologist Niklas Luhmann carried this concept into the domain of social systems. The theoretical basis of his social systems theory is that all social events depend on systems of communication. Delving into the history of societal differentiation, Luhmann observes that the organisation of societies is based on functional differentiation. A “functionally differentiated society”, as he explains, comprises varying parallel functional systems that co-evolve as autonomous discourses. He discovers that each of these systems, through its own specific medium, evolves over time, following what Luhmann calls “self-descriptions”, bringing out a sense of autonomy in that respective system.
Following Maturana and Varela’s explanation, an autopoietic organisation may be viewed as a composite unity, where internal interactions form the boundary through preferential neighbourhood interactions, and not external forces. It is this attribute of self-referential closure that Luhmann adopts in his framework. This closure maintains the social systems within and against an environment, culminating in order out of chaos.
The Limits of Autopoietic Thinking
In contradistinction to Maturana and Varela’s proposition of autopoiesis, Beth Dempster proposed a new concept for self-organising systems. She argues that heuristics based on the analogy of living systems are often incongruous and lead to misleading interpretations of complex systems. Moreover, autopoietic systems tend to be homeostatic and development-oriented in nature. Being self-producing autonomous units “with self-defined spatial or temporal boundaries”, autopoietic systems exhibit centralised control and are consequently efficient. At the same time, such systems tend to develop patterns and become foreseeable. It is this development-oriented, predictable and bounded nature of autopoietic systems that poses a problem when such systems are scaled up.
Autopoietic systems follow a dynamic process that allows them to continually reproduce a similar pattern of relations between their components. This is also true for the case of automata. Moreover, autopoietic systems produce their own boundaries. This is the most limiting aspect of these concepts.
Autopoietic systems do not instigate the autonomy of parts, as they evolve on a prescribed logic. Instead, a more interesting proposition is one in which the interacting parts instigate a kind of feedback mechanism within the parts, leading to a response that triggers another feedback mechanism, and so on. Mario Carpo’s argument that in the domain of the digital, every consumer can be a producer, and that the state of permanent interactive variability offers endless possibilities for aggregating the judgement of many, becomes relevant at this juncture. What holds true in the context of autopoiesis is Carpo’s argument that fluctuations decrease only at an infinitely large scale, when the relations converge ideally into one design.
In the sympoietic context, however, this state of permanent interactive variability Carpo describes is an offer of the digital to incorporate endless externalised inputs. The need for sympoiesis comes in here. Sympoiesis maintains a form of equilibrium or moderation all along, but also, at the same time, remains open to change. The permanent interactive variability not only offers a multitude of situations but also remains flexible.
The limits of autopoietic thinking form the basis of Dempster’s argument. In contradistinction to autopoiesis, she proposes a new concept, theorised as an “interpretation of ecosystems”, which she calls sympoietic systems. Literally, sympoiesis means “collective creation or organisation”. A neologism introduced by Dempster, the term stems from the Ancient Greek “σύν (sún, “together” or “collective”)” and “ποίησις (poíesis, “creation, production”)”. As Dempster explains, these are “collectively producing systems, boundaryless systems.”
Sympoietic systems are boundary-less systems, set apart from the autopoietic by their “collective, amorphous qualities”. Sympoietic systems do not follow a linear trajectory and do not have any particular state. They are homeorhetic, i.e., dynamical systems that return to a trajectory rather than to a particular state. Such systems are evolution-oriented in nature and have the potential for surprising change. As a result of the dynamic and complex interactions among components, these systems are capable of self-organisation. Sympoietic systems, as Donna Haraway points out, “decentralise control and information”, which get distributed over the components.
Sympoiesis can be understood simply as an act of “making-with”. The notion of sympoiesis gains importance in the context of ecological thinking. Donna Haraway points out that nothing reproduces or makes itself alone, and therefore nothing is really absolutely autopoietic or self-organising. Sympoiesis reflects the notion of “complex, dynamic, responsive, situated, historical systems.” As Haraway explains, “sympoiesis enlarges and displaces autopoiesis and all other self-forming and self-sustaining system fantasies.”
Haraway describes sympoietic arrangements as “ecological assemblages”. In the purview of architecture, sympoiesis brings out the notion of an architectural assemblage growing over sympoietic arrangements. Though sympoiesis is an ecological concept, what carries over into architecture is that parts need not be strictly bounded; they develop ethics and synergies among each other. In sympoietic systems, components strive to create synergies amongst themselves through cooperation and feedback mechanisms. It is the linkages between the components, not the boundaries, that take centre stage in a sympoietic system. Extrapolating the notion of sympoiesis into the realm of architecture, these assemblages can be conceived in Haraway’s words as “poly-spatial knottings”, held together “contingently and dynamically” in “complex patternings”. What become critical are the intersections, overlaps and areas of contact between the parts.
Sympoietic systems strategically occupy a niche between allopoiesis and autopoiesis, the two concepts proposed by Maturana and Varela. The three systems are differentiated by varying degrees of organisational closure. Maturana and Varela elaborate a binary notion of organisationally open and closed systems. Sympoiesis, as Dempster explains, steps in as a system that depends on external sources but at the same time limits these inputs in a “self-determined manner”. It is neither closed nor open; it is “organisationally ajar”. These systems must, however, be understood only as idealised sketches of particular scenarios. No real system should be expected to adhere strictly to either description; rather, systems lie on a continuum with the two idealised situations as its extremes.
It is this argument that is critical. In the context of architecture and urban design, what potentially fits is a hybrid model that lies on the continuum between autopoiesis and sympoiesis. While autopoiesis can guide the arrangement or growth of the system at the macro level, sympoiesis must step in to trigger a feedback, or circular, mechanism within the system in response to externalities. What can be envisaged is a system in which the autopoietic power constantly attempts to optimise the system towards forming a boundary, while the sympoietic power simultaneously pushes it towards a more networked, decentralised growth and existence: together, the two powers push the system towards an equilibrium.
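To make the continuum tangible, here is a deliberately naive toy simulation, entirely our own illustration rather than a model from the literature: each part feels an “autopoietic” pull toward the group’s own centroid (boundary-forming closure) and a “sympoietic” pull toward an external input, and the two weights decide where on the continuum the system settles.

```python
import random

def simulate(parts, externals, w_auto=0.5, w_sym=0.5, steps=100):
    """Toy hybrid dynamic: an 'autopoietic' pull toward the group's
    own centroid competes with a 'sympoietic' pull toward external
    inputs; the weights set where the system settles."""
    for _ in range(steps):
        cx = sum(p[0] for p in parts) / len(parts)
        cy = sum(p[1] for p in parts) / len(parts)
        new = []
        for (x, y), (ex, ey) in zip(parts, externals):
            x += w_auto * (cx - x) * 0.1 + w_sym * (ex - x) * 0.1
            y += w_auto * (cy - y) * 0.1 + w_sym * (ey - y) * 0.1
            new.append((x, y))
        parts = new
    return parts

random.seed(0)
parts = [(random.random(), random.random()) for _ in range(6)]
externals = [(random.random() * 10, random.random() * 10) for _ in range(6)]

# pure autopoiesis: the parts collapse onto their own centroid
closed = simulate(parts, externals, w_auto=1.0, w_sym=0.0)
# 'organisationally ajar': parts stay open to external inputs
ajar = simulate(parts, externals, w_auto=0.5, w_sym=0.5)
```

With `w_sym = 0` the system converges to a single bounded cluster; mixing in the sympoietic weight keeps it spread across the external conditions while still cohering around a shared centroid.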
Towards Poly-Spatial Knottings
In sympoiesis, parts do not precede parts. There is nothing like an initial or a final situation. Parts begin to make each other through “semiotic material involution out of the beings of previous such entanglements”, or fused situations. In order to define codividuality and to identify differences, an understanding of classifying precedents is important. The first move is a simple shift from object-oriented thinking to parts-oriented thinking. Buildings are classified as having a dividual, individual or codividual character from the point of view of structure, navigation and programme.
Codividual is a spatial condition that promotes shared spatial connections, internally or externally, essentially portraying parts composed of parts, which behave as one fused part or multiple fused parts. The fused situations fulfil the condition for codividuality as the groupings form a new inseparable part – one that is no longer understood as two parts, but as one part, which is open to fuse with another part.
Delving into architectural history, one can see very few attempts by architects and urban designers at spatial integration by sympoietic means. However, a sympoietic drive can be seen in the work of the urban planner Sir Patrick Geddes. He was against the grid-iron plan for cities and practised an approach of “conservative surgery”, which involved a detailed understanding of the existing physical, social and symbolic landscapes of a site. For instance, in the plan for the city of Tel Aviv (1925–1929), Geddes stitches together the various nodes of the existing town, akin to assemblages, to form urban situations such as boulevards, thereby activating those nodes and the connecting paths.
Fumihiko Maki and Masato Ohtaka identify three broad collective forms, namely compositional form, megastructures and group forms. Maki underscores the importance of linkages and emphasises the need to make “comprehensible links” between discrete elements in urban design. He further explains that the urban is made from a combination of discrete forms and articulated large forms and is therefore a collective form, in the making of which “linking and disclosing linkage (articulation of the large entity)” are of primary importance. He classifies these linkages into operational categories on the basis of their performance between the interacting parts.
Building upon Maki and Ohtaka’s theory of “collective form”, it is useful to recognise that when the architecture of a building is treated as a separate entity, the consequence is an “inadequacy of spatial language to make meaningful urban environment.” Sympoiesis comes out through this notion of understanding the urban environment as an interactive fabric between the building and the context. Maki and Ohtaka also make the important observation that the evolution of architectural theory has been restricted to the building, and describe collective forms as a concept which goes beyond the building. Collective forms can have a sympoietic or an autopoietic nature, determined by their organisational principles. Sympoietic collective forms not only go beyond the building, but also weave a fabric of interaction with the context. Although a number of modern cases of collective forms exist, most traditional examples have evolved into collective forms over time, albeit unintentionally.
The Corridor by Giorgio Vasari
An important case of an early endeavour in designing a collective form at an urban scale is the Corridoio Vasariano, built by Giorgio Vasari in Florence in 1564. It can be understood as a spatial continuum: a built corridor that connects the numerous important buildings, or nodes, within the city, resulting in a collective form. According to Michael Dennis, Vasari’s Corridor is, in an absolute sense, a Renaissance “insert” into the “fundamentally medieval fabric of central Florence”. As Dennis writes in The Uffizi: Museum as Urban Design (1980),
“…Each building has its own identity and internal logic but is also simultaneously a fragment of a larger urban organisation; thus each is both complete and incomplete. And though a given building may be a type, it is always deformed, never a pure type. Neither pure object nor pure texture, it has characteristics of both – an ambiguous building that was, and still is, multifunctional…”
Dennis’s description of Vasari’s Corridor brings out the notion of the spatial fusion of buildings as parts. The Corridor succeeds as an urban insert primarily for two reasons. First, it maintains the existing conditions and acclimatises to the context into which it is placed. Second, it simultaneously functions at several scales, from that of the individual using the Corridor to the larger scale of the fabric through which it passes. Vasari’s Corridor is a sympoietic urban fusion, one that is a culmination of the effect of local conditions.
Stan Allen, in contrast to compositions, presents a completely inverted concept for urban agglomerations. His concept of field configurations reflects a bottom-up phenomenon. In his view, the design must necessarily reflect the “complex and dynamic behaviours of architecture’s users”. Through sympoiesis, the internal interactions of parts become decisive: they become the design drivers, and the overall formation remains fluid, a result of the interactions between the internal parts.
Towards a Sympoietic Architecture
Another important aspect that forms a basis for the sympoietic argument is the relevance of information in systems. While Maturana and Varela explain that information must be irrelevant to self-producing systems, since it is an extrinsically defined quantity, Dempster lays great emphasis on the relevance of information in sympoietic systems. Her explanation is that information potentially carries a message or a meaning for a recipient. Information, therefore, is dependent on context and recipient; Stafford Beer hints that it is also “observer dependent”.
In the architectural domain, this signifies that information, or external data input, holds no relevance in an autopoietic system. The system grows purely on the basis of its encoded logic and part-to-part organisational relations, and remains closed to any external input. In the sympoietic paradigm, however, information gains relevance: it activates the system as a continuous flux guiding its organisation. This relates to reinforcement learning, wherein a system adapts by heuristics to changing conditions, and also produces new ones, albeit with an inherent bias.
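The contrast can be sketched with the most minimal reinforcement-learning loop, an epsilon-greedy bandit; this is a standard textbook construction used here purely as an illustration (the reward values and parameters are invented), showing a system whose internal estimates are continuously shaped by external feedback rather than by a fixed encoded rule.

```python
import random

def bandit_learn(rewards, episodes=2000, eps=0.1, alpha=0.1, seed=1):
    """Minimal reinforcement-learning loop (epsilon-greedy bandit):
    the system adapts its value estimates from a flux of external
    feedback, instead of unfolding a closed internal logic."""
    random.seed(seed)
    q = [0.0] * len(rewards)
    for _ in range(episodes):
        # mostly exploit current knowledge, occasionally explore
        if random.random() < eps:
            a = random.randrange(len(q))
        else:
            a = max(range(len(q)), key=q.__getitem__)
        r = rewards[a]()              # reward arrives from the environment
        q[a] += alpha * (r - q[a])    # adapt the estimate toward feedback
    return q

# two external 'conditions': noisy rewards centred on 0.2 and 0.8
arms = [lambda: random.gauss(0.2, 0.05), lambda: random.gauss(0.8, 0.05)]
q = bandit_learn(arms)
print(q)  # the estimate for the second arm ends higher
```

The bias the text mentions is visible even here: the epsilon and learning-rate parameters, and the choice of reward signal, all predispose what the system can learn.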
The Economic Offer of the Codividual
From an economic lens, the concept of sympoiesis does not exist at the moment. However, with the rise of participatory processes within the economy and the advent of blockchain, it shows immense potential in architecture. Elinor Ostrom’s work on the role of commons in decision-making influences the work of David Rozas, who researches a model of blockchain-based commons governance. He envisages a system which is decentralised, autonomous, distributed and transparent: a more democratic system in which each individual plays their own role. This idea is about bringing a more sympoietic kind of drive to blockchain. Sympoietic systems are based on a model akin to a commons-oriented or blockchain-based economy, which functions like a cat’s cradle with its multiple interdependent stakeholders. And as Jose Sanchez points out, it is the power of the discrete, interdependent system that makes this architecture possible. According to him, it offers a “participatory framework for collective production”.
The fusion of parts leads to the creation of parts such that the sum of the parts becomes greater than the whole. A codividual sympoietic model can potentially help resolve the housing crisis, since it flips the economic model to a bottom-up approach. With tokenisation, autonomous automation, the decentralisation of power and transparency, a blockchain-based codividual model can compete with traditional real estate models, resulting in more equitable and fairer forms of housing. As Lohry and Bodell point out, such models can reduce personal risk and also make livelihoods more economical and “community-oriented”.
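The tokenisation being invoked can be sketched in a few lines. This is a hypothetical, heavily simplified co-ownership record of our own devising (not a real blockchain, smart contract, or any model proposed by Rozas or by Lohry and Bodell); the part name, owners and share counts are invented for illustration.

```python
from collections import defaultdict

class CoOwnershipLedger:
    """Toy sketch of tokenised co-ownership: each part of a building
    is divided into shares held by many stakeholders, and decisions
    follow the distribution of shares rather than a single owner."""
    def __init__(self):
        self.shares = defaultdict(lambda: defaultdict(int))

    def issue(self, part, owner, n):
        self.shares[part][owner] += n

    def transfer(self, part, frm, to, n):
        if self.shares[part][frm] < n:
            raise ValueError("insufficient shares")
        self.shares[part][frm] -= n
        self.shares[part][to] += n

    def approves(self, part, backers):
        """A proposal passes when holders of a majority of shares back it."""
        total = sum(self.shares[part].values())
        backing = sum(self.shares[part][o] for o in backers)
        return backing * 2 > total

ledger = CoOwnershipLedger()
ledger.issue("unit-A", "ana", 60)
ledger.issue("unit-A", "ben", 40)
ledger.transfer("unit-A", "ana", "ben", 30)   # ana 30, ben 70
print(ledger.approves("unit-A", ["ben"]))     # True: ben now holds a majority
```

On an actual chain the ledger state and the transfer rules would be enforced by consensus rather than by a Python class; the sketch only shows how ownership and decision-making become part-wise and distributed.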
The ecological framework of the concept of poiesis, as already outlined, is based on growth from the organisation of elements. In the context of autopoiesis and sympoiesis, “part-to-part” and even “part-to-whole” conditions gain significant relevance. An appreciation of these conditions therefore becomes relevant to understanding such notions. The idea of components, as described by Dempster and Haraway in the purview of sympoiesis, and by Jerome McGann in the autopoietic context, could be extended to architecture in the form of part-thinking.
However, a mereological approach begins with existing entities or “sympoietic interactions” and proceeds further with a description of their clusters, groupings and collectives. Through codividual sympoiesis, the whole gets distributed all over the parts. In this system, the discreteness of parts is never just discrete. It goes beyond the participating entities and the environment. In line with Daniel Koehler’s argument, the autonomy of the part ceases to be defined just as a self-contained object. It goes beyond it and begins to be defined “around a ratio of a reality, a point of view, a filter or a perspective”.
Sympoiesis evolves out of competitive or cooperative interactions of parts. As in ecology, these parts act as symbionts to each other, in diverse kinds of relationalities and with varying degrees of openness to attachments and assemblages with other fusing parts, depending on the number of embedded brains and the potential connectors. Traditionally, architecture is parasitic: when the aesthetic or the overall form drives the architecture, architectural elements act as hosts for other architectural elements to attach to, depending on composition. In sympoiesis, there is no host and no parasite. It inverts the ideology of modernism, beginning not with a composition but evolving one, a composition of “webbed patterns of situated and dynamic dilemmas”, over symbiotic interaction. Furthermore, increasingly complex levels of quasi-individuality of parts come out of this process of codividual sympoiesis. It gives the outlook of a collective while still retaining the identity of the individual. It can simply be called multi-species architecture, or becoming-with architecture.
Talking of transdisciplinary ecologies and architecture, we can foresee string figures tying together human and nonhuman ecologies, architecture, technologies, sustainability, and more. This also gives rise to a notion of ecological fusion of spatial conditions such as daylight and ventilation, in addition to physical fusion of parts. Codividual sympoiesis, thus, even shows potential for a nested codividual situation, in that the parts sympoietically fuse over different spatial functions.
Bringing together sympoiesis and mereology, it makes sense to look for parts which fuse to evolve fused parts; to look for architecture through which architecture is evolved; to look for a codividuality with which another codividuality is evolved. From a mereological point of view, a system in which an external condition overlaps with an internal part in the search for another component, giving rise to a new spatial condition over the fusion of parts, could be understood as codividual sympoiesis. Codividual sympoiesis is therefore about computing a polyphony, not orchestrating a cacophony.
 M. Foucault, Madness and Civilization (New York: Random House US, 1980).
 D. Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham: Duke University Press, 2016), 30–57.
 Ibid, 35.
 H. R. Maturana and F. G. Varela, Autopoiesis And Cognition (Dordrecht, Holland: D. Reidel Pub. Co., 1980).
 H. R. Maturana, F. G. Varela, and R. Uribe, "Autopoiesis: The Organization Of Living Systems, Its Characterization And A Model," Biosystems, 5, 4, (1974), 187–196.
 J. McGann, A New Republic of Letters (Cambridge, Massachusetts: Harvard University Press, 2014).
 A. W. Burks, Von Neumann's Self-Reproducing Automata; Technical Report (Ann Arbor: The University of Michigan, 1969).
 N. Luhmann, Art as a Social System (Stanford: Stanford University Press, 2000), 232.
 B. Dempster, Sympoietic and Autopoietic Systems: A New Distinction for Self-Organizing Systems (Waterloo: School of Planning, University of Waterloo, 1998).
 Ibid, 9.
 M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, Massachusetts: MIT Press, 2017), 131–44.
 Ibid, 12.
 B. Dempster, Sympoietic and Autopoietic Systems: A New Distinction for Self-Organizing Systems (Waterloo: School of Planning, University of Waterloo, 1998).
 D. Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham: Duke University Press, 2016), 33.
 Ibid, 5.
 Ibid, 125.
 Ibid, 58.
 Ibid, 60.
 B. Dempster, Sympoietic and Autopoietic Systems: A New Distinction for Self-Organizing Systems (Waterloo: School of Planning, University of Waterloo, 1998).
 D. Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham: Duke University Press, 2016), 60.
 F. Maki and M. Ohtaka, Investigations in Collective Form (St. Louis: School of Architecture, Washington University, 1964), 3–17.
 M. Dennis, "The Uffizi: Museum As Urban Design," Perspecta, 16 (1980), 62–72.
 Ibid, 63.
 S. Allen, “From Object to Field,” Architectural Design, AD 67, 5-6 (1997), 24–31.
 S. Beer, “Preface,” Autopoiesis: The Organization of the Living, auth. H. R. Maturana and F. Varela (Dordrecht, Holland: D. Reidel Publishing Company, 1980).
 D. Rozas, “When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance” (2019), https://davidrozas.cc/presentations/when-ostrom-meets-blockchain-exploring-potentials-blockchain-commons-governance-1, last accessed 3 May 2019.
 J. Sánchez, “Architecture for the Commons: Participatory Systems in the Age of Platforms,” Architectural Design, 89, 2 (2019), 22–29.
 M. Lohry and B. Bodell, "Blockchain Enabled Co-Housing" (2015), https://medium.com/@MatthewLohry/blockchain-enabled-co-housing-de48e4f2b441, last accessed 3 May 2019.
 D. Koehler, “Mereological Thinking: Figuring Realities within Urban Form,” Architectural Design, 89, 2 (2019), 30–37.
Object-oriented programming in blockchain has been a catalyst for philosophical research on the way blocks and their nesting are perceived. In a deeper investigation of the composition of blocks, as well as the environment that they are able to create, concepts like Jakob von Uexküll’s “Umwelt” and Timothy Morton’s “Hyperobject” can be synthesised into a new term: the “Hyperumwelt”. The Hyperumwelt is an object that is capable of creating its own environment. By upscaling this definition of the Hyperumwelt, this essay describes objects with unique and strong compositional characteristics that act as closed black boxes and are able to create large-scale effects through their distribution. Hyperobjects are able to create their own Umwelt; however, when they are nested and chained in big aggregations, the result is a new and unexpected environment: the Hyperumwelt.
In his book Umwelt und Innenwelt der Tiere (1909), Uexküll introduced the notion of subjective environments. With the term “Umwelt” Uexküll defined a new perspective for the contextualisation of experiences: each individual organism perceives surrounding elements with its senses and reinterprets them into its own Umwelt, producing different results. An Umwelt requires two components: an individual and its abstracted perception of its surroundings. Based on this process and these parameters, notions of parthood and wholeness in spatial environments, and the relations that they produce with interacting elements, become relevant.
Space as a Social Construction
For Bill Hillier and Julienne Hanson these two parameters relate to society and space: “society can only have lawful relations to space if society already possesses its own intrinsic spatial dimension; and likewise space can only be lawfully related to society if it can carry those social dimensions in its very form.” What Hillier and Hanson argue is that the relation between the formation of society and space is created by the interaction between differing social environments. Hillier and Hanson essentially make use of a mereological definition of the environment, in which parts are independent of their whole, the way that society is independent from its space, while at the same time societies contain definitions of space. Space is therefore a deeply social construction.
As Hillier and Hanson outline, our understandings of space are revealed in the relations between “social structure” and “spatial structure”, or how society and space are shaped under the influence of each other. Space is a field of communication. Within a network of continuously exchanged information, space can be altered as it interacts with the people in it. However, this approach can only produce limited results, as it creates environments shaped by only two parameters, humans and space. This is where Hillier and Hanson’s theory falls short, as this way of understanding the environment relies only on additive information produced by interactions. If we were to expand this theory into the kind of autonomous learning mechanism that is mandatory for processing today’s computational complexity, we would end up with a slow, repetitive operation between these two components.
Hyperobjects to Hyperumwelt
Another perspective absent from Hillier and Hanson’s understanding of the environment is how social behaviour is shaped by spatial parameters. Timothy Morton’s object-oriented ontological theory contradicts this anthropocentric understanding of the world. In The Ecological Thought (2010) Morton presents the idea that not only do we produce the environment but we are also a product of it. This means that the creation of things is not solely a human act in which non-human objects cannot partake, but rather an inherent feature of any existing object. For Morton, complexity is not only a component of society and space; it extends to an environment that has objects at its centre and thus cannot be completely understood. He calls these entities “Hyperobjects”.
While Morton uses the term Hyperobject to describe objects, either tangible or intangible, that are “massively distributed in time and space as to transcend spatiotemporal specificity”, the term can be reinterpreted to describe an environment, rather than an object, which is neither understandable nor manageable. This environment – a Hyperumwelt – is the environment constructed by Hyperobjects. A Hyperumwelt is beyond comprehension due to its complexity.
The term Hyperobject is insufficient here because it retains its own wholeness: the components inside a Hyperobject cannot be seen (it acts like a black box of information) but can only be estimated. Morton described the Hyperobject as a whole without edges. This stems from Morton’s point of perception, as he puts himself inside the object. This position makes him unable to see its wholeness, leaving him unable to gauge its impact or grasp control of it. Here a discussion of authorship inside environments also opens: what Morton suggests is that Hyperobjects have their own authority, and that there is nothing that can alter them or specify their impact on the environment.
A Tree in a Forest
Yet there is also no need for Hyperobjects to be clearly understandable. In terms of the Hyperumwelt, Hyperobjects can remain vast and uncomprehended. What is needed now is to consider the implications of distributing nested Hyperobjects, seen as black boxes, inside an environment. An Umwelt is an environment constantly altered by perceived information. This makes the Hyperumwelt a whole with porous edges, allowing the distribution, and the addition or subtraction, of information. Another difference is the external position from which the Hyperumwelt is perceived, meaning that there is no need for the observer to be part of the environment. Since what is important is the distribution of the objects within the Hyperumwelt, a distant point of view is needed in order to detect the patterning of the distributed objects. While it remains difficult to decipher and discretise the components, the patterns that they create can be seen.
While the Hyperobject is a closed whole of parts that cannot be altered, a Hyperumwelt is an open whole of wholes that uses objects as its parts. So, while the Hyperobject gives us no authority over its consequences, the Hyperumwelt bypasses this in order for its wholeness to be controlled. Yet what is important for the Hyperumwelt is not the impact of one object, but the impact of multiple objects within the environment. This synthesis and merging of objects and their relations produces a new reality which may or may not be close to the reality of the single objects. A Hyperobject is looking at a black box – say, a tree – and knowing there is a pattern – such as a forest – and a Hyperumwelt is looking at the tree and knowing the impact that it has on the forest and the impact that the forest creates in the environment.
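The tree-and-forest distinction can be loosely illustrated in code (a minimal sketch; the class, its hidden traits and the impact function are entirely hypothetical constructions, not drawn from Morton): each Hyperobject is a black box whose internal composition stays hidden, while the Hyperumwelt is the observable pattern that the distribution of many such boxes produces.

```python
import random

class Hyperobject:
    """A black box: its internal composition is hidden from the
    observer; only its external impact can be registered."""
    def __init__(self, seed):
        rng = random.Random(seed)
        self._traits = [rng.random() for _ in range(8)]  # never exposed

    def impact(self, position):
        # Only the aggregate effect is observable, never the traits.
        return sum(t * ((position + i) % 3) for i, t in enumerate(self._traits))

def hyperumwelt(objects, positions):
    """The environment produced by distributing many black boxes:
    a pattern readable from outside, irreducible to any one object."""
    return [sum(obj.impact(p) for obj in objects) for p in positions]

forest = hyperumwelt([Hyperobject(seed) for seed in range(10)], range(20))
```

Reading `forest` reveals a pattern without ever opening a single `Hyperobject`: the tree remains a black box, yet its impact on the forest is visible.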
 J. von Uexküll, Umwelt und Innenwelt der Tiere (Berlin: J. Springer, 1909), 13–200.
 T. Morton, Hyperobjects: Philosophy and Ecology After the End of the World (Minneapolis, Minnesota: University of Minnesota Press, 2013).
 J. von Uexküll, Umwelt und Innenwelt der Tiere (Berlin: J. Springer, 1909), 13–200.
 B. Hillier and J. Hanson, The Social Logic of Space (London: Cambridge University Press, 1984), 26.
 T. Morton, The Ecological Thought (Cambridge, Massachusetts: Harvard University Press, 2010).
 Ibid, 110.
 T. Morton, Hyperobjects: Philosophy and Ecology After the End of the World (Minneapolis, Minnesota: University of Minnesota Press, 2013).
 T. Morton, Being Ecological (Penguin Books Limited, 2018).
In mereology, the distinction of “dependent” or “independent” could be used to describe the relationship between parts and wholes. In a mereological description, individuals can be seen as self-determining entities, independently identified by themselves as a whole. The identities of collectives, on the other hand, are determined by their group members, which participate in a whole. Based on parthood theory, then, an individual could be defined as a self-determined “one in a whole”; collectives, in contrast, could be seen as “a part within a whole”. Following this mereological logic, this paper advances the new term “codividuality”, a word combining “collective” and “individuality”. Codividuality preserves the intermediate values of individualism and collectivism: it takes the notion of share-ability from collectivism and merges it with the idea of self-existence inspired by individualism. The characterisation of codividuality starts from individuals that share features and are grouped, merging with other groups to compose new clusters.
“Codividuals” could also be translated as “parts within parts”. Based on this part-to-part relation, codividuals in the sense of composition begin with existing individuals and then collectives of self-identified parts. Parts are discrete, but also participating entities in an evolving self-organising system. Unlike self-determining individuals, parts gain identity by participating, forming a strong correlation between parts while preserving their autonomy. In codividuality, each individualistic entity obtains the potential of state-transforming by sharing its identity with others; as such, all parts are able to translate one another, and are irreducible to their in-between relationship. From an ontological perspective, a part comes into existence not by adding a new object but by sharing features to fuse itself into a new part. A new part does not contribute by increasing an entity’s quantity but through a dynamic overlap transforming over time. Since the involved entities fuse into new collectives, the composite group simultaneously changes its form in correspondence with the shared features; as such, codividuality could be seen as an autonomous fusion.
Metabolism: As One in Whole
According to the definition of individualism, each individual has its own autonomous identity and the connectivity between individuals is loose. In architecture, social connectivity provides insight into the relationship of spatial sequences within cultural patterns. Metabolism, an experimental architectural movement in post-war Japan, emerged with a noticeably individualist approach, advocating individual mobility and liberty. Looking at the configurations and spatial characteristics of Metabolist architecture, it is easy to perceive the “unit” and the “megastructure” as the major architectural elements of the composition, showing the individualistic characterisation of spatial patterns. The megastructure, as an unchangeable large-scale infrastructure, conceptually served to establish a comprehensible community structure. The unit, as a structural boundary, reinforced the identity of individuals in the whole community.
The Nakagin Capsule Tower (1970) by Kisho Kurokawa is a rare built example of Metabolism. It is a residential building consisting of two reinforced concrete towers, with functional equipment integrated into the megastructure to form a system of a core tower serving its ancillary spaces. The functional programmes required for the served spaces extend from the core, where the structure and pipes are integrated. The identical, isolated units contain everything needed to meet basic human needs in daily life, expressing an idea of individualism in architecture aimed at a large number of inhabitants. The independent individual capsules create a maximum amount of private space with little social connectivity to neighbours.
Constructivism: As Parts in Whole
Collectivism could be applied to a society in which individuals tie themselves together into a cohesion which obtains the attributes of dependence, sharing and collective benefit. This aligns with the principles of constructivism, which proposed a collective spatial order to encourage human interaction and generate collective consciousness. In contrast to the Metabolists, constructivist architecture underlined spatial arrangements for public space within compressed spatial functions that enable a collective identification.
The Narkomfin Building (1928–1932) by OSA Group is one of the few realised constructivist projects. The building is a six-storey apartment building located in a long block, designed as a “social condenser”. It consists of multiple social functions that correspond to specific functional and constructive norms for working and living space within the whole community. The main building is a mixed-use compound, with one part for individual space and another designed as collective space. The private and common spaces are linked by an exterior walkway and a communal rooftop garden. There are 54 living units, each containing only a bedroom and bathroom. Each flat could be divided in two: one part containing a playground and kitchen; the other a collective function area consisting of garden, library and gymnasium. The corridors linking the flats are wide and open, appearing as an urban street to encourage inhabitants to stop and communicate with their neighbours.
Compared with the Nakagin Capsule Tower, the concept behind the spatial arrangement of the Narkomfin Building is the collectivisation of all needed programmes. The large-scale collective was proposed as a means to replicate the concept of the village in the city. Practically, this allows the percentage of private space to shrink while stimulating social interaction within the collective living space. The concept of amplifying communal space aligns with the constructivist movement’s aim of reinventing people’s daily life through new socialist experimental buildings, reinforcing the identity of collectives within the whole community.
Codividuality: As Parts in Parts
In architecture, the word “codividuality” originally emerged in the Japanese architectural exhibition House Vision (2019) to refer to collective living in terms of the sharing economy, delivering a social meaning: “creating a new response to shared-living in the age of post-individualism”. Economically speaking, codividuality expresses the notion of share-ability in the sense of shared value and ownership. Moreover, it offers a participatory democracy for spatial use in relation to changing social structures and practices. The architectural applications of codividuality are not merely about combining private space with shared public facilities but reveal a new reality that promotes accessibility and sustainability in multiple dimensions, including spatial use, economy and ecology.
Share House LT Josai (2013) is a collective-living project in Japan, offering an alternative for urban living in the twenty-first-century sharing economy. In response to changing demographic structures and rapidly rising house prices, Naruse Inokuma Architects created an opportunity for unrelated people to share spaces on an ongoing basis, creating an interactive living community in a two-and-a-half-storey house. The 7.2-square-metre individual rooms are three-dimensionally arranged across the two and a half levels. Between the bedrooms are the shared spaces, including a void area and an open-plan living platform and kitchen that extend toward the identical private rooms. The juxtaposition of private and communal spaces creates a new spatial configuration and an innovative living model for the sharing economy. Codividuality retains individuals’ autonomy while encouraging collective interaction. It is not an opposition to individualism nor a replication of collectivism, but a merged concept starting from individualism and juxtaposing it against the notion of collectivism.
Autonomy of Parts
In contemporary philosophy, “Object-Oriented Ontology” (OOO) proposes a non-human way of thinking, unshackling objects from the burden of dominant ideologies. Objects are withdrawn from human perception, thereby retaining the autonomy and irreducibility of substance. This autonomy is based on the independence of the object itself: an individual object is not reliant on any other objects, including humans. Objects exist whether we are aware of them or not. Objects do not passively rely on human cognition to represent them, but self-evidently and equally stand in the world.
OOO enables a transition in architectural meaning from architecture as autonomous objects to interactive relationships between object and field, where indirect relations between autonomous objects are observed. In an ecological sense, the reason behind this shift could be understood as an irreducibility of the architectural relationship within the environment; in other words, an architectural object cannot be withdrawn from its relation to context. As Timothy Morton writes, “all the relations between objects and within them also count as objects”, and as David Ruy states in a recent essay, “the strange, withdrawn interaction between objects sometimes brings forth a new object.” Ruy emphasises the relation between objects as a dynamic composition, interacted with by individuals, that is not a direct translation of nature.
In an object-oriented ontology, architecture is not merely an individual complete object but fused parts. This could be translated into a mereological notion of shifting from wholeness to parts. As a starting point for a design methodology, extracting elements from buildings loosens the more rigid system found in a modernist framework by understanding architectural parts as autonomous and self-contained. Autonomous architectural elements cannot be reduced to the individual parts that make up the whole. This shift opens up an unprecedented territory in architectural discourse. Autonomous architectural parts can now participate in a non-linear system involving not only input or output, beginning or end, cause or result; architecture can be understood as part of a process.
Architecture in the Sharing Economy
The rise of the sharing economy in the past decade has provided alternatives to the traditional service economy, allowing people to share and monetise their private property and shifting thinking around privacy. In this context the following question arises: how could mereological architecture reveal new potentials beyond the inhabitation of buildings by engaging with the sharing economy? The financialisation of the housing market and, simultaneously, the standardisation and lowering of housing quality brought about by market deregulation make this question all the more pressing. Furthermore, the bureaucracy of the planning system limits the architectural design process by slowing development down and restricting innovation. In this context the reconfiguration of housing to emphasise collective space could be an alternative living model, alongside financial solutions such as shared ownership.
Decentralised Autonomous Organisation
The notion of a Decentralised Autonomous Organisation (DAO) seems fitting for furthering this discussion. In economic and technological terms, a DAO is a digital organisation based on blockchain technologies, offering a decentralised economic model. As an alternative to centralised economic structures within a capitalist system, a DAO benefits from blockchain technology as a digital tool for achieving a more transparent, accessible and sustainable economic infrastructure. This involves shifting decision-making away from centralised control and giving the authority to individual agents within the system.
In his Medium article “The Meaning of Decentralization”, Vitalik Buterin describes a decentralised system as a collective of individual entities that operate locally and self-organise, which supports diversity. Distribution enables a whole to be discretised into parts that interact in a dynamic computing system evaluating internal and external connectivity between parts. Through continuous interaction, autonomous discrete entities occasionally form chains of connectivity. In this process the quantities of parts at junctions continuously change. Over time, patterns emerge according to how entities organise both locally and globally. Local patterns operate internally within a collective, while global patterns operate between collectives – externally, in a field of patterns – similar to Stan Allen’s notion of a “field condition”. This creates global complexity while sustaining autonomy through local connectivity.
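This movement from local decisions to global pattern can be caricatured in a few lines of code (an illustrative toy of my own construction, reducing agents to single numeric traits with an arbitrary similarity threshold – not Buterin’s or Allen’s formulation): each agent links only to near neighbours, with no central coordinator, yet collectives emerge globally.

```python
def local_links(agents, threshold=0.2):
    """Each agent decides locally which neighbours to link to;
    no central authority coordinates the connections."""
    links = set()
    for i, a in enumerate(agents):
        for j, b in enumerate(agents):
            if i < j and abs(a - b) < threshold:
                links.add((i, j))
    return links

def clusters(n, links):
    """Global pattern: count the collectives (connected components)
    that emerge from purely local link decisions."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i, j in links:
        parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

agents = [0.05, 0.10, 0.15, 0.60, 0.65, 0.90]
collectives = clusters(len(agents), local_links(agents))  # 3: two groups and one isolate
```

No agent knows the global structure, yet three collectives emerge – local connectivity producing a global field of patterns.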
Codividuality could be seen as a post-individualism, where a diverse self-organising system withdraws power from capitalist authorities. The process of decentralisation characteristic of a DAO is key to codividuality, for it allows repeated patterns to form in a connected network. Architecturally, in codividual space each spatial unit consists of an open-ended programme and a self-contained structure, which means that architectural elements such as walls or slabs exist not for a specific function but serve a non-representational configuration.
Through computing codividual connectivity, autonomous spatial units start to overlap with other units, generating varying states of spatial use and non-linear circulation. What this distribution process offers is an expanded field of spatial iterations, using computation to respond to changes in quantity or type of inhabitants. In this open-ended system, codividual parts provide each spatial participant the capability to overcome the limitation of scalability through autonomous interconnection supported by a distributed database.
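A minimal sketch of such an overlap computation (the rectangular representation of units and the fusion rule are illustrative assumptions of mine, not the actual design method described here): units that overlap fuse into codividual clusters, while isolated units remain autonomous.

```python
def overlaps(a, b):
    """Axis-aligned overlap test between two rectangular spatial
    units, each given as (x0, y0, x1, y1)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def fuse(units):
    """Group units into codividual clusters: any unit overlapping
    an existing cluster is fused into it, merging clusters as needed."""
    groups = []
    for u in units:
        merged = [g for g in groups if any(overlaps(u, v) for v in g)]
        new = [u] + [v for g in merged for v in g]
        groups = [g for g in groups if g not in merged] + [new]
    return groups

layout = fuse([(0, 0, 2, 2), (1, 1, 3, 3), (5, 5, 6, 6)])  # two clusters: one fused pair, one isolate
```

Adding or removing units re-runs the fusion, so the spatial configuration responds to changes in the quantity or type of inhabitants rather than being fixed in advance.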
Unlike conventional planning in a modernist framework, codividual space does not aim for a module system used for the arrangement of programme, navigation or structure, but for a non-figurative three-dimensional spatial sequence. The interconnections between parts and the field enable scalability from the smaller scale of spatial layouts towards large-scale urban formations. This large-scale fusion of codividual space generates a more fragmented, heterogeneous and interconnected spatial order, balancing collective benefit and individual freedom. In this shift towards heterogeneity, codividuality opens a new paradigm of architecture in the age of the sharing economy.
 H. C. Triandis, Individualism And Collectivism (Boulder: Westview Press, 1995).
 D. Koehler, “Mereological Thinking: Figuring Realities within Urban Form,” Architectural Design, 89, 2 (2019), 30–37.
 Z. Lin, Kenzo Tange And The Metabolist Movement (London: Routledge, 2010).
 D. Udovicki-Selb, M. J. Ginzburg, I. F. Milinis. Narkomfin, Moscow 1928-1930 (Tübingen: Wasmuth Verlag, 2016).
 “House Vision” (2019), http://house-vision.jp/, accessed 9 May 2019.
 L. Bryant, The Democracy of Objects, (Open Humanities Press, 2011).
 T. Morton. The Ecological Thought (Cambridge: Harvard University Press, 2010).
 D. Ruy, “Returning to (Strange) Objects”, TARP Architecture Manual: Not Nature. (Brooklyn, New York: Pratt Institute Graduate School of Architecture, 2015).
 V. Buterin, “The Meaning of Decentralization” (2017), https://medium.com/@VitalikButerin/the-meaning-of-decentralization-a0c92b76a274, accessed 9 May 2019.
 S. Allen and G. Valle, Field Conditions Revisited (Long Island City, NY: Stan Allen Architect, 2010).