ISSN 2634-8578


03/08/2022
Climatic Energy and Ecological Autonomy
There is no way back to the climate that we once knew: “our old world, the one that we have inhabited for the last 12,000 years, has ended”.[1] Accepting this end presents an opportunity to reframe considerations of risk, indeterminacy, and danger as questions of restructuring and rewilding; shifting the discussion of global warming from a matter of a scarcity of resources to an abundance of energy that can kick-start landscape futures.
To engage this future, it is critical to set up some terms for how design will engage with the multitude of potential climates before us. Rather than working preventatively, designing solutions predicated on the simplification of the environment by models, we advocate for an experimentalism concerned with the proliferation of complexity and autonomy in the context of radical change. Earth systems are moving hundreds to thousands of times faster than they did when humans first documented them. This acceleration is distributed across such vast scales of space and time that its consequences are ubiquitous but also unthinkable, placing present-day Earth beyond the reach of existing cognitive tools. For example, twenty- to fifty-year decarbonisation plans are expected to solve problems that will unfold over million-year timescales.[2] These efforts are well-intentioned but poorly framed; in the relentless pursuit of a future that looks the same as the past, there is a failure to acknowledge that it is easier to destroy a system than it is to create one, a failure to acknowledge the fool’s errand of stasis that is embodied in preservation, and, most importantly, a failure to recognise that climate change is not a problem to be solved.[3] Climate “solutions” are left conceptually bankrupt when they flatten complex contexts into one-dimensional problem sets that are doomed by unknowable variability. From succession to extinction, from ocean biochemistry to ice migration: our understanding of environmental norms has expired.[4]
The expiration of our environmental understanding is underlined by the state of climate adaptation today – filled with moving targets, brittle infrastructures, increasing rates of failure, and overly complicated management regimes. These symptoms illustrate how difficult it is for contemporary adaptation to escape a cognitive dissonance rooted in the way knowledge about climate change is produced: the information has eclipsed its own ideological boundaries. This eclipse represents a crisis of knowledge, and it must therefore give rise to a new climatic form. Changing how we think about and how we see climatic energy asks us to make contact with the underlying texture and character of the nascent unruliness we find ourselves in, and the wilds that it can produce.
Earth’s new wilds will look very different from the wilderness of the past. Classical wilderness is characterised by purity: it is unsettled, uncultivated, and untouched. But given the massive reshaping of ecological patterns and processes across the Earth, wilderness has become less useful, conceptually. Even in protected wilderness areas, “it has become a challenge to sustain ecological patterns and processes without increasingly frequent and intensive management interventions, including control of invading species, management of endangered populations, and pollution remediation”.[5] Subsequently, recent work has begun to focus less on the pursuit of historical nature and more on promoting ecological autonomy.[6, 7, 8] Wildness, on the other hand, is undomesticated rather than untouched. The difference between undomesticated and untouched means that design priorities change from maintaining a precious and pure environment to creating plural conditions of autonomy and distributed control that promote both human and non-human form.
Working with wildness requires new ways of imagining and engaging futurity that operate beyond concepts of classical earth systems and the conventional modelling procedures that re-enact them, though conventional climate thinking, especially with the aid of computation, has achieved a great deal: “everything we know about the world’s climate – past, present, future – we know through models”.[9] Models take weather, which is experiential and ephemeral, abstract it into data over long periods of time, and assemble this data into patterns. Over time, these patterns have become increasingly dimensional. This way of understanding climate has advanced extremely quickly over the past few decades, enough that we can get incredibly high-resolution pictures (like the one below, which illustrates how water temperature swirls around the earth). Climate models use grids to organise their high-resolution, layered data, assigning each cell rules about how to pass information to its neighbours. But the ever-expanding storage capacity of the grid cells, and the ways they are set up to handle rules and parameters, create a vicious cycle by enabling exponential growth toward ever greater degrees of accuracy. Models get bigger and bigger, heavier and heavier, with more and more data; operating under the assumption that collecting enough information will eventually lead to the establishment of a perfect “control” earth,[10] and to an earth that is under perfect control. But this clearly isn’t the case: for these models, more data means more uncertainty about the future. This is the central issue with the traditional, bottom-up climate knowledge that continues to pursue precision. It produces ever more perfect descriptions of the past while casting the future as more and more obscene and unthinkable. In other words, in a nonlinear world, looking through the lens of these bottom-up models refracts the future into an aberration.[11]
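As a concrete illustration of the gridded logic just described, the toy model below steps a single two-dimensional field (say, surface temperature) by letting every cell exchange information with its four neighbours. It is a minimal sketch under stated assumptions – the field, the diffusivity value and the update rule are illustrative, not any climate model’s actual scheme.

```python
import numpy as np

def step(field, diffusivity=0.1):
    # Each cell moves toward the mean of its four neighbours:
    # the simplest possible rule for passing information on a grid.
    neighbours = (
        np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
        np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)
    )
    return field + diffusivity * (neighbours - 4 * field)

field = np.random.rand(64, 64)  # an arbitrary initial state
for _ in range(100):            # real models couple many such layered grids
    field = step(field)
```

Everything a real model adds – more layers, finer grids, more parameters – multiplies the data this loop consumes and emits, which is precisely the accumulation described above.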

The technological structure of models binds us to a bizarre present. It is a state which forecloses the future in the same way that Narcissus found himself bound to his own reflection. When he saw his reflection in a river, he “[mistook] a mere shadow for a real body” and found himself transfixed by a “fleeting image”.[12] The climatic transfixion is the hypnotism of the immediate, the hypothetically knowable, which devalues real life in favour of an imaginary, gridded one. We are always just a few simulations from perfect understanding and an ideal solution. But this perfection is a form of deskilling which simulates not only ideas but thinking itself. The illusion of the ideal hypothetical solution, just out of reach, allows the technical image to operate not only as subject but as project;[13] a project of accuracy. And the project of making decisions about accuracy in models then displaces the imperative of making decisions about the environments that the models aim to describe by suspending us in the inertia of a present that is accumulating more data than it can handle.
It is important to take note of this accumulation because too much information starts to take on a life of its own. It becomes a burden beyond knowledge,[14] which makes evident that “without forgetting it is quite impossible to live at all”.[15] But rather than forget accumulated data and work with the materiality of the present, we produce metanarratives via statistics. These metanarratives are a false consciousness. Issues with resolution, boundary conditions, parameterisation and the representation of physical processes are technical barriers to accuracy, but the deeper problem facing accuracy is the inadequacy of old data to predict new dynamics. For example, the means and extremes of evapotranspiration, precipitation and river discharge have undergone such extreme variation due to anthropogenic climate change that fundamental concepts about the behaviour of earth systems for fields like water resource management are undergoing radical transformation.[16] Changes like this illustrate how dependence upon the windows of variability that statistics produce is no longer viable. This directly conflicts with the central conceit of models: that the metanarrative can be explanatory and predictive. In his recently published book, Justin Joque challenges the completeness of the explanatory qualities of statistics by underlining the conflicts between its mathematical and metaphysical assumptions.[17] He describes how statistics (and its accelerated form, machine learning) are better at describing imaginary worlds than understanding the real one. Statistical knowledge produces a way of living on top of reality rather than in it.

The shells of modelled environments miss the materiality, the complexity and the energy of an ecosystem breaking apart and restructuring itself. The phase of a system that follows a large shift is known as a “back loop” in resilience ecology,[18, 19] and is an original and unstable period of invention that is highly contingent upon the materials left strewn about in the ruins of old norms. For ecological systems in transition, plant form, geological structure, biochemistry and raw materiality matter. These are landscape-scale issues that are not described in the abstractions of parts per million. High-level knowledge of climate change, while potentially relevant for some scales of decision-making, does not capture the differentiated impacts that are critical for structuring discussions around the specific ways that environments will grow and change, degrade or complexify through time.
This is where wilds can play a role in structuring design experimentation. Wildness is unquestionably of reality, or a product of the physical world inhabited by corporeal form. Wilds as in situ experiments become model forms, which have a long epistemological history as a tool for complex and contingent knowledge. Physicists (and, here, conventional climate modellers) look to universal laws to codify, explain and predict events, but because medical and biological scientists, for example, do not have the luxury of stable universalism, they often use experiments as loose vehicles for projection. By “repeatedly returning to, manipulating, observing, interpreting, and reinterpreting certain subjects—such as flies, mice, worms, or microbes—or, as they are known in biology, ‘model systems’”, experimenters can acquire a reliable body of knowledge grounded in existing space and time.[20] This is how we position the project of wildness, which can be found from wastewater swamps, to robotically maintained coral reefs, to reclaimed mines and up-tempo forests. Experimental wilds, rather than precisely calculated infrastructures, have the potential to do more than fail at adapting to climate: they can serve “not only as points of reference and illustrations of general principles or values but also as sites of continued investigation and reinterpretation”.[21]
There is a tension between a humility of human smallness and a lunacy in which we imagine ourselves engineering dramatic and effective climate fixes using politics and abstract principles. In both of these cases, climate is framed as being about control: control of narrative, control of environment. This control imaginary produces its own terms of engagement. Because its connections to causality, accuracy, utility, certainty and reality are empty promises, modelling loses its role as a scientific project and instead becomes a historical, political and aesthetic one. When the model is assumed to take on the role of explaining how climate works, climate itself becomes effectively useless. So rather than thickening the layer of virtualisation, a focus on wild experiments represents a turn to land and to embodied changes occurring in real time. To do this will require an embrace of aspects of the environment that have been marginalised, such as expanded autonomy, distributed intelligence, a confrontation of failure, and pluralities of control. This is not a back-to-the-earth strategy, but a focus on engagement, interaction and modification; a purposeful approach to curating climatic conditions that embraces the complexity of entanglements that form the ether of existence.
References
[1] M. Davis, “Living on the Ice Shelf”, Guernica, https://www.guernicamag.com/living_on_the_ice_shelf_humani/ (accessed May 01, 2022).
[2] V. Masson-Delmotte, P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.), IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, Cambridge, UK and New York, USA, 2021), doi:10.1017/9781009157896.
[3] R. Holmes, “The problem with solutions”, Places Journal (2020).
[4] V. Masson-Delmotte, P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.), IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, Cambridge, UK and New York, USA, 2021), doi:10.1017/9781009157896.
[5] B. Cantrell, L.J. Martin and E.C. Ellis, “Designing autonomy: Opportunities for new wildness in the Anthropocene”, Trends in Ecology & Evolution 32, 3 (2017), 156-166.
[6] Ibid.
[7] R.T. Corlett, “Restoration, reintroduction, and rewilding in a changing world”, Trends in Ecology & Evolution 31 (2016), 453–462.
[8] J. Svenning et al., “Science for a wilder Anthropocene: Synthesis and future directions for trophic rewilding research”, Proceedings of the National Academy of Sciences 113 (2016), 898–906.
[9] P.N. Edwards, A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming (MIT Press, Cambridge, 2010).
[10] P.N. Edwards, “Control earth”, Places Journal (2016).
[11] J. Baudrillard, Cool Memories V: 2000-2004 (Polity, Oxford, 2006).
[12] Ovid, Metamorphoses III (Indiana University Press, Bloomington, 1955), 85.
[13] B. Han, Psychopolitics: Neoliberalism and New Technologies of Power (Verso Books, New York, 2017).
[14] B. Frohmann, Deflating Information (University of Toronto Press, Toronto, 2016).
[15] F. Nietzsche, On the Advantage and Disadvantage of History for Life (1874).
[16] P.C.D. Milly et al., “Stationarity is dead: whither water management?”, Science 319, 5863 (2008), 573-574.
[17] J. Joque, Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism (Verso Books, New York, 2022).
[18] L. Gunderson and C.S. Holling, 2001; C.S. Holling, “From complex regions to complex worlds”, Ecology and Society 9, 1 (2004), 11.
[19] S. Wakefield, Anthropocene Back Loop (Open Humanities Press, 2020).
[20] A.N.H. Creager et al. (eds.), Science Without Laws: Model Systems, Cases, Exemplary Narratives (Duke University Press, Durham, 2007).
[21] Ibid.

29/04/2022
What’s the Hook? Social Architecture?
Isa Genzken’s work can be seen as a synthesis of the “social” and the “object” – a visual-sculptural art that reflects on the relationship between social happenings and the scale of architectural space. She was also one of the early explorers of the use of computation for art, collaborating with scientists in the generation of algorithmic forms in the 70s. But what is the social object? What can it mean for architecture? Just as Alessandro Bava, in his “Computational Tendencies”,[1] challenged the field to look at the rhythm of architecture and the sensibility of computation, Roberto Bottazzi’s “Digital Architecture Beyond Computers”[2] gave us a signpost: the urgency is no longer about how architectural space can be digitised, but about the ways in which digital space can be architecturised. Perhaps this is a good moment for us to learn from art, in how it engages itself with the many manifestations of science while maintaining its disciplinary structural integrity.
Within the discipline of architecture, there is an increasing amount of research that emphasises social parameters, from the use of big data in algorithmic social sciences to agent-based parametric semiology in form-finding.[3][4] The ever-mounting proposals that promise to apply neural networks and other algorithms to [insert promising architectural / urban problem here] are evidence of a pressure for social change, but also of the urge to make full use of the readily available technologies at hand. An algorithm is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer”.[5] It is a finite, well-defined sequence whose performance is measured by the length of its code: how fast and how well can we describe the most. In 1975, Gregory Chaitin’s formulation of Algorithmic Information Theory (AIT) revealed that the algorithmic form is no longer what can be visualised on the front-end, but “the relationship between computation and information of computably generated objects, such as strings or any other data structure”.[6] In this respect, what stands at the convergence of computable form and the science of space is the algorithmic social object.
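The AIT intuition can be made tangible with a small sketch: Kolmogorov complexity – the length of the shortest program that produces a given object – is uncomputable, but compressed size gives a crude upper bound on it. The snippet below is an illustration of that idea only, not Chaitin’s formalism.

```python
import os
import zlib

# Two byte-strings of equal length, very different algorithmic content.
regular = b"ab" * 500       # 1,000 bytes of repeating pattern
noise = os.urandom(1000)    # 1,000 bytes of randomness

# Compressed size bounds descriptive length from above: the regular
# string has a short description, the noise essentially does not.
print(len(zlib.compress(regular)))  # small
print(len(zlib.compress(noise)))    # close to 1,000 or larger
```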

Social science is the broad umbrella that encompasses disciplines from history and economics to politics and geography; within it, sociology is the subset that studies the science of society.[7] The word ‘sociology’ is a hybrid, coined by the French philosopher Isidore Auguste Comte in 1830 “from Latin socius ‘associate’ + Greek-derived suffix –logie”; more specifically, “social” as an adjective dates from the 1400s, meaning “devoted to or relating to home life”, and from the 1560s as “living with others”.[8] The term’s domestic connotation soon accelerated from the realm of the private to the public: “Social Contract” from translations of Rousseau in 1762; “Social Darwinism” and “Social Engineering” introduced by Fisher and Marken in 1877 and 1894; “Social Network” and “Social Media” by the late 20th century, from Ted Nelson. Blooming during the high time of the Enlightenment and the rise of the positivist worldview, sociology naturally claimed itself a science, one of scientific methods and empirical investigations. The connotation of –logie has been brilliantly attested by Jonathan Culler:[9]
“Traditionally, Western philosophy has distinguished ‘reality’ from ‘appearance’, things themselves from representations of them, and thought from signs that express it. Signs or representations, in this view, are but a way to get at reality, truth, or ideas, and they should be as transparent as possible; they should not get in the way, should not affect or infect the thought or truth they represent.”
To claim a social study as a science puts forward the question of the relationship between the language that is used to empirically describe and analyse the subject and the subject matter itself. If the subject should be objectively and rationally portrayed, then the language of mathematics would seem perfect for the job. If we are able to describe the interaction between two or more people using mathematics as a language, then we may begin to write down a partial differential equation and map its variables.[10] Algorithms that are inductively trained on evidence-based data not only seem to capture the present state of such an interaction, but also seem able to give critical information describing the future evolution of the system. This raises the question of computability: what is the limit to social computation? If there is none, then we might as well be a simulation ourselves; so the logic goes that there must be one. To leave an algorithm running without questioning the limits of social computation is like having Borel’s monkey hitting keys at random on a typewriter, or applying [insert promising algorithm here] arbitrarily to [insert ear-catching grand challenge here].
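To make the thought concrete, one toy form such a mathematical description might take – the behavioural states $x_1$, $x_2$, the autonomous dynamics $f_1$, $f_2$ and the coupling constant $k$ are illustrative assumptions, not a model drawn from the cited literature – is a pair of coupled equations in which each person’s state is continually pulled toward the other’s:

$$\frac{dx_1}{dt} = f_1(x_1) + k\,(x_2 - x_1), \qquad \frac{dx_2}{dt} = f_2(x_2) + k\,(x_1 - x_2)$$

Extend the states over populations and space and the system becomes a partial differential equation; whether such equations remain tractable is exactly the question of computability raised above.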

What’s the hook?
A hook “is a musical idea, often a short riff, passage, or phrase, that is used in popular music to make a song appealing and to catch the ear of the listener”.[11] It is also an apt emblem of Web 2.0, which takes user attention as a scarce resource and a valuable commodity – an attention economy. Music is an artform that takes time to comprehend; as it plays through time, it accrues value in your attention.

This is one of the most famous hooks of the late 2000s – Empire State of Mind came around the same time as the Web 2.0 boom, just after New York had recovered from the dotcom bubble. The song was like an acoustic montage of the “Eight million stories, out there in the naked”, revealing an underlying urge for social change that was concealed by the boom; just as we see Jay-Z in Times Square, on stage under the “big lights that inspired” him, rapping: “City is a pity, half of y’all won’t make it”.[12] It was an epoch of R&B, of the rhythms of cities, of the urban sphere, of the high-tech low life. Just the first 15 seconds of Jay-Z’s beat are enough to teleport a listener to Manhattan, with every bit of romanticism that comes with it. The Rhythms and the Blues constructed a virtual space of narrative and story-telling; this spatial quality taps into the affective experiences of the listener through the ear, revealing the urban condition through its lyrical expression. It is no accident that the 2000s were also the time when the artist/sculptor Isa Genzken began exploring the potential of audio in its visual-sculptural embodiment.
“The ear is uncanny. Uncanny is what it is; double is what it can become; large [or] small is what it can make or let happen (as in laisser-faire, since the ear is the most [tender] and most open organ, the one that, as Freud reminds us, the infant cannot close); large or small as well the manner in which one may offer or lend an ear.” — Jacques Derrida.[13]

Genzken placed an image of a woman’s ear on a facade, personifying the building as a listener, hearing what the city has to say. At the same time, “The body is objectified and made into a machine that processes external information”.[14] The ear also symbolises the power of voice to fill a place with a space: an acoustic space. As much as a place is a location, geographically tagged, which affects our identity and sense of belonging, a space can be virtual as much as it can be physical. Such a space of social interaction is now being visualised on a facade, and at the same time, it is being fragmented: “To look at a room or a landscape, I must move my eyes around from one part to another. When I hear, however, I gather sound simultaneously from all directions at once: I am at the centre of my auditory world, which envelopes me. … You can immerse yourself in hearing, in sound. There is no way to immerse yourself similarly in sight”.[15] This is perhaps a prelude to augmented virtual reality.

As much as Genzken is interested in the “exploration of contradictions of urban life and its inherent potential for social change”, Rem Koolhaas shared a similar interest in his belief that it is not possible to live in this age if you don’t have a sense of many contradictory voices.[16][17] What the two have in common is their continental European roots and a love for the Big Apple – Genzken titled her 1996 collage book I Love New York, Crazy City, and with it paid homage to her beloved city. Delirious New York was written at a time when New York was on the verge of bankruptcy, yet Koolhaas saw it as a Rosetta Stone, and analysed the city as if there had been a plan, with everything starting from the grid. It was Koolhaas’ conviction that the rigour of the grid enabled imagination, despite its authoritative nature: unlike Europe, which has many manifestos with no manifestation, New York was a city with a lot of manifestation without a manifesto.
Koolhaas’ book was written with a sense of “critical paranoia” – a surrealist approach that blends pre-existing conditions and illusions to map the many blocks of Manhattan into a literary montage. The cover of the first edition, designed by Madelon Vriesendorp, perfectly captures the surrealism of the city’s socio-economy at the time: the Art Deco Chrysler Building in bed with the Empire State Building. Both structures were vying for distinction in the “Race into the Sky” of the 1920s, fuelled by American optimism, a building boom and speculative financing.[18] Just as the French writer Lautréamont wrote, “Beautiful as the accidental encounter, on a dissecting table, of a sewing machine and an umbrella”, surrealism is a paradigmatic shift towards “a new type of surprising imagery replete with disguised sexual symbolism”.[19] The architectural surrealism manifested in this delirious city is the chance encounter of capital, disguised as national symbolism – an architectural hook.
Data Architecture

Genzken’s sense of scale echoes Koolhaas’ 1995 piece on “Bigness”. Her proposal for the Amsterdam City Gate frames and celebrates empty space, and found manifestation in Koolhaas’ enormous China Central Television (CCTV) headquarters in Beijing – a building as a city, an edifice of endless air-conditioning and information circularity wrapped in a structured window skin, hugging itself in the air through the downsampled geometry of a Möbius loop. As Koolhaas pronounced, within a world that tends to the mega, “its subtext is f*** context”. One is strongly reminded of the big data approach to form-finding, perhaps also of the discrete spatial quality of Cellular Automata (CA), where the resolution of interconnections and information consensus fades into oblivion, turning data processing into an intelligent, ever-mounting aggregation. In the big data–infused era, the scale boundary between architecture and urban design becomes obscured. This highlights our contemporary understanding of complex systems science, in which the building is not an individual object but part of a complex fabric of socioeconomic exchanges.
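For readers unfamiliar with the mechanism, the sketch below shows the kind of cellular automaton alluded to here – Conway’s Game of Life, the canonical example – where a global aggregation emerges purely from local, discrete rules. It is offered only as an illustration of that discreteness, not as a reconstruction of any project discussed in this essay.

```python
import numpy as np

def life_step(grid):
    # Count each cell's eight neighbours, with wrap-around edges.
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)
            if (i, j) != (0, 0))
    # A dead cell with exactly 3 neighbours is born; a live cell
    # with 2 or 3 neighbours survives; everything else dies.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.random.randint(0, 2, (32, 32))  # a random starting aggregation
for _ in range(10):
    grid = life_step(grid)
```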

As Carpo captured in his Second Digital Turn, we are no longer living in Shannon’s age, where compression and bandwidth are of the highest value: “As data storage, computational processing power, and retrieval costs diminish, many traditional technologies of data-compression are becoming obsolete … blunt information retrieval is increasingly, albeit often subliminally, replacing causality-driven, teleological historiography, and demoting all modern and traditional tools of story-building and story-telling. This major anthropological upheaval challenges our ancestral dependence on shared master-narratives of our cultures and histories”.[20] Although compression remains central to how machines learn – from autoencoders to convolutional neural networks – trends in edge AI and federated learning are displacing the value of bandwidth with promises of data privacy: we no longer surrender data to a central cloud; instead, everything is kept on our local devices, with only the learnt models synchronising.
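A minimal sketch of that federated pattern, under illustrative assumptions – a toy linear model, synthetic per-device data, and plain federated averaging rather than any particular framework’s API:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    # One gradient step of linear regression on data that never
    # leaves the device.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Five "devices", each holding its own private dataset.
devices = [(np.random.rand(20, 3), np.random.rand(20)) for _ in range(5)]
global_w = np.zeros(3)

for _ in range(50):
    # Devices train privately; only the learnt weights are shared
    # and averaged centrally (federated averaging).
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)
```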
Such a displacement of belief, from centralised provision to distributed ownership, is reminiscent of the big data-driven objectivist approach to spatial design, which gradually displaces our faith in anything non-discursive, such as norms, cultures, and even religion. John Lagerwey defines religion in its broadest sense as the structuring of values.[21] What values are we circulating in a socio-economy of search engines and pay-per-clicks? Within trends of data distribution, are all modes of centrally-provisioned regulation and incentivisation an invasion of privacy? Genzken’s work on urbanity is like a mirror held up high for us to reflect on our urban beliefs.

Genzken began architecturing a series of “columns” around the same time as her publication of I Love New York, Crazy City. Evocative of skyscrapers and skylines that are out of scale, she named each column after one of her friends and decorated them with individual designs – sometimes newspapers, artefacts and ready-made items that reflect the happenings of the time. Walking amongst them reminds the audience of New York’s avenues and its urban strata, but at 1:500. Decorated with DIY store supplies, these uniform yet individuated structures seem to document a history of the future of mass customisation. Mass customisation is the use of “flexible computer-aided manufacturing systems to produce custom output. Such systems combine the low unit costs of mass production processes with the flexibility of individual customization”.[22] As Carpo argued, mass customisation technologies could make economies of scale and their marginal costs irrelevant and, subsequently, the division of labour unnecessary, as the chain of production would be greatly distributed.[23] The potential is to democratise the privilege of customised design, but how can we ensure that such technologies benefit social goals, and do not fall into the same traps of the attention economy and its consumerism?
Refracted and reflected in Genzken’s “Social Facades” – taped with ready-made nationalistic palettes allusive of the semi-transparent curtain walls of corporate skyscrapers – one sees nothing but a distorted image of the mirrored self. As the observer begins to raise their phone to take a picture of Genzken’s work, the self suddenly becomes the anomaly in this warped virtual space of heterotopia.
“Utopia is a place where everything is good; dystopia is a place where everything is bad; heterotopia is where things are different – that is, a collection whose members have few or no intelligible connections with one another.” — Walter Russell Mead [24]
Genzken’s heterotopia delineates how the “other” is differentiated via the images that have been consumed – a post-Fordist subjectivity that fulfils itself through accelerated information consumption.

The Algorithmic Form
Genzken’s engagement with and interest in architecture can be traced back to the 1970s, when she was in the middle of her dissertation at the academy.[25] She was interested in ellipses and hyperboloids, which she prefers to call “Hyperbolo”.[26] The 70s were a time when a computer was a machine that filled a whole room, and to which a normal person had no access. Genzken got in touch with a physicist and computer scientist, Ralph Krotz, who, in 1976, helped calculate the ellipse with a computer and plotted the draft of a drawing with a drum plotter that prints on continuous paper.[27] Artists saw the meaning of such algorithmic form differently than scientists. For Krotz, ellipses are conic sections. Colloquially speaking, an egg comes pretty close to an ellipsoid: it is composed of a hemisphere and half an ellipsoid. If we generalise the concept of the conic section, hyperbolas also belong to it: if one rotates a hyperbola around an axis, a hyperboloid is formed. Here, the algorithmic form is rationalised to its computational production, indifferent to its semantics – that is, until it is physically produced and touches the ground of the cultural institution of a museum.
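In the standard textbook form – stated here for orientation, not taken from Krotz’s own calculations – rotating the hyperbola $x^2/a^2 - z^2/c^2 = 1$ about the $z$-axis yields the hyperboloid of revolution (of one sheet):

$$\frac{x^2}{a^2} + \frac{y^2}{a^2} - \frac{z^2}{c^2} = 1$$

It is a doubly ruled surface, sweepable by straight lines – the same property that makes hyperboloid cooling towers buildable from straight members – which hints at why such extreme, elongated profiles remain computationally and materially tractable.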
The 10-metre-long ellipse drawing was delivered full size, in one piece, as a template to a carpenter, who then converted it into his own template for craftsmanship. Thus, 50 years ago, Genzken’s work explored a two-level outsourcing structure symbolic of today’s digital architectural production. The output of such exploration is a visual-sculptural object of an algorithmic form at such an elongated scale and extreme proportion that it undermines not only human agency in its conception, but also the sensorial perception of 2D-3D space.[28] When contemplating Genzken’s Hyperbolo, one is often reminded of the radical play with vanishing points in Hans Holbein’s “The Ambassadors”, where the anamorphic skull can only be viewed at an oblique angle, a metaphor for the way one can begin to appreciate the transience of life only with an acute change of perspective.

When situated in a different context, next to Genzken’s aircraft windows (“Windows”), the Hyperbolo finds association with other streamlined objects, like missiles. Perhaps the question of life and death, paralleling scientific advancement, is a latent meaning and surrealist touch within Genzken’s work, revealing how the invention of the apparatus is, at the same time, the invention of its causal accidents. As the French cultural theorist and urbanist Paul Virilio puts it: the invention of the car is simultaneously the invention of the car crash.[29] We may be able to compute the car as a streamlined object, but we are not even close to being able to compute the car as a socio-cultural technology.

Social Architecture?
Perhaps the problem is not so much whether the “social” is computable, but rather that we are trying to objectively rationalise something that is intrinsically social. This is not to say that scientific approaches to social architecture are in vain; rather the opposite: science and its language should act as socioeconomic drivers of changes in architectural production. What is architecture? It can be described as what stands at the intersection of art and science – the art of the chief, ‘arkhi-’, and the science of craft, ‘tekton’ – but the chance encounter of the two gives birth to more than their bare sum. If architecture is neither art nor science but an emergence of its own faculty, it should be able to argue for itself academically as a discipline, with a language crafted as its own, and to debate itself on its own ground – beyond the commercial realm that touches base with ground constraints and the reality of physical manifestation, and in its own unique way of researching and speculating: not all “heads in the clouds”, but in fact revealing pre-existing socioeconomic conditions.
It is only through understanding ourselves as a discipline that we can begin to really grasp ways of contributing to social change, beyond endlessly feeding machines with data and hoping they will either validate or invalidate our ready-made and ear-catching hypotheses. As Carpo beautifully put it:
“Reasoning works just fine in plenty of cases. Computational simulation and optimization (today often enacted via even more sophisticated devices, like cellular automata or agent-based systems) are powerful, effective, and perfectly functional tools. Predicated as they are on the inner workings and logic of today’s computation, which they exploit in full, they allow us to expand the ambit of the physical stuff we make in many new and exciting ways. But while computers do not need theories, we do. We should not try to imitate the iterative methods of the computational tools we use because we can never hope to replicate their speed. Hence the strategy I advocated in this book: each to its trade; let’s keep for us what we do best.” [30]

References
1 A. Bava, “Computational Tendencies”, e-flux Architecture, January 2020, https://www.e-flux.com/architecture/intelligence/310405/computational-tendencies/.
2 R. Bottazzi, Digital Architecture Beyond Computers: Fragments of a Cultural History of Computational Design (London: Bloomsbury Visual Arts, 2020).
3 ASSRU, Algorithmic Social Sciences, http://www.assru.org/index.html (accessed December 18, 2021).
4 P. Schumacher, Design of Information Rich Environments, 2012, https://www.patrikschumacher.com/Texts/Design%20of%20Information%20Rich%20Environments.html.
5 Oxford, “The Home of Language Data”, Oxford Languages, https://languages.oup.com/ (accessed December 18, 2021).
6 Google, “Algorithmic Information Theory – Google Arts & Culture”, Google, https://artsandculture.google.com/entity/algorithmic-information-theory/m085cq_?hl=en (accessed December 18, 2021).
7 Britannica, “Sociology”, Encyclopædia Britannica, https://www.britannica.com/topic/sociology (accessed December 18, 2021).
8 Etymonline, Online Etymology Dictionary, https://www.etymonline.com/ (accessed December 18, 2021).
9 J. Culler, Literary Theory: A Very Short Introduction (Oxford: Oxford University Press, 1997).
10 K. Friston, “The free-energy principle: a unified brain theory?”, Nature Reviews Neuroscience, 11 (2) (2010), 127-138.
11 J. Covach, “Form in Rock Music: A Primer”, in D. Stein (ed.), Engaging Music: Essays in Music Analysis (New York: Oxford University Press, 2005), 71.
12 Jay-Z, Empire State of Mind (2009), Roc Nation / Atlantic.
13 J. Derrida, The Ear of the Other: Otobiography, Transference, Translation; Texts and Discussions with Jacques Derrida (Lincoln: University of Nebraska Press, 1985).
15 Kunsthalle Wien, “#FemaleFool Booklet: I’m Isa Genzken, the Only Female Fool” (2014), https://kunsthallewien.at/101/wp-content/uploads/2020/01/booklet_i-m-isa-genzken-the-only-female-fool.pdf?x90478.
16 W. Ong, Orality and Literacy: The Technologizing of the Word (London: Methuen, 1982).
17 R. Koolhaas, New York délire: Un manifeste rétroactif pour Manhattan (Paris: Chêne, 1978).
18 Kunsthalle Wien, “#FemaleFool Booklet: I’m Isa Genzken, the Only Female Fool” (2014), https://kunsthallewien.at/101/wp-content/uploads/2020/01/booklet_i-m-isa-genzken-the-only-female-fool.pdf?x90478.
19 J. Rasenberger, High Steel: The Daring Men Who Built the World’s Greatest Skyline, 1881 to the Present (HarperCollins, 2009).
20 Tate, “‘L’Enigme d’Isidore Ducasse’, Man Ray, 1920, remade 1972”, Tate, https://www.tate.org.uk/art/artworks/man-ray-lenigme-disidore-ducasse-t07957 (accessed December 18, 2021).
21 M. Carpo, “Big Data and the End of History”, International Journal for Digital Art History, 3 (2018), 21.
22 J. Lagerwey, Paradigm Shifts in Early and Modern Chinese Religion: A History (Leiden and Boston: Brill, 2018).
23 Google, “Mass Customization – Google Arts & Culture”, Google, https://artsandculture.google.com/entity/mass-customization/m01k6c4?hl=en (accessed December 18, 2021).
24 M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, MA: MIT Press, 2017).
25 W. R. Mead, “Trains, Planes, and Automobiles: The End of the Postmodern Moment”, World Policy Journal, 12 (4) (Winter 1995–1996), 13–31.
26 U. Loock, “Ellipsoide und Hyperboloide”, in Isa Genzken: Sesam, öffne dich!, exhibition cat. (Whitechapel Gallery, London, and Museum Ludwig, Cologne: Kasper, 2009).
27 S. Baier, “Out of sight”, in Isa Genzken – Works from 1973-1983, Kunstmuseum
28 R. Krotz, H. G. Bock, “Isa Genzken”, in exhibition cat. Documenta 7, Kassel, 1982, vol. 1, 330-331; vol. 2, 128-129.
29 A. Farquharson, “What Architecture Isn’t”, in A. Farquharson, D. Diederichsen and S. Breitwieser, Isa Genzken (London, 2006), 33.
30 P. Virilio, Speed and Politics: An Essay on Dromology (New York: Columbia University, 1986).

25/10/2020
Part relationships play an important role in architecture: whether as an aspect of a Classical order, a harmonious joining of building components, a representation of space, a partition of spaces, or as a body that separates us and identifies us as individuals. From the very outset, every form of architecture begins with an idea of how parts come together to become a whole, and an understanding of how this whole relates to other parts. Architecture first composes a space as a part of a partitioning process, well before defining a purpose and before using any geometry.
The sheer performance of today’s computational power makes it possible to form a world without a whole, without any third party or third object. Ubiquitous computing fosters peer-to-peer or, better, part-to-part exchange. It is not surprising, then, that today’s sharing represents an unfamiliar kind of partiality. From distributive manufacturing to the Internet of Things, new concepts of sharing promise systematic shifts, from mass-customisation to mass-individualisation: the participations enabled by computation are foundational. It is no longer the performance or mode of an algorithm that drives change but its participatory capacities. From counting links, to likes, to seats, to rooms: tools for sharing have become omnipresent in our everyday lives. Thus, that which is common is no longer negotiated but computed. New codes – not laws or ideologies – are transforming our cities at a rapid pace, but what kind of parthood is being described? How does one describe something only through its parts today? To what extent do the automated processes of sharing differ from the partitioning of physical space? How can we add, intervene and design such parts through architecture?
The relationship between parts and their whole is called mereology. In this issue of Prospectives, mereology’s theories and the specifics of part-relations are explored. The included articles and interviews discuss the differences between parts and the whole, the sharing of machines and their aesthetics, the differences between the distributive and the collective, their ethical commitments, and the possibilities of building mereologies.
Just as mereology describes objects from their parts, this issue is partial. It is not a holistic proposal, but a collection of positions. Between philosophy, computation, ecology and architecture, the texts are reminders that mereologies have always been part of architecture. Mereology is broadly a domain that deals with compositional possibilities: relationships between parts. Such an umbrella – analogous to morphology, typology or topology – is still missing in architecture. Design strategies that depart from part-to-part or peer-to-peer relations are uncommon in architecture, partly because there is (almost) no literature that explores these topics for architectural design. This issue hopes to make the extra-disciplinary knowledge of mereology accessible to architects and designers, but also wishes to identify links between distributive approaches in computation, cultural thought and built space.
The contributions gathered here were informed by research and discussions in the Bartlett Prospectives (B-Pro) at The Bartlett School of Architecture, UCL London from 2016 to 2019, culminating in an Open Seminar on mereologies which took place on 24 April 2019 as part of the Prospectives Lecture Series. The contributions are intended as a vehicle to inject foundational topics such as mereology into architectural design discourse.
The Contributions
This compilation starts with Giorgio Lando’s text “Mereology and Structure”. Lando introduces what mereology is for philosophers, why philosophers discuss mereological theses, and why they disagree with one another about them. His text focuses in particular on the role of structure in mereology, outlining that, from a formal point of view, part relations are freed from structure. He argues that independence from structure might be the identifying link between mereology and architecture. The second article, “From Partitioning to Partaking”, is a plea for re-thinking the city. Daniel Koehler’s essay points to the differences between virtual and real parts. Koehler observes a new spatial practice of virtual representations that render previous models of urban governance obsolete. He argues that the hyper-dimensional spaces of a big data-driven economy demand a shift from a partitioning practice of governance to more distributed forms of urban design. In “Matter versus Parts: The Immaterialist Basis of Architectural Part-Thinking”, Jordi Vivaldi Piera highlights the revival of matter in parallel to the discrete turn in contemporary discourses on experimental architecture. The essay gravitates around the notion of part-thinking in association with the notion of form. Fluctuating between the continuous and the discrete, the text sets out requirements for radical part-thinking in architecture. As a computational sociologist, David Rozas illustrates the potential of decentralised technologies for democratic processes at the scale of neighbourhood communities. After an introduction to models of distributed computation, “Affordances of Decentralised Technologies for Commons-based Governance of Shared Technical Infrastructure” draws analogies to Elinor Ostrom’s principles of commons governance and how those can be computationally translated, turning community governance into fully decentralised autonomous organisations.
Departing from the Corbusian notion of a ‘machine for living’, Sheghaf Abo Saleh defines a machine for thinking. In “When Architecture Thinks! Architectural Compositions as a Mode of Thinking in the Digital Age”, Abo Saleh states that the tectonics of a machine that thinks are brutal and rough. As a computational dialogue, she shows how roughness can enable posthumanism which, in her case, turns “tempered” parts into a well-tempered environment. Ziming He’s entry point for “The Ultimate Parts” is the notion of form as the relations between parts and wholes. He’s essay sorts architectural history through a mereological analysis, proposing a new model of part-to-part without wholes. Shivang Bansal’s “Towards a Sympoietic Architecture: Codividual Sympoiesis as an Architectural Model” investigates the potential of sympoiesis. By extending Donna Haraway’s argument of “tentacular thinking” into architecture, the text shifts the focus from object-oriented thinking to parts. Bansal argues for the limits of autopoiesis as a system and conceptualises spatial expressions of sympoiesis as a necessity for an adaptive and networked existence through “continued complex interactions” among parts.
Merging aspects of the ‘collective’ and the ‘individual’, in “Codividual Architecture within Decentralised Autonomous System” Hao Chen Huang proposes a new spatial characteristic that she coins the “codividual”. Through an architectural analysis of individual and shared building precedents, Huang identifies aspects of buildings that merge shared and private features into physical form. Anthony Alviraz’s paper “Computation Within Codividual Architecture” investigates the history and outlook of computational models in architecture. From discrete to distributed computation, Alviraz speculates on the implications of physical computation, where physics interactions overcome the limits of automata thinking. In “Synthesizing Hyperumwelten”, Anna Galika transposes the eco-philosophical concept of a Hyperobject into a “Hyperumwelt”. While the Hyperobject is a closed whole that cannot be altered, a Hyperumwelt is an open whole that uses objects as its parts. The multiple of a Hyperumwelt offers a shift from one object’s design towards the impact of multiple objects within an environment.
Challenging the notion of discreteness and parts, Peter Eisenman asks in the interview “Big Data and the End of Architecture Being Distant from Power” for a definition of the cultural role of the mereological project. Pointing to close readings of postmodern architecture that were accelerated by the digital project, Eisenman highlights that the demand for a close reading is distanced from the mainstream of power. The discussion asks: ultimately, what can an architecture of mereology critique? The works of Herman Hertzberger are an immense resource on part-thinking. In the interview “Friendly Architecture: In the Footsteps of Structuralism”, Herman Hertzberger explains his principle of accommodation. When building parts turn into accommodating devices, buildings turn into open systems for staging ambiguity.
The issue concludes with a transcript from the round table discussion at the Mereologies Open Seminar at The Bartlett School of Architecture on 24 April 2019.
Acknowledgments
The contributions evolved within the framework of Bartlett Prospectives (B-Pro) at The Bartlett School of Architecture, UCL. I want to thank Frédéric Migayrou for his vision, commitment and long years of building up a research program, not only by architecture but through computation. I would like to thank Roberto Bottazzi for the years of co-organising the Prospectives Lecture Series, where plenty of the discussions that form the backbone of this issue took place. Thanks to Mario Carpo for raising the right question at the right time for so many people within the program, thanks to Andrew Porter for enabling so many events, to Gilles Retsin, for without the discrete there are no parts, Mollie Claypool for the editing and development of Prospectives journal, and Vera Buehlmann, Luciana Parisi, Alisa Andrasek, Keller Easterling, Matthew Fuller, John Frazer, Philippe Morel, Ludger Hovestadt, Emmanuelle Chiappone-Piriou, Jose Sanchez, Casey Rehm, Tyson Hosmer, and Jordi Vivaldi Piera for discussions and insights.
I want to thank Rasa Navasaityte, my partner in Research Cluster 17 at B-Pro, for driving the design research. Thank you for the research contributed by the researchers and tutors: Christoph Zimmel, Ziming He, Anqi Su, Sheghaf Abo Saleh, and to all participants, specifically to highlight: Genmao Li, Zixuan Wang, Chen Chen, Qiming Li, Anna Galika, Silu Meng, Ruohan Xu, Junyi Bai, Qiuru Pu, Anthony Alviraz, Shivang Bansal, Hao-Chen Huang, Dongxin Mei, Peiwen Zhan, Mengshi Fu, Ren Wang, Leyla El Sayed Hussein, Zhaoyue Zhang, Yao Chen, and Guangyan Zhu.
The issue includes articles that evolved from thesis reports conducted in the following clusters: Ziming He from Research Cluster 3, tutored by Tyson Hosmer, David Reeves, Octavian Gheorghiu, and Jordi Vivaldi in architecture theory; Sheghaf Abo Saleh, Anthony Alviraz, Shivang Bansal, Anna Galika and Hao Chen Huang from Research Cluster 17, tutored by Daniel Koehler and Rasa Navasaityte. Unless indicated otherwise, the featured images and graphics of this issue are by Daniel Koehler, 2020.

“One must turn the task of thinking into a mode of education in how to think.”[1]
These words from the philosopher Martin Heidegger point towards new modes of thinking. As architects, we can recall Mario Carpo’s remark about the huge amounts of data that are available to everyone nowadays: most of it is underused.[2] As this essay will argue, this new condition of Big Data, and the digital tools used to comprehend and utilise it, can trigger an entirely new way of thinking about architecture. It both opens doors for testing and offers an opportunity to look back into history and re-evaluate certain moments in new ways. Take, as an example, Brutalism, which emerged as a post-war solution in the 1950s. It was a new mode of thinking about architecture, influenced by Le Corbusier’s Unité d’Habitation de Grandeur Conforme, Marseilles (1948–54), the Industrial Revolution and the age of the mechanical machine. Brutalism can be read as the representation, for its time, of the building as a machine. Luciana Parisi has expanded on this idea, writing that Brutalism can be considered the start of thinking about architecture as a digital machine, having removed any notion of emotion from the architectural product, leaving a rough mass of materials and inhabitable structures.[3] In Parisi’s sense, brutal architecture can then be read as a discrete system of autonomous architectural parts brought together by a set of rules: symmetry, asymmetry, scale, proportion, harmony, etc. These rules, materials and structures act autonomously, using collective behaviours to produce data. The data can then be translated into concrete compositional elements which form a building, a city or a whole territory. The adjacencies between each discrete compositional element create the relations between those parts.

The Building Thinking Machine
The building as a machine departs from Le Corbusier’s claim for a functional architecture.[4] Today, the use of machine learning and artificial intelligence means that machines are no longer used only for making. They are thinking machines.[5] This allows a new translation of Le Corbusier’s understanding of function, asking the questions: what if architecture acts as a mode of thinking? How would a building as a thinking machine perform?
The generation following Le Corbusier progressed the building machine. Reyner Banham linked the building machine to comfort and the environment,[6] seeing the building as a kit of tools that provide comfort. In other studies, Banham proposed the building as a package that is totally enclosed and isolated from the external environment, referring to this as “the environmental bubble”. He proposed that surrounding the building with one thick layer that protected the internal space was the best solution for providing a well-tempered environment. Yet Banham presents a clear separation between interior and exterior spaces which no longer matches the complexity of interior-exterior relationships at both urban and architectural scales.
Mereological Reading of Architectural Precedents
Different types of systems that provide a well-tempered environment inside a building cast the difference between inside and outside as the difference between a tempered and a non-tempered environment. Mereology, or the study of part-relations,[7] can be used as a methodology to read a building in terms of its compositional aspects.
One historical example is the Rasoulian House (1904), which was designed to provide a state of comfort for its users throughout the year. A basic architectural element known as the wind catcher tower, or Malqaf, provided the building with a breeze. As Sarinaz Suleiman described, the Malqaf is a composition of architectural elements that work together to create air flow. These elements – walls, doors, rooms, the basement and the courtyard – are organised in a specific order, with proportions and an orientation that create specific relationships between the inside and the outside.[8]
The Malqaf is the first point at which air flow enters the building. The air then travels down a shaft, the first interior space with which the wind interacts. It continues through a window-like opening into a room, the second interior space, and is then moved through an opening in the room’s floor to a cellar space under the building. This third interior space is the coolest in the building. The cellar is connected to the courtyard through an opening that facilitates air circulation and absorbs wind. For this to happen, two kinds of relationship need to exist: an exterior relation formed by the geometry of two elements – e.g. the height of the Malqaf and the width of the courtyard, which together create a large difference in air pressure – and an internal relation controlled by the openings between the interior spaces, and between interior and exterior spaces as well. Ventilation is not only a void space, but another level of interiority inside the building.

Another example of a complex ventilation system is the data centre building.[9] Data centres usually produce vast thermal exhaust, which requires constant air movement and, in turn, large depths to ceilings and floors that may be as big as the building itself. Servers are positioned in the room at a certain distance from each other; this distance is related to the temperature and the speed of air circulation. Higher temperatures inside the room are used to decrease air pressure and create a pressure difference that enables air to circulate naturally in the room. The path that the air travels gives it the time it needs to cool down naturally.
Computational Ventilation
Two thousand years ago, Vitruvius described wind, saying that “wind is a flowing wave of air with an excess of irregular movements. It is produced when heat collides with moisture, and the shock of the crash expels its force in a gust of air.”[10] Vitruvius’ definition can be deconstructed into two parts, the first of which deals with the dominant wind direction and its relation to the outer envelope of the building. This concept was emphasised by Vitruvius’ example of the octagonal marble Tower of the Winds (2nd–1st century BC). The second part relates to the process of creating wind flow in nature. Vitruvius explains that air circulation occurs when two different air pressures encounter each other. The difference in air pressure is always a result of changes in temperature and moisture. High temperatures heat the air, causing low density and consequently low-pressure areas, while lower temperatures create high-pressure areas. This is the logic that has been followed in all passive ventilation systems throughout history. These systems tend to create two points with a large difference in pressure and connect them with a path through the space that needs to be ventilated. This path then threads through the building, accelerating air movement from the high-pressure area to the low-pressure area and creating air flow inside the building.
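Vitruvius’ observation corresponds to what building physics now calls the stack effect. As a textbook relation – offered here for orientation, not drawn from the sources cited in this essay – the pressure difference that drives flow through a connected column of air of height $h$, with interior and exterior temperatures $T_i$ and $T_o$ (in kelvin) and outdoor air density $\rho_o$, is approximately

$$\Delta P \approx \rho_o \, g \, h \, \frac{T_i - T_o}{T_i}$$

The taller the shaft and the greater the temperature contrast between sunlit and shaded spaces, the stronger the pressure difference, which is why the height of the Malqaf matters as much as its openings.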

A traditional building from the Middle East can serve as a case study for applying this thermodynamic logic to create natural air circulation. In the Rasoulian House, the side exposed to the sun heats up and, consequently, air pressure there decreases. The building’s geometry also creates shadowed areas inside and outside, which are much colder and hold air at higher pressure. Air circulates from the high-pressure to the low-pressure areas: it moves from the cooler courtyard up to the space located above it. This movement draws air from inside the building to fill the void the high-pressure air leaves behind in the courtyard as it rises. Through the opening at the top of the shaft, outside air in turn enters the building to replace the air that has left. This replacement creates the circulation inside the building. The creation of wind thus depends on the design of the inner space and its relation to the outer space through openings: by closing and opening apertures, wind can be created or stopped; by changing which openings are used, the flow path can be redirected and the wind speed increased or decreased. This follows a logic of discrete, combinatorial air flow.
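A minimal sketch of this combinatorial logic (the encoding is hypothetical): the spaces of the house become nodes, each opening becomes a switch, and air flows only if an open path connects the high-pressure point to the low-pressure point.

```python
# Openings between spaces, each of which can be open or closed.
openings = {
    ("courtyard", "cellar"): True,
    ("cellar", "room"): True,
    ("room", "shaft"): True,
    ("shaft", "malqaf"): True,
}

def flow_path(source: str, target: str) -> bool:
    """Depth-first search across open openings only."""
    seen, frontier = {source}, [source]
    while frontier:
        space = frontier.pop()
        if space == target:
            return True
        for (a, b), is_open in openings.items():
            if not is_open:
                continue
            for nxt in ((b,) if a == space else (a,) if b == space else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

print(flow_path("courtyard", "malqaf"))   # True: the breeze circulates
openings[("cellar", "room")] = False      # close a single opening ...
print(flow_path("courtyard", "malqaf"))   # False: the switch turns it off
```

Closing one opening breaks the whole circuit, which is precisely the discrete, switch-like behaviour the text attributes to the building.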

Computational Ventilation on the Urban and Architectural Scale
The building can be seen as a machine for creating an environmental condition through compositional thinking. This way of thinking turns the building, in the case of the Malqaf, into a switch that can turn the air flow on and off. Here, the creation of wind depends entirely on a series of elements that are well-organised and ordered. From this combinatorial thinking, wind can be read as a form of pre-digital computation, with the inside-outside sequences as the cause of the air flow.
The order of inside-outside also plays an important role in disrupting air flow. A single element extracted from a building can serve as an example: a corridor that at the same time plays a crucial role in creating wind. The way its walls are arranged produces a contrast between inside and outside, and the design and arrangement of its openings turn the corridor into a path for air. Taking this element as a discrete part and rearranging its parts under the same local rules established by the ventilation logic, another version of the element emerges. Following the same logic yields different versions of different elements. Each version of each element retains its discreteness and can be upscaled; with this upscaling strategy, more complex interiors emerge.
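As a toy encoding of this rearrangement (the rule and the element are my own illustration, not from the text), one can enumerate all versions of a four-sided corridor element that satisfy a single local ventilation rule:

```python
# Local rule: a version is valid only if two openings face each other,
# so air can pass straight through the element (cross-ventilation).

from itertools import product

SIDES = ("north", "east", "south", "west")
OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east"}

def satisfies_rule(version: dict) -> bool:
    """Exactly two openings, and they must lie on opposite sides."""
    open_sides = [s for s in SIDES if version[s] == "opening"]
    return len(open_sides) == 2 and OPPOSITE[open_sides[0]] == open_sides[1]

versions = [
    dict(zip(SIDES, combo))
    for combo in product(("wall", "opening"), repeat=4)
    if satisfies_rule(dict(zip(SIDES, combo)))
]

# Two discrete versions emerge from the same local rule; each can be
# taken as a part in its own right and upscaled into larger interiors.
for v in versions:
    print([s for s in SIDES if v[s] == "opening"])
```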

By integrating an environmental aspect within the design process, a new type of building that embraces another wind geometry can be created. This provides an opportunity to design highly dense architectural forms that still secure the qualities of the internal space. By nesting interiors, one can create different low- and high-pressure areas across inside-outside sequences.

This allows a rethinking of the inside-outside arrangement of the city according to the positive or negative sequences it creates. For example, for more similar interiors, less contrast in air pressure needs to be produced; for more variation between the interiors, the contrast in air pressure needs to increase, and more air will flow. An air circulation concept can thus be used as a means to arrange both interior and exterior spaces in the building and in the city.

Achieving Banham’s Campfire
At the architectural scale, the interior-exterior relation can also be managed by the building façade. The façade tends to be used to separate indoor from outdoor space, and a tempered from a non-tempered environment, in order to achieve comfort. However, a new understanding of wind circulation can provide a well-tempered environment regardless of the façade. In other words, the façade can here be seen as the set of tools or elements that provide comfort and facilitate air circulation inside the building.
A façade needs to meet specific criteria in order to generate a difference in air pressure, just like the inside-outside arrangement at the city scale. Three design parameters can support this: the orientation of the elevation in relation to the sun; the number of layers needed to create more or less tempered areas; and the degree of translucency of the façade, which blocks or admits sunlight and in turn helps reach the preferred temperature. The façade is no longer the envelope of the building; it is the set of layers responsible for providing comfort inside the building.
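A minimal sketch of these three parameters (the class, weighting and numbers are entirely hypothetical) shows how they might combine into a crude proxy for solar gain, and so for the pressure contrast between two elevations:

```python
import math
from dataclasses import dataclass

@dataclass
class FacadeLayer:
    orientation_deg: float  # angle between façade normal and sun, 0 = facing it
    layers: int             # number of tempering layers behind the elevation
    translucency: float     # 0.0 opaque .. 1.0 fully translucent

    def solar_gain(self) -> float:
        """Toy proxy: gain falls with the angle to the sun and with each
        additional layer, and rises with translucency."""
        facing = max(0.0, math.cos(math.radians(self.orientation_deg)))
        return facing * self.translucency / (1 + self.layers)

# A sun-facing, translucent single layer heats up (low-pressure side);
# a layered, opaque elevation stays cool (high-pressure side).
warm = FacadeLayer(orientation_deg=0, layers=1, translucency=0.9)
cool = FacadeLayer(orientation_deg=90, layers=3, translucency=0.2)
print(warm.solar_gain() > cool.solar_gain())  # True: a pressure contrast
```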

Indeed, thinking about architecture through architecture’s interiors can expose a low-tech computation that starts from thermodynamic discreteness. This enables an understanding of spatial sequence that can support different levels of space in a building, and the notion of layers of buildings-in-buildings. If this concept is upscaled to the city, it offers an opportunity to study the kinds of patterns that mereology can create using environmental thinking. A building, or even a city, could then become an example of the campfire that Banham described many years ago.[11]


[1] M. Heidegger, The End of Philosophy, trans. Joan Stambaugh (Cambridge University Press, 2003).
[2] M. Carpo, The Second Digital Turn: Design Beyond Intelligence, (Cambridge, Massachusetts: MIT Press, 2017).
[3] L. Parisi, “Reprogramming Decisionism,” e-flux, 85 (2017).
[4] Le Corbusier, “Eyes That Do Not See” in Towards a New Architecture, (London: The Architectural Press, 1927), 107.
[5] M. Carpo, “Excessive Resolution: Artificial Intelligence and Machine Learning in Architectural Design,” Architectural Record (2018), https://www.architecturalrecord.com/articles/13465-excessive-resolution-artificial-intelligence-and-machine-learning-in-architectural-design, last accessed 3 May 2019.
[6] R. Banham, “Machines à habiter,” The Architecture of the Well-tempered Environment (Chicago: University of Chicago Press, 1969).
[7] A. Varzi, “Mereology Then and Now”, Journal of Logic and Logical Philosophy, 24 (2015), 409-427.
[8] S. Suleiman, “Direct comfort ventilation: Wisdom of the past and technology of the future (wind-catcher),” Sustainable Cities and Society 5, 1 (2012), 8-15.
[9] M. de Jong, “Air Circulation in Data Centres: rethinking your design,” Data Center Knowledge (2014), http://www.datacenterknowledge.com/archives/2014/11/27/air-circulation-in-data-centers-rethinking-your-design, last accessed 5 May 2019.
[10] M. P. Vitruvius, “First Principles and The Layout of Cities,” Ten Books on Architecture, ed. Ingrid D. Rowland (Cambridge: Cambridge University Press, 1999), 21-32.
[11] R. Banham, “The kit of parts: heat and light,” The Architecture of the Well-tempered Environment (Chicago: University of Chicago Press, 1969).

Parts, chunks, stacks and aggregates are the bits of computational architecture today. Why do mereologies – or buildings designed from part-to-whole – matter? All too classical, the roughness of parts seems nostalgic for a project of the digital that aims to dissolve building parts towards a virtual whole. Yet if parts shrink down to computable particles and matter, into a hyper-resolution of a close-to-infinite number of building parts, architecture would dissolve its boundaries and its capacity to frame social encounters. Within fluidity, and without the capacity to separate, architecture would no longer be an instrument of control. Ultimately, freed from matter, the virtual would transcend the real, and form, finally, would be dead. Therein lies the prospect of a fluid, virtual whole.
The Claustrophobia of a City that Transcends its Architecture
In the acceleration from data to Big Data, cities have become more and more virtual. Massive databases have liquefied urban form. Virtual communication today plays freely across the material boundaries of our cities. In its most rudimentary form, virtuality resides in the digital transactions of numbers, interests and rents. Until a few years ago, financial investments in architectural form could be equated according to size and audience, e.g. as owner-occupied flats, as privately rented houses or as leaseholds.[1] Today capital flows scatter freely across the city at the scale of the single luxury apartment. Beyond a certain threshold of computational access, data becomes big. By computing aggregated phone signal patterns or geotagged posts, virtual cities can emerge from the traces of individuals. These hyperlocal patterns are more representative of a city than its physical twin. Until recently, architecture staged the urban through shared physical forms: the sidewalk, lane or boulevard. Adjacent to cars, walkable for pedestrians or together as citizens, each form of being urban included an ideology of a commons and grounded, with it, particular parts of encountering.

In contrast, a hyper-local urban transcends lanes and sidewalks. Detached from the architecture of the city, with no belonging left, urban speculation has withdrawn into the private sphere. Today, urban value is estimated by counting private belongings only, with claustrophobic consequences. An apartment that is the object of speculative investment displaces residents. The housing shortage in big cities today is not so much a problem of a lack of housing as of vacant space, accessible not to residents but to the interests held in the hyper-urban.[2] The profit from rent and the use of space itself is marginal compared to the profit that embodied urban speculation adds to the property. The possibility of mapping every single home as data not only adds interest to a home, like a pension, but literally turns a home into a pension.[3] However, this is not for its residents but for those with access to resources. Currently, computing Big Data expands and optimises stakeholders’ portfolios by identifying undervalued building assets.[4] However, the notion of ‘undervalued’ is not an accurate representation of these assets.
Hyper-localities increase real estate value through the ways inhabitants thrive in a neighbourhood: through their encounters with one another and with their surrounding architecture. The residents themselves thus unknowingly produce extra value. The undervaluing of an asset is the product of its residents and, like housework, is unpaid labour. In the exchange of capital, additional revenue from a property is usually paid out as a return to the shareholders who invested in its value. Putting big data-driven real estate into that equation would mean paying revenues to residents. If properties create surplus value from the data generated by their residents, then property without its residents has less worth and is indeed over-, not under-, valued.

The city creates public revenue through vehicles such as governing the width of a street’s section or the height of a building, and architecture’s role was to provide a stage on which that revenue could be created. For example, the Seagram Building (Mies van der Rohe and Philip Johnson, 1958) created a “public” plaza by setting back its envelope in exchange for a little extra height. By limiting form, architecture could create space not for one voice only, but for many. Today, however, the city’s new parameters, hidden in the fluidity of digital traces, can no longer be governed by the boundaries of architecture. Writing thirty years ago, as the personal computer was becoming widely available, Gilles Deleuze forecast that “Man is not anymore man enclosed”.[5] At the time, written as a “Postscript on the Societies of Control”, the fluid modulation of space seemed a desirable proposition. By liquefying enclosures, the framework of the disciplinary societies of Foucault’s writings would disappear. In modern industrial societies, Deleuze writes, enclosures were moulds for casting distinct environments, and in these vessels individuals became the masses of mass society.[6] Inside a factory, for example, individuals were cast as workers; inside schools, as students. Man without a cast and without an enclosure seemed freed from class and struggle. The freedom of the individual was interlinked with their transcendence of physical enclosures.

During the last forty years, framed by the relation between the single individual and the interior, architecture rightly aimed to dissolve the institutional forms of enclosure that represented social exclusion at their exterior. Yet in this ambition, alternative forms for the plural condition of being part of a city were not developed. Reading Deleuze further, a state without enclosures does not put an end to history either. The enclosures of control dissolve only to be replaced. Capitalism shifts to another mode of production: where industrial exchange bought raw materials and sold finished products, it now buys the finished products and profits from assembling those parts. The enclosure is exchanged for codes that mark access to information. Individuals are not moulded into masses but considered as individuals: accessed as data, divided into proper parts for markets, “counted by a computer that tracks each person’s position enabling universal modulation.”[7] Thirty years on, Deleuze’s postscript has become the screenplay for today’s reality.
Hyper-parts: Spatial Practices of Representations
A house is no longer just a neutral space, an enclosing interior where value is created, realised and shared. A home is the product of social labour; it is itself the object of production and, consequently, of the creation of surplus value. By shifting from enclosure to asset, the big data-driven economy has also replaced the project behind modernism: humanism. Architecture today is post-human. As Rosi Braidotti writes, “what constitutes capital value today is the informational power of living matter itself”.[8] The human being as a whole is displaced from the centre of architecture; only parts of it, such as its “immanent capacities to form surplus-value”, belong to the larger aggregation of architecture. Beyond the human, the hyper-city transcends the humane. A virtual city is freed from its institutions and constituent forms of governance. Economists such as Thomas Piketty describe in painstaking detail how data-driven financial flows undermine common processes of governance, whether urban, regional or national, in both speed and scale. Their analyses show that property transactions wrapped in virtual value-creation bonds are opaque to taxation. As these transactions transcend regulatory forms of governance, inequalities increase on a global scale. Comparing today’s conditions to the extreme wealth accumulation at the end of the nineteenth century, Piketty identifies similar neo-proprietarian conditions and sees the economy shifting into a new state he coins “hypercapitalism”.[9] From Timothy Morton’s “hyperobjects” to hypercapitalism, hyper replaces the Kantian notion of transcendence. It expresses not the absorption of objects into humanism, but their withdrawal from it. In contrast to transcendence, which subordinates things to man’s will, the hyper accentuates the despair of the partial worlds of parts – in Morton’s case within a given object, in Piketty’s within a constructed ecology.
With the emergence of a fully automated architecture, objects orient towards themselves, and non-human programs begin to refuse the organs of the human body. Just as the proportions of a data centre are no longer walkable, the human eye can no longer look out of a plus-energy window, because the window tempers the house, but not its user. These moments are hyper-parts: objects that no longer transcend into the virtual but despair in physical space. More and more, with increasing computational performance and following the acronym O2O (online to offline),[10] virtual value machines articulate physical space. Hyper-parts make spatial demands. A prominent example is Katerra, the unicorn start-up promising to take over building construction through full automation. In its first year of running factories, Katerra advertised that it would build 125,000 mid-rise units in the United States alone; if this occurred, Katerra would take around 30% of the mid-rise construction market in the company’s local area. Yet its building platform consists of only twelve apartment types. Katerra may see this physical homogeneity as an enormous advantage, as it increases the sustainability of its projects. The choice also facilitates financial speculation: the repetition of similar flats reduces the number of factors in the valuation of apartments and allows quicker monetary exchange, freed from many variables. Sustainability here refers not to any materiality but to the predictability of investments. Variability is still desired, but oriented towards finance, not inhabitants. Beyond the financialisation of the home, digital value machines create their own realities purely through the practice of virtual operations.

Here one encounters a new type of spatial production: the spatial practice of representations. At the beginning of what was referred to as “late capitalism”, the sociologist and philosopher Henri Lefebvre proposed three spatialities that described modes of exchange under capitalism.[11] The first mode, spatial practice, referred to a premodern condition which, by the use of analogies, interlinked objects without any form of representation. The second, representations of space, linked directly to production: the organic schemes of modernism. The third, representational spaces, expressed the conscious trade in representations: the politics of postmodernism and its interest in virtual ideas above the pure value of production. Though not limited to three only, Lefebvre’s intention was to describe capitalism as “an indefinite multitude of spaces, each one piled upon, or perhaps contained within, the next”.[12] Lefebvre differentiated the stages in terms of their spatial abstraction: incrementally, virtual practices moved from real-to-real to virtual-to-real to virtual-to-virtual. Today, however, decoupled from the real, a virtual economy computes physically within spatial practices of representations. Closing the loop, the real-virtual-real – the new hyper-parts – does not subordinate the physical to a virtual representation; instead, the virtual representation itself acts in physical space.
This reverses the intention of modernism, which oriented itself towards an organic architecture by representing the organic relationships of nature in geometric thought. The organicism of today’s hypercomputation projects geometric axioms at organic resolution. What was once a representation, a geometry distant from human activity, now controls the preservation of financial predictability.
The Inequalities Between the Parts of the Virtual and the Parts of the Real
Beyond the human body, this new spatial practice of virtual parts transcends the digital project that was limited to a sensorial interaction with space. That earlier understanding of the digital project reduced human activity to organic reflexes only, depriving architecture of the possibility of higher forms of reflection, thought and criticism. Often argued through links to phenomenology and Gestalt theory, this simplification of architectural form to sensual perception has little to do with phenomenology itself. Edmund Husserl, arguably the first phenomenologist, begins his work by considering the perception of objects not as an end, but as a way to examine the modes of human thinking. In the Logical Investigations, Husserl shows that thought can build a relation to an object only after having classified it, and therefore partitioned it. By observing an object before considering its meaning, one classifies it, which means identifying it as a whole. Closer observation recursively partitions the object into further parts, which in turn can be classified as wholes of their own.[13] Husserl places parts before both thought and meaning.

Derived from aesthetic observations, Husserl’s mereology formed the basis of his ethics and therefore culminated in societal conceptions. In his later work, Husserl’s analysis offered an early critique of the modern sciences.[14] For Husserl, in their effort to grasp the world objectively, the sciences had lost their role of enquiring into the meaning of life. In a double tragedy, the sciences also alienated human beings from the world. Husserl thus urged the sciences to recall that their origins are grounded in the human condition; for him, humanism was ultimately trapped in distancing itself further and further from reality.
One hundred years later, Husserl’s projections resonate in “speculative realism”. In what Levi Bryant coined a “strange mereology”,[15] objects, their belongings and their inclusions are increasingly strange to us. The term “strange” stages the surprise that one is left with speculative access only. However, ten years on, speculation is no longer distant. That which transcends does not lurk only in the physical realm. Hyper-parts figure in ordinary scales today, namely housing, and thereby transcend human(e) occupation.
Virtual and physical space are compositionally comparable. At first glance they consist of the same parts, yet they do not. If physical elements belong to a whole, then they are also part of that to which their whole belongs. In less abstract terms: if a room is part of an apartment, the room is also part of the building to which the apartment belongs. Materially bound part-relationships are always transitive, hierarchically nested within each other. In virtual space, and in the mathematical models with which computers are structured today, elements can instead be included within several independent entities. A room can be part of an apartment, but it can also be part of a rental contract for an embassy. The room is then also part of the house in the country where the house stands; but as part of an embassy, the room is at the same time part of a geographically different country, on an entirely different continent from the building that houses the embassy. Thus, for example, Julian Assange, rather than boarding a plane, only needed to enter a door on a street in London to land in Ecuador. With just a little set theory, in the virtual space of law, one can override the theory of relativity with ease.
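A minimal sketch of this contrast (the data and names are hypothetical) models physical parthood as a single transitive chain of belonging, and virtual inclusion as membership in many wholes at once:

```python
physical = {            # each part belongs to exactly one physical whole
    "room": "apartment",
    "apartment": "building",
    "building": "London",
}

def physical_wholes(part):
    """Follow the single chain of belonging: transitivity comes for free."""
    wholes = []
    while part in physical:
        part = physical[part]
        wholes.append(part)
    return wholes

virtual = {             # a part may be *included* in many virtual wholes
    "room": {"apartment", "rental contract", "embassy of Ecuador"},
    "embassy of Ecuador": {"Ecuador"},
}

def virtual_wholes(part):
    """Collect every whole reachable by inclusion; there is no single hierarchy."""
    seen, frontier = set(), {part}
    while frontier:
        nxt = set()
        for p in frontier:
            for w in virtual.get(p, ()):
                if w not in seen:
                    seen.add(w)
                    nxt.add(w)
        frontier = nxt
    return seen

print(physical_wholes("room"))   # ['apartment', 'building', 'London']
print(virtual_wholes("room"))    # includes both 'apartment' and 'Ecuador'
```

The same room is nested in one physical hierarchy but reachable from several virtual wholes, which is exactly the asymmetry the Assange example dramatises.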
Parts are not equal. Physical parts belong to their physical wholes, whereas virtual parts can be included in physical parts without belonging to their wholes. Far more parts can be included in a virtual whole than can belong to a real whole. When the philosopher Timothy Morton says that “the whole is always less than the sum of its parts”,[16] he reflects the cultural awareness that reality breaks along the asymmetries between the virtual and the real. A science that sets out to imitate the world ends up constructing its own. The distance Husserl spoke of is not a relative distance between a strange object and its observer, but a mereological distance: two wholes distance each other because they consist of different parts. In its effort to reconstruct the world in ever higher resolution, modernism – and, in its extension, the digital project – overlooked the fact that the relationship between the virtual and the real is not a dialogue. In a play of dialectics between thought and built environment, modernism understood design as a dialogue. Extending modern thought, the digital project sought to fulfil the promise of performance: that a safe future could be calculated and pre-simulated in a parallel, parametric space. Parametricism, and more generally what is understood as digital architecture, stands not only for algorithms, bits and RAM but for the far more fundamental belief that in virtual space one can rebuild reality. Yet with each level of resolution at which science seeks to mimic the world, it adds more parts to it.

The Poiesis of a Virtual Whole
The asymmetry between physical and virtual parts is rooted in Western classicism. In the early classical sciences, Aristotle divided thinking into the trinity of practical action, observational theory and designing poiesis. Since this division in Aristotle’s Nicomachean Ethics, design has been a part of thought and not a part of objects. Design is thus a form of knowledge, literally something that must first be thought. Extending this distinction to the real object: design is not even concerned with practice, with the actions of making or using, but with the metalogic of these actions, the in-between between the actions themselves – the art of dividing an object into a chain of steps by which it can be created. In this definition, design means neither anticipating activities through the properties of an object (function), nor observing its properties (materiality), but partitioning, structuring and organising an object in such a way that it can be manufactured, reproduced and traded.
To illustrate poiesis, Aristotle made use of architecture.[17] No other discipline exposes so great a poetic gap between theory, activity and making. Architecture deals first with the coordination of the construction of buildings. As the architectural historian Mario Carpo outlines in detail, the revived interest in classicism and the humanistic discourse on architecture began in the Renaissance with Alberti’s treatise: a manual that defines built space, and the ideas about it, solely through words. Once thought was coded into words, the alphabet enabled the architect to distance themselves physically from the building site and the built object.[18] Architecture as a discipline thus begins not with buildings, but with the first instructions written by architects to delegate the building.
A building is then anticipated by a virtual whole that enables one to subordinate its parts. This is what we usually refer to as architecture: a set of ideas that pre-empt the buildings they comprehend. The role of the architect is to imagine a virtual whole, drawn as a diagram, sketch, structure, model or any other kind of representation, that connotes the axes of symmetry and the transformations necessary to derive a sufficient number of parts from it. Architectural skill is then valued by the coherence between the virtual and the real, the whole and its parts, the intention and the executed building. Today’s discourse on architecture is the surplus of an idea – one might call it the autopoiesis of architecture, or merely a virtual reality. Discourse on architecture is a commentary on the real.

Partitioning Architectures
From the very outset, architecture distanced itself from the building, yet also aimed to represent reality. Virtual codes were never autonomous from instruments of production. The alphabet and the technology of the printing press allowed Alberti to describe a whole ensemble distinct from any real building. Coded in writing, printing allowed theoretically infinite copies of an original design. Over time, the matrices of letters became the moulds of the modern production lines; as Mario Carpo points out, the principle remained the same.[19] Any medium that incorporates and duplicates an original idea is more architecture than the built environment itself. Bound to the mould, innovation in architectural research could be valued in two ways: quantitatively, in its capacity to partition a building at ever higher resolution; qualitatively, in its capacity to represent a variety of contents with the same form. Architecture thereby faced the dilemma of having to design a reproducible standard that could partition as many different forms as possible in order to build non-standard figurations.[20]
The dilemma of the non-standard standard mould is found in Sebastiano Serlio’s transcription of Alberti’s codes into drawings. In the first book of his treatise, Serlio introduces a descriptive geometry that reproduces any contour and shape of a given object through a sequence of rectangles.[21] For Serlio, the skill of the architect is to simplify the given world of shapes further, until rectangles become squares. The reduction finally enables the representation of physical reality in architectural space through an additive assembly of either empty or full cubes. By building a parallel space of cubes, architecture can be partitioned into a reproducible code – in Serlio’s case, a set of proportional ratios. From that moment on, however, stairs no longer consist only of steps; they have to be built with invisible squares and cubes too.
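Read computationally (an assumed reading, not Serlio’s own terms), this reduction is a voxelisation: any contour is coded as a grid of full or empty cells, as in this minimal sketch:

```python
# Approximate a shape by marking which cells of a square grid are full.

def voxelise(contour_test, width, height, cell=1.0):
    """Return a grid of full/empty cells approximating a shape.

    contour_test(x, y) -> bool says whether a point lies inside the shape;
    each cell is sampled at its centre.
    """
    return [
        [contour_test((i + 0.5) * cell, (j + 0.5) * cell)
         for i in range(width)]
        for j in range(height)
    ]

def inside_circle(x, y):
    """Example contour: a circle of radius 3 centred at (4, 4)."""
    return (x - 4) ** 2 + (y - 4) ** 2 <= 9

for row in voxelise(inside_circle, 8, 8):
    print("".join("#" if full else "." for full in row))
```

The finer the cell, the closer the approximation – which is exactly the trajectory the next paragraph follows down to 3D-printed dust.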
Today, Serlio’s architectural cubes are rendered obsolete by 3D-printed sand. By shrinking parts to the size of a particle of dust, any imaginable shape can be approximated by adding only one kind of part. 3D printing offers a non-standard standard, and with it, five hundred years of architectural development come to an end.

Replicating: A Spatial Practice of Representations
3D printing dissolves existing partitioning schemes into particles and dust. A 3D printer can not only print any shape; it can also print at any place, at any time. The development of 3D printing was driven mainly by DIY hobbyists in the open-source community. One of the pioneering projects here is the RepRap project, initiated by Adrian Bowyer.[22] RepRap is short for replicating rapid prototyper. The idea behind it is that if you can print any kind of object, you can also print the parts of the machine itself. This breaks with the production methods of the modern age. Since the Renaissance, designers have crafted originals and built moulds from them in order to print as many copies as possible. This also explains the economic valuation of the original, and why authorship is so vehemently protected in legal terms. Since Alberti’s renunciation of drawings in favour of the more accurate reproduction of his original idea through textual encoding, the value of an architectural work has consisted primarily in the coherence of a representation with a building: a play of virtual and real. Consequently, an original representation that could cast a building was valued more highly than its physical presentation, and architectural design was oriented towards reducing the amount of information needed to cast. This top-down compositional thinking of original and copy becomes obsolete with the idea of replication.
Since the invention of the printing press, the framework of how things are produced has not changed significantly. With a book press you can press a book, but with a book you can’t press a book. With a 3D printer, however, you can print a printer. A 3D printer does not print copies of an original, not even in endless variations: it replicates objects. The produced objects are not duplicates, because they are not imprints of lower quality. Printed objects are replicas – objects with the same, similar, or even additional characteristics to those of their replicator.
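A toy sketch of the RepRap idea (the class and part list are hypothetical): a machine that can print any part, including the parts it is itself made of, can replicate rather than copy.

```python
PRINTER_PARTS = ["frame", "extruder", "motor", "controller"]

class Printer:
    def print_part(self, part: str) -> str:
        # A mould-based press could only cast one original shape;
        # a printer produces any requested part on demand.
        return part

    def replicate(self) -> "Printer":
        """Print every part of a printer, then assemble a new replicator."""
        parts = [self.print_part(p) for p in PRINTER_PARTS]
        assert sorted(parts) == sorted(PRINTER_PARTS)
        return Printer()

offspring = Printer().replicate()
print(isinstance(offspring, Printer))  # True: a replica, not a copy
```

The offspring has the same capacities as its parent, including the capacity to replicate again, with no privileged original anywhere in the chain.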

A 3D printer is a groundbreaking digital object because it manifests the foundational principle of the digital – replication – at the scale of architecture. The autonomy of the digital is based not only on the difference between 0 and 1 but on the differences in their sequencing. In the mathematics of the 1930s, the modernist project of a formal mimicry of reality collapsed with Gödel’s proof of the necessary incompleteness of all formal systems. Mathematicians then understood that far more precious knowledge might be gained if one learned to distance oneself from its production. The circle of scientists around John von Neumann, who developed the basis of today’s computation, started from one of the smallest capabilities in biology: to reproduce. Bits, as concatenations of simple building blocks with an integrated possibility of replication, made it possible, just by sequencing links, to build first logical operations, then programs, and, by connecting those programs, today’s artificial networks.[23] Artificial intelligence is artificial, but it is also, in this sense, living intelligence.
To this day, it is computerisation, not computation, that is at work in architecture. By pursuing the modern project of reconstructing the world as completely as possible, the digital project computerised a projective cast[24] in high resolution, yet without transferring the fundamental principles of interlinking and replication to the dimensions of built space.

From Partitioning to Partaking
The printing press depends on a mould to duplicate objects. The original mould was far more expensive to manufacture than its copies, so the casting of objects had to bundle available resources. This demanded high initial investment, leading to an increasing centralisation of resources in order to scale the mass-fabrication of standard objects on the assembly line. Digital objects, by contrast, need no mould. The self-replication offered by 3D printing means that resources do not have to be centralised; digital production shifts to distributed manufacturing.[25]
Independent of any mould, digital objects, as programs, reproduce themselves seamlessly at zero marginal cost.[26] As computation progresses, a copy has less and less value. Books, music and films fill fewer and fewer shelves, because owning a copy has no value when copies are ubiquitously available online. And the internet does not copy; it links. Although not yet fully integrated into the current TCP/IP protocol,[27] the basic premise of hyperlinking is that linked data adds value.[28] Links refer to new content, further readings, and so on. With a close-to-infinite possibility of self-reproduction, the number of objects that can be delegated and repeated becomes meaningless. What counts then is the hyper-: the difference in kind between data, programs and, eventually, building parts. In his identification of the formal foundations of computation, the mathematician Nelson Goodman pointed out that beyond a specific performance of computation, difference, and thus value, can only be generated when a new part is added to the fusion of parts.[29] What is essential for machine intelligence is the dimensionality of its models, i.e., the number of its parts. Big data refers less to the amount of data than to the number of its dimensions.[30]
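A minimal sketch of Goodman’s point (my encoding, with sets standing in for individuals): fusion is idempotent, so re-adding an already-present part makes no difference, and only a genuinely new part changes the whole.

```python
def fuse(whole: frozenset, part: frozenset) -> frozenset:
    """Fusion of two individuals, modelled here as set union."""
    return whole | part

building = frozenset({"wall", "roof", "door"})

copy = fuse(building, frozenset({"door"}))         # an existing part
extension = fuse(building, frozenset({"sensor"}))  # a new part

print(copy == building)       # True: no new part, no difference, no value
print(extension == building)  # False: a new part makes a difference
```

On this reading, the value of a link or a building part lies in the new dimension it adds to the fusion, not in one more identical copy.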

With increasing computation, architecture shifted from an aesthetic of smoothness, which celebrated the mastery of an infinite number of building parts, to one of roughness. Roughness demands to be thought (brute). The architectural historian Mario Carpo is right to frame this as nostalgic, as “digital brutalism”.[31] Like the brutalism that wanted to stimulate thought, digital roughness aims to extend spatial computability: the capability to extend thinking, and the architecture of a computational hyper-dimensionality. Automated intelligent machines can accomplish singular goals but are alien to common reasoning. Limited to a ratio of a reality – a dimension, a filter, a perspective – machines obtain partial realities only. Taking these as the whole excludes those who are not yet included and that which cannot be divided: the absolute of being human(e).
A whole economy has evolved from the partial particularity of automated assets, ahead of the architectural discipline. It would be a mistake to understand the ‘sharing’ of the sharing economy as having something “in common”. On the contrary, computational “sharing” does not partition a common use; it enables access to multiple, complementary value systems in parallel.

Cities now behave more and more like computers. Buildings are increasingly automated; they use fewer materials and can be built in less time, at lower cost. More buildings are being built than ever before, yet fewer people can afford to live in them. The current housing crisis has revealed that buildings no longer necessarily need to house humans or objects. Smart homes can optimise material, airflow, temperature or profit, but they are blind to the trivial.

It is a mistake to compute buildings as though they were repositories or enclosures, no matter how fine-grained their resolution. The value of a building no longer derives only from the rent on a slot of space, but from its capacities to partake. With this, the core function of a building changes from inhabitation to participation. Buildings no longer frame and contain: they bind, blend, bond, brace, catch, chain, chunk, clamp, clasp, cleave, clench, clinch, clutch, cohere, combine, compose, connect, embrace, fasten, federate, fix, flap, fuse, glue, grip, gum, handle, hold, hook, hug, integrate, interlace, interlock, intermingle, interweave, involve, jam, join, keep, kink, lap, lock, mat, merge, mesh, mingle, overlay, palm, perplex, shingle, stick, stitch, tangle, tie, unite, weld, wield, and wring.
In daily practice, BIM models highlight not resolution but linkages, integration and collaboration. With further computation, distributed manufacturing, automated design, smart contracts and distributed ledgers, building parts will literally compute the Internet of Things and, eventually, our built environment: peer-to-peer, or better, part-to-part, via the distributive relationships between their parts. For the Internet of Things, what else should its hubs be but buildings? Part-to-part habitats can shape values through an ecology of linkages, through a forest of participatory capacities. So, what if we could participate in the capacities of a house? What if we no longer had to place every brick, no longer had to delegate structures, but rather let parts follow their own paths, take their own decisions, and participate among us, together, in architecture?


[1] S. Kostof, The City Assembled: The Elements of Urban Form Through History (Boston: Little, Brown and Company, 1992).
[2] J. Aspen, “Oslo – the triumph of zombie urbanism,” in E. Robbins, ed., Shaping the City (New York: Routledge, 2004).
[3] The World Bank actively promotes housing as an investment opportunity for pension funds, see: The World Bank Group, Housing finance: Investment opportunities for pension funds (Washington: The World Bank Group, 2018).
[4] G. M. Asaftei, S. Doshi, J. Means, S. Aditya, “Getting ahead of the market: How big data is transforming real estate”, McKinsey and Company (2018).
[5] G. Deleuze, “Postscript on the Societies of Control,” October 59 (1992), 3–7, at 6.
[6] Ibid, 4.
[7] Ibid, 6.
[8] R. Braidotti, Posthuman Knowledge (Medford, Mass: Polity, 2019).
[9] T. Piketty, Capital and Ideology (Cambridge, Mass: Harvard University Press, 2020).
[10] A. McAfee, E. Brynjolfsson, Machine, platform, crowd: Harnessing our digital future (New York: W.W. Norton & Company, 2017).
[11] H. Lefebvre, The Production of Space (Oxford: Basil Blackwell, 1991), 33.
[12] Ibid, 8.
[13] E. Husserl, Logische Untersuchungen. Zweiter Teil: Untersuchungen zur Phänomenologie und Theorie der Erkenntnis, trans. “Logical Investigations, Part Two: Investigations into the Phenomenology and Theory of Knowledge” (Halle an der Saale: Max Niemeyer, 1901).
[14] E. Husserl, Cartesianische Meditationen und Pariser Vorträge, trans. “Cartesian Meditations and Parisian Lectures” (The Hague: Martinus Nijhoff, Husserliana edition, 1950).
[15] L. Bryant, The Democracy of Objects (Ann Arbor: University of Michigan Library, 2011).
[16] T. Morton, Being Ecological (London: Penguin Books Limited, 2018), 93.
[17] Aristotle, Nicomachean Ethics 14, 1139 a 5-10.
[18] M. Carpo, Architecture in the Age of Printing (Cambridge, Mass: MIT Press, 2001).
[19] M. Carpo, The Alphabet and the Algorithm (Cambridge, Mass: MIT Press, 2011).
[20] F. Migayrou, Architectures non standard (Paris: Éditions du Centre Pompidou, 2003).
[21] S. Serlio, V. Hart, P. Hicks, Sebastiano Serlio on architecture (New Haven and London: Yale University Press, 1996).
[22] R. Jones, P. Haufe, E. Sells, I. Pejman, O. Vik, C. Palmer, A. Bowyer, “RepRap – the Replicating Rapid Prototyper,” Robotica 29, 1 (2011), 177–91.
[23] A. W. Burks, Von Neumann's self-reproducing automata: Technical Report (Ann Arbor: The University of Michigan, 1969).
[24] R. Evans, The Projective Cast: Architecture and Its Three Geometries (Cambridge, Massachusetts: MIT Press, 1995).
[25] N. Gershenfeld, “How to make almost anything: The digital fabrication revolution,” Foreign Affairs, 91 (2012), 43–57.
[26] J. Rifkin. The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (New York: Palgrave Macmillan, 2014).
[27] B. Bratton, The Stack: On Software and Sovereignty (Cambridge, Massachusetts: MIT Press, 2016).
[28] J. Lanier, Who Owns the Future? (New York: Simon and Schuster, 2013).
[29] N. Goodman, H. S. Leonard, “The calculus of individuals and its uses,” The Journal of Symbolic Logic, 5, 2 (1940), 45–55.
[30] P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (London: Penguin Books, 2015).
[31] M. Carpo, “Rise of the Machines,” Artforum, 3 (2020).

“Digital Matter”; “Intelligent Matter”; “Behavioural Matter”; “Informed Matter”; “Living Matter”; “Feeling Matter”; “Vibrant Matter”; “Mediated Matter”; “Responsive Matter”; “Robotic Matter”; “Self-Organised Matter”; “Ecological Matter”; “Programmable Matter”; “Active Matter”; “Energetic Matter”. No term enjoys a better reputation in today’s experimental architectural discourse. Gently provided by a myriad of studios hosted in pioneering universities around the world, these expressions illustrate the redemption of a notion that has traditionally been dazzled by form’s radiance. After centuries of irrelevance, “Matter” has recently become a decisive term; it illuminates not just the field of experimental architecture, but the whole spectrum of our cultural landscape: several streams in philosophy, art and science have vigorously embraced it, operating under the gravitational field of its holistic and non-binary constitution.
However, another Copernican revolution is flipping today’s experimental academic architecture from a different flank. In parallel to matter’s redemption, and after the labyrinthine continuums characteristic of the ’90s, discreteness claims to be the core of a new formal paradigm. Besides its Promethean vocation and renewed cosmetics, the discrete design model restores the relevance of a term that has traditionally been fundamental to architecture: the notion of part. However, in opposition to previous architectural modulations, the part’s current celebration is traversed by a Faustian desire for spatial and ontological agency, one that severely precludes any reverential servitude to its whole.
The singular coincidence of matter’s revival on the one side and the discrete turn on the other opens a debate about their possible conflicts and compatibilities in the field of experimental architecture. In this essay, the discussion gravitates around one single statement: the impossibility of a materialist architectural part-thinking. The argument unfolds by approaching a set of questions and analysing the consequences of their possible answers: how does matter’s revival contribute to architectural part-thinking? Is matter’s revival a mere importation of formal attributes? What are the requirements for a radical part-thinking in architecture? Is matter well equipped for this endeavour? In short, are the notions of matter and part-thinking compatible in an architectural environment?
Pre-Socratic philosophy defined matter as a formless primordial substratum that constitutes all physical beings. Its irrevocable condition is that of being “ultimate”: matter lies in the depths of reality as more fundamental than any definite thing.[1] Under this umbrella, pre-Socratic philosophy ramifies into two branches: the first associates matter with continuity, the second with discretism.
Anaximander is the standard-bearer of the first type: the world is pre-individual in character and is fuelled by the apeiron, a continuum to which all specific structures can be reduced. We find traces of this sort of materialism in Gilles Deleuze’s “plane of immanence”, Bruno Latour’s “plasma” and Jane Bennett’s “vibrant matter”. Democritus is the figurehead of the second type: the world is composed of sets of atoms, that is, privileged discrete physical elements whose distinct combinations constitute the specific entities that populate the world. Resonances of this sort of materialism can be found in the “quanta” of contemporary quantum mechanics. Independently of their continuous or discrete nature, both types of materialism are underpinned by an ontological assumption: the identification of matter with an ultimate cosmic whole. To this purpose, matter’s generic condition is decisive: its lack of specificity is precisely what grants matter the status of “ultimate”, which logically and chronologically precedes distinction.
Architecture’s conceptualisation of matter has not been impermeable to these philosophical discourses. Aristotelian hylomorphism projected a negative reputation onto matter by converting it into the reverential servant of form – a servitude absent in pre-Socratic philosophy and introduced, in different ways, by Plato and Aristotle. In recent centuries, however, many architectural projects have opposed this status quo by capitalising on both types of materialism. Since the Enlightenment, and still under form’s reign, matter has been recovering its positive pre-Socratic character by absorbing all the attributes traditionally ascribed to form. Yet it has also undergone a conceptual replacement that is crucial to this discussion: matter moved from a marginal role in a hylomorphic dualist scheme to the solitary leadership of an ultimate holism. As we will see below, in architecture, and particularly since the Enlightenment, matter’s relevance has been gradually recovered through its association with two key concepts: truthfulness, emphasised by authors of the late 18th and 19th centuries such as Viollet-le-Duc and Gottfried Semper, and vitalism, underlined by authors of the 19th and early 20th centuries such as Henri Bergson and Henri Focillon.[2] Today this process has culminated in Eric Sadin’s notion of antrobology, that is, the “increasingly dense intertwining between organic bodies and ‘immaterial elfs’ (digital codes), that sketches a complex and singular composition which is determined to evolve continually, contributing to the instauration of a condition which is inextricably mixed ‘human/artificial’.”[3]
In this technological framework and through the notions of information, platform and performance, matter’s traditional attributes have been replaced by those of form. Despite keeping the term “matter” as a signifier, the disorder, passivity and homogeneity that conventionally characterised its significance have been substituted by form’s structure, activity and heterogeneity. However, one crucial feature that is absent in the dualistic hylomorphic model has been reintroduced: matter’s pre-Socratic condition of being ultimate.
This incorporation is decisive when it comes to architectural part-thinking. In spite of the great popularity that matter has achieved within contemporary experimental architecture, its ultimate condition precludes any engagement with architectural part-thinking: whether as a single continuous field or as a set of discrete particles, matter exalts a single holistic medium that lies at the core of reality, that is, a fundamental substratum (whole) to which all specific entities (parts) can be reduced. In a context in which designers use the power of today’s supercomputation to notate the inherent discreteness of reality instead of reducing it to simplified mathematical formulas[4] or fields, approaching reality through generic, Euclidean points (particles) rather than distinct elements (parts) constitutes an unnecessary process of reduction that dissolves the part’s autonomy.
This essay develops this argument in two steps. First, it states that the current culmination of matter’s revival in experimental architecture is, paradoxically, nothing but the exaltation of form. Under the same signifier, matter’s signification has been replaced by form’s: all the attributes that in the hylomorphic model were associated with the latter have now moved to the former, converting matter’s signifier into just another term with which to conjure the significance of form. However, there is a crucial pre-Socratic introduction in relation to the hylomorphic model: matter is now understood as being also the ultimate single substance of reality, and not just the compliant serf of another element (form). This holistic vocation can be traced in contemporary experimental architecture along matter’s pre-Socratic distinction between a continuous field (Anaximander’s apeiron) and a discrete set of particles (Democritus’s atoms).
Second, this essay argues that current materialism, in either of its two registers, is incompatible with architectural part-thinking. The argument first identifies and evaluates three groups of architectural parts (topological, corpuscular and ecological) in the current experimental architectural landscape, and then proposes a fourth, speculative architectural part based on the notion of limit. If the idea of part demands a certain degree of autonomy from the whole, it cannot be reducible to any ultimate substratum, and matter’s holistic condition therefore becomes problematic in both its continuous and its discrete register. The latter demands particular attention: discretism’s spatial countability might lead us to confuse the notion of particle with that of part. Yet they differ significantly: while particles are discrete only from a mathematical perspective (countable), parts are discrete also from an ontological perspective (distinct). Parts require at least both dimensions of discreteness in order to be considered autonomous from any exteriority, while simultaneously keeping their capacity to participate in it.
Architectural part-thinking then demands a radical formal approach. It requires a notion of form that operates at every level of scale, that is, an immaterialist model that recursively avoids any continuous (field) or discrete (particle) ultimate substratum to which parts could be reduced. This pan-formalism would then imply the presence of a form beyond any given form, understanding the term “form” as an autonomous spatio-temporal structure.
Matter’s Recovery Process in Architecture: Truthfulness, Vitalism and Antrobology
Since Ancient Greece, architecture has interpreted the notion of matter through Aristotle’s hylomorphic scheme: matter is a disordered, passive and homogeneous mass awaiting a structured, active and heterogeneous pattern (form). Within this framework the architect is constituted as a demiurge: they operate from a transcendent plane in order to inform matter, that is, to structure its constitution through a defined pattern. Since the Enlightenment, however, matter’s signifier has gradually replaced its signification with that of form through three concatenated strategies: truthfulness, vitalism and antrobology.
The concept of truthfulness in architecture should be read in opposition to the idealism of authors like Alberti or Palladio. In his De re aedificatoria, Alberti claimed that “architecture is not about artisan techniques but about ‘cose mentale’.”[5] What concerned him was not material attributes such as colour or texture, but the geometrical proportions of the forms he produced with matter. This stance becomes evident in his façade for the Malatesta Temple of 1450.

Conversely, some centuries later, authors like Ruskin, Viollet-le-Duc and Semper defended the relevance of matter in architecture, asserting that the choice of a material should depend on the laws dictated by its nature, such that “brick should look like brick; wood, wood; iron, iron, each according to its own mechanical laws.”[6] Rondelet and Choisy also gave importance to the truth of the material, particularly through their exhaustive constructional drawings.

However, this group of authors remained idealistic: the use of materials was determined by the idea that the architectural object was intended to express. In that sense, and although its internal structure was recognised, matter was still subordinate to an external idea, that is, to an external form.
Some decades later, in his The Life of Forms in Art (1934), Henri Focillon dignified matter through a strategy based on a different concept: vitalism. While arguing that the development of art is inextricably linked to external socio-political and economic conditions, Focillon associated an autonomous formal mutation with it by underlining matter’s inherent capacity for movement and metamorphosis. Already present in the Baroque and empowered by the Enlightenment’s idea of “natura naturans”, concepts like the “Bildungstrieb”, the “Thatkraft” or the “Urpflanze” articulated a vitalist approach to matter closely related to German Expressionism. Ruskin and Semper’s seminal materialism, based on the truth of the material, gave way to a radical pragmatism in which architects used hybridised materials in order to relate to natural metamorphosis. Many glass-based projects of the early 20th century replicate these morphogenetic processes, an attitude already present in the Gothic. In resonance with Bergson’s élan vital, a hypothetical force that explains the evolution and development of organisms, certain uses of concrete imitated the formal exuberance of morphogenetic natural processes, as can be seen in Rudolf Steiner’s Goetheanum (1928) and Erich Mendelsohn’s Einstein Tower (1921), and, with different materials, in Hans Poelzig’s Großes Schauspielhaus (1919).

Moreover, the use of concrete established a continuity between form and structure characteristic of the organic beings so greatly admired at the time. A progressive material vitalism was thus constituted through a hylozoic approach based on Einstein’s theories of matter-energy interconvertibility, which suggested an understanding of matter as a set of energetic perturbations rather than mere inert mass. In this sense, and according to Henry van de Velde, matter had not only a mechanical value, but an active dispositionality that was the consequence of its “formal vocation”. However, vitalism also had its conservative reverse. Fuelled by the phenomenological work of Rasmussen and Norberg-Schulz, architects such as Herzog & de Meuron, Steven Holl and Peter Zumthor propose a haptic approach to architecture that relies on materials as symbolic shapers of architectural space. Under this scenario, and in close relation to Merleau-Ponty’s notion of “flesh”, matter is still understood as a holistic repository of tactile and cultural memory.
Despite the general disdain that Modernism showed for materiality during the first half of the 20th century, according to Eduardo Prieto truthfulness and vitalism gradually contributed to the reconsideration of matter as a substance with a certain agency.[7] This process was based not on the exaltation of the passivity, neutrality and homogeneity that originally characterised matter, but on the importation of attributes from the notion of form. Ruskin’s truthfulness rests precisely on the understanding that matter has a specific inner character that makes it heterogeneous, while the vitalism of Steiner alludes to the metamorphic capacities of living beings.
However, both cases remain idealistic. Truthfulness asserts the need for an external form to choose the matter that best suits its purposes. Vitalism claims that matter should be seen as a material of organic expression that still needs an artist or architect to unveil its aesthetic potential for metamorphosis. In both cases, matter is seen not just in opposition to an external form, but also under its control. In this sense, the vitalism defended by Bergson differs from that of Deleuze: for the former, matter is still a generic substance that needs an artist to particularise it, that is, an élan vital to form it. For Deleuze, conversely, matter is an immanent reality: it provides form to itself and requires no transcendental agent. This Deleuzian conception of matter is emphasised today by New Materialism, whose statements on the matter-form problem are based “on the idea that matter has morphogenetic capacities of its own and does not need to be commanded into generating form.”[8] In this sense, matter is no longer seen in opposition to form; “it is not a substrate or a medium for the flow of desire, but it is always already a desiring dynamism, a reiterative reconfiguring, energised and energising, enlivened and enlivening.”[9]
This philosophical approach reverberates with our current technological condition. After the stages of truthfulness and vitalism, Sadin’s antrobology culminates an architectural recovery of matter that is paradoxically based on the replacement of its signification by that of form. Faced with a dual ontology that no longer alludes to Heideggerian human nudity but to a planet inhabited by algorithmic beings that live with and against us, Eric Sadin defines our technological scenario as Antrobological. This notion expresses the “increasingly dense intertwining between organic bodies and ‘immaterial elfs’ (digital codes).”[10] The propagation of artificial intelligence and the multi-scalar robotisation of the organic establish, in addition to a change of medium, a change of condition: their algorithmic power does not merely offer itself as an automatic pilot for daily life; it also triggers a radical transformation of our human nature, setting up a perennial and universal intertwining between bodies and information. In this sense, the multidisciplinary generalisation of machine learning, progress in genetic engineering and the robotisation of the mundane no longer refer to a humanity that is merely improved or enriched, but to a humanity that is intertwined: it unfolds through a physiological platform woven by algorithmic, organic, robotic and ecological agents whose symbiosis is not metaphorical or narrative, but strictly performative. It is precisely under this scenario that “artificial extelligence” becomes “artificial intelligence”: it executes an exercise of incorporation in which intelligence, eidos, or what has traditionally been understood as form, is no longer an external entity that articulates matter from outside, but its immanent circumstance.
The historical and incremental process of matter’s legitimation, based initially on the truthfulness of Ruskin and the vitalism of Steiner, culminates today with the celebration of the notions of platform, information and performance that singularise Sadin’s antrobology. Recent theorisations of concepts related to computation and design, such as Keller Easterling’s “medium”[11] or Benjamin Bratton’s “stack”,[12] are likewise deeply underpinned by these three expressions. However, it is crucial to note that the term “form” is present in all of them, associating each expression with one of form’s three main attributes: structure (information), activity (performance) and heterogeneity (platform).
While matter “is that which resists taking any definite contour”,[13] form refers to the active presence of a distinguished and qualified non-ultimate structure that contains other forms at every level of scale and that can occasionally change and establish relationships. It is under this framework that the previous terms should be read in relation to experimental architecture. To provide a platform means to provide the conditions for an evolving intertwining between forms that permits the promiscuous co-existence of difference, that is, of heterogeneity. Thus, a platform is not a field: in opposition to the latter, the former does not permit any sort of reductionism; its elements are not mere emergences, as occurs with fields, but singularities with distinct origins. To provide information means to provide structure: it precludes disorder by establishing a spatio-temporal, non-ultimate organisation. However, given that every entity already has a form and we cannot imagine a formless element, to inform actually means to transform. To provide performance, in contrast, means to present rather than represent: it produces an operative impact on the set of conditions in which it is placed, instead of merely representing an absent entity, as would be the case with a metaphor.
Under Sadin’s antrobology, the disorder, passivity and homogeneity that traditionally identified matter are replaced by the characteristics that qualified form in the hylomorphic model: structure (information), activity (performance) and heterogeneity (platform). However, if the process of legitimation of matter is rooted in replacing its attributes with those of form, it becomes increasingly unsustainable to keep referring to it as “matter” when, especially in Sadin’s antrobology and from a hylomorphic point of view, matter is actually empty of matter and full of form.
Matter’s Ultimate Condition and Part-Thinking
However, the rupture of the hylomorphic dichotomy caused by matter’s absorption of form has implied the introduction of a pre-Socratic condition of matter: that of being ultimate. Matter is no longer understood as one of the components of a dualistic model, but as a single holistic substance whose structure, activity and heterogeneity underlie the emergence of any specific entity. This model, technologically underpinned by Sadin’s antrobology, has been articulated by contemporary experimental architecture according to the two types of materialism that differentiate pre-Socratic philosophy: as a continuous field (Anaximander’s apeiron) or as discrete particles (Democritus’s atoms). However, their common “ultimate” condition obstructs architectural part-thinking: if the notion of part demands an autonomy that can be exhausted neither in its outer participation in a bigger ensemble nor in its inner constitution through a smaller one, matter’s holism becomes problematic. Indeed, if any entity (part) can be deduced from a privileged underlying substratum (whole), its autonomy is called into question.
Anaximander’s apeiron model is the most popular representative of pre-Socratic continuous approaches to matter. For the Greek philosopher, apeiron refers to the notions of the indefinite and the unlimited, alluding explicitly to the origin (arché) of all forms of reality. Precisely because apeiron, as its etymology suggests, is that which cannot be limited, it does not have in itself any specific form, that is, it is not definable. It is therefore a continuous material substratum, vague and boundaryless, capable of supporting the opposites from which all the world’s differentiation emerges. Besides Bruno Latour’s “plasma”, described by its author as that unknown and material hinterland which is not yet formatted, measured or subjectified, one of the most popular contemporary elaborations of this holistic theory of the apeiron is Jane Bennett’s “throbbing whole”. For the American philosopher, objects would be “those swirls of matter, energy, and incipience that hold themselves together long enough to vie with the strivings of other objects, including the indeterminate momentum of the throbbing whole”, something that, according to Harman, “we already encountered, in germ, in the pre-Socratic apeiron”.[14] Besides pure formal continuities such as Alejandro Zaera’s Yokohama Port Terminal (2000) or François Roche’s Asphalt Spot (2002), we can find a similar holistic vocation in projects such as Neri Oxman’s Bitmap Printing (2012), Mette Ramsgaard Thomsen’s Slow Furl (2008) and Poletto-Pasquero’s Urban Algae Follies (2016). This renovated notion of matter is usually referred to as behavioural matter, living matter, ecological matter, digital matter, expanded matter, data-driven matter or intelligent matter.
Paradoxically, what is relevant in all these expressions is not the term matter but its qualifier, which systematically refers to spatio-temporal formal arrangements rather than to hylomorphic attributes of matter, emphasising the relevance of form as identifier over matter. Neri Oxman’s “material ecology” is an emblematic example of this phenomenon. Oxman defines the expression as “an emerging field in design denoting informed relations between products, buildings, systems and their environment”.[15] The architect uses the term “informed” in reference to information, and therefore alludes to matter’s inner structure. However, if “matter” is informed, it is no longer a homogeneous and amorphous substance: it contains a digital or a physical structure that operates at every level of scale. Her project Bitmap Printing (2012) acts as a platform that intertwines natural, human and algorithmic agents, whose activity has performative consequences rather than symbolic references. In this sense, given that the project is informed, acts as a platform and performs, it is hard to understand why, under a hylomorphic scheme, we refer to it as a specific configuration of matter rather than as a particular type of form.
However, these three projects, together with the work of authors such as Marcos Cruz, Philip Beesley or Areti Markopoulou, introduce a pre-Socratic attribute of matter absent from the hylomorphic scheme: matter’s condition of being ultimate. In particular, we can find this attribute in the continuous version developed by Anaximander through the notion of apeiron. As we can see in projects such as Philip Beesley’s Hylozoic Ground (2010), full relationality and complete interconnectedness are the basis of a systemic approach to architecture in which the conceptual idea of field articulates Delanda’s “continuous heterogeneity”.
The project is based on the ancient belief that matter has life and should be understood, according to its author, as an active environment of processes rather than as an accumulation of objects. Unlike hylomorphic matter, the anti-maternalistic matter evoked by the project does not contain an Aristotelian pattern that provides structure to it; it is instead self-formed, that is, structured, active and heterogeneous. However, specific parts are always an emergence from an underlying holistic field, that is, a whole. Indeed, continuity is actually capable of producing objects: continuity on one level creates episodic variation on the next, which may be presented as discrete elements, but these are always dependent on that first gradual variation. Under this scheme, part-thinking is very limited because specificity is always a deduction from a privileged underlying substratum. Parts are thus deprived of their autonomy, exhausted instead in their participation as subsidiary members of a whole. As Daniel Koehler suggests, “departing from parts a preconceived whole or any kind of structure does not exist. Parts do not establish wholes, but measure quantities.”[16] And quantities, indeed, begin with individuals, that is, with discreteness.
However, the notion of “discreteness” needs differentiation: not all interpretations of this term permit us to understand its individuals as parts. In this sense, it is crucial to note that pre-Socratic philosophy also articulates a type of materialism based on discreteness: besides the continuity emphasised by Anaximander’s apeiron, Democritus’s atomic model is the most popular representative of this discrete approach to matter. For the Greek philosopher, atoms are not just eternal and indivisible, but also homogeneous, that is, generic. Although atoms differ in form and size, their internal qualities are constant in all of them, producing difference only through their modes of grouping. Atoms are thus particles: generic individuals whose variable conglomerates produce the difference that we observe in the world. As Graham Harman affirms, this form of materialism is “based in ultimate material elements that are the root of everything and higher-level entities are merely secondary mystifications that partake of the real only insofar as they emerge from the ultimate material substrate.”[17]
The atomic model is thus a reductionist model: the different specificities that make up the world are mere composites of a privileged and ultimate physical element. In opposition to the continuous form of materialism, the discrete atomic type is easily misunderstood when it comes to considering its part-thinking capacities, due to a frequent confusion between “part” and “particle”. This association is especially present nowadays in experimental architectural design, particularly under the notion of the “digital” and its inherent discrete nature. Today’s computational power has been aligned with this position through the recognition that “designers use the power of today’s computation to notate reality as it appears at any chosen scale, without converting it into simplified and scalable mathematical formulas or laws.”[18] It assumes “the inherent discreteness of nature”,[19] where the abstract continuity of the spline does not exist. However, this process of architectural discretisation needs differentiation in order to be understood in relation to the notion of part, defined here as an interactive and autonomous element which is not just countable (mathematically discrete) but also distinct (ontologically discrete). Within the contemporary discrete project, three groups of architectural approaches to the notion of part, together with a speculative proposition, need to be distinguished according to their relation to matter’s ultimate condition: topological parts, corpuscular parts, ecological parts and limital parts.
Topological Parts, Corpuscular Parts, Ecological Parts, Limital Parts
There is a first group of proposals in which parts are topological parts: in spite of the granular appearance of their architectural ensembles, their vocation is still derivative of the parametric project. The continuity of their splines has had its resolution reduced through a process of “pixelisation”, but they still operate under the material logic of an ultimate field. The notion of topology should be read here under the umbrella of the Aristotelian concept of topos. While Plato’s term chora refers to a flat and neutral receptacle, the term topos refers to a variable and specific place. In contrast to the flat spaces of modernity, the three-dimensional variability of 1990s spaces produces topographic surfaces in which every point is singular. This results in “a constant modification of the space that leads to a changing reading of the place,”[20] implying the shift from Plato’s chora to Aristotle’s topos. Unlike the universal abstraction of the former, in the Physics Aristotle “identifies the generic concept of space with another more empirical concept, that of ‘place’, always referred to with the term topos. In other words, Aristotle looks at space from the point of view of place. Every body occupies its specific place, and place is a fundamental and physical property of bodies.”[21]
This is very clear in the following text by the Stagirite:
“Again, place (topos) belongs to the quantities which are continuous. For the parts of a body which join together at a common boundary occupy a certain place. Therefore, also the parts of place which are occupied by the several parts of the body join together at the same boundary at which the parts of the body do.”[22]
Aristotle defines topos as a continuous and three-dimensional underlying substratum, but above all as an empirical and localised substratum.
The rhizomatic twists associated with these projects, underpinned by the intensive use of computational tools, seem to oppose the homogeneity of their parts. According to Peter Eisenman, “while Alberti’s notational systems transcribed a single design by a single author, computation has the capacity to produce multiple iterations that the designer must choose from.”[23] Computers function as generators of variability, a fact that seems to promote Eisenman’s inconsistent multiples, calling into question Alberti’s homogeneous spatiality. However, in spite of being countable and distinct, the constitution of the parts associated with projects such as BIG’s Serpentine Pavilion (2016) and The Mountain (2008) or Eisenman’s Berlin Memorial (2005) is reducible to one single formula or equation, that is, to a consistent and calculable single medium (parametricism). Their discrete look is provided by a set of elements which are countable, distinct and interactive, but which cannot be read as parts because their autonomy is restricted for a twofold reason: both their distinction and their position depend on an ultimate system of relations external to the logics of its individuals, thereby evoking the apeiron’s type of materialism. In this sense, parts here should be read as components: their location and form are subordinated to the topological bending of a general surface, precluding any type of autonomy of the part.
There is a second group of experimental projects in which parts are corpuscular parts. Here, architectural ensembles are formalised through countable and qualitatively identical corpuscles, that is, individual entities which are not systematised by any external and preconceived structure. Their advocates follow a path similar – even if this is not their conscious intention – to that of Walter Gropius, Mies van der Rohe and Le Corbusier when they freed themselves from the straitjacket of the symmetry characteristic of the 19th-century Beaux-Arts, championed by architects such as Henri Labrouste or Félix Duban. However, corpuscular parts differ from modern parts in that they are formally identical to one another despite performing different functions. Mario Carpo relates some of this work to Kengo Kuma’s Yure Pavilion (2015) and GC Prostho Museum Research Center (2010) under the expression “particlised”.[24] The term relates to the non-figural, aggregational or atomised way of producing architecture, in which Kuma states that “each element needs to be relieved from contact or structure beforehand, and placed under free conditions.”[25]
Experimental projects such as Bloom (2012) by Alisa Andrasek and José Sánchez or Flow (2016) by Calvin Fung and Victor Huynh also participate in this process of “particlisation” by relying on an ultimate, generic and privileged element: in opposition to modernist assemblies and in resonance with some of the early work of Miguel Fisac, “the building blocks are not predefined, geometric types – like columns or slabs – that only operate for a specific function,”[26] and unlike parametricism they do not derive from a predefined whole.
Instead, the particle’s specific function is an emergent attribute of its interaction. In this sense, what gives specificity to these generic particles is not an a priori and fixed structure, as in modernism, but an a posteriori and evolving relationality with the world. This is problematic for the autonomy demanded by parts, for two reasons. On the one hand, if a part’s specificity is exhausted by its outer relationality, its nomos comes from outside, and we are therefore dealing with Kant’s heteronomy rather than autonomy. On the other hand, if parts are originally generic, they refer to an original standard type which is holistic precisely because it is shared by default by all its members. The fact that specificity is an emergent property in which parts are defined exclusively by their relationships with other parts has been interpreted as their emancipation with respect to the notion of whole. Timothy Morton describes this type of relational process as “simply the last philosophical reflex of modernity”.[27]
Indeed, the instrumental reason characteristic of modernity is still behind this type of operation, because emergent processes are teleological processes. “Emergence is always emergence for”[28] because there is always a holistic target that subjugates the parts to the benefit of the whole. As such, we are not dealing with a mereology of parts, but rather with a mereology of particles: each element is not an incomplete piece that is unique in its identity and therefore irreducible (a part), but rather a generic ultimate element that becomes specific at the price of being relationally dissolved into the whole to which it belongs (a particle). Its being is defined precisely by the relationships it establishes with other elements, and those relationships are the way they are because they are beneficial to a whole.
Timothy Morton affirms that moving past modernity implies the need for a “philosophy of sparkling unicities; quantised units that are irreducible to their parts or to some larger whole; sharp, specific units that are not dependent on an observer to make them real.”[29] Despite their local character, the relations that regulate individuals undervalue the parts on the one hand and overvalue the whole on the other. They undervalue the parts by fully determining their specific behaviour according to external factors, their original character being generic. They overvalue the whole by varying individuals’ specific behaviour according to the benefit of the whole. This position facilitates the emergence of a framework in which bits are associated literally with parts and the act of counting is frequently confused with an act of discretisation. It is therefore crucial to differentiate mathematical discreteness from ontological discreteness. While the first alludes to countable elements (particles), the second alludes to distinct elements (parts).

The lack of distinction characteristic of generic particles prevents their approach through an exercise of architectural “part-thinking”. Instead, we are confronted with the discrete type of materialism elaborated by pre-Socratic philosophy. Although its ultimate condition permits the participation of individuals, it ignores the autonomy that part-thinking requires, operating under a masked heteronomy which provides specificity to generic particles at the cost of exhausting them in external relationality.
There is a third group of recent experimental architectural proposals in which parts are ecological parts; they operate as a set of distinct objects that intertwine with one another under the gravitational field of different systems. The notion of ecology should be interpreted here in keeping with the etymology of the Greek term oikos. Its meaning is that of “house” understood as the set of people and objects forming a domestic space and being regulated by the economy (the nomos of the oikos).
However, the term oikos has traditionally been associated with another very similar one: oikia. Both have been translated as “house”, in the most general sense of the word. Nonetheless, Xenophon outlines a distinction[30] that, although not entirely accepted by all Greek authors, is very useful in approaching the question at hand. The Greek philosopher asserts that the expression oikos refers to a house in the strict sense of a place of residence, whereas the expression oikia denotes not only the house but also the property it contains and its inhabitants.
Based on this distinction, the word oikia would refer to a collection of elements of different natures and sizes whose coexistence and eventual interlacement would give rise to a specific spatial conception. It is formed not only by the house itself, but also by the property it contains (animals, instruments, jewellery, furniture, etc.) and its inhabitants. It would therefore be a large composite of objects whose eventual interlacements over time would form what Xenophon defines as domestic space. In that sense, these spaces not only contain and are contained by other spaces simultaneously, they also never appear as completely closed elements, despite remaining identifiable and extractable. Oikia is thus not produced from a passive Platonic receptacle (chora) or an active Aristotelian substrate (topos); it is constructed instead from the multi-scalar co-existence of various groups and subgroups of systems. The ecological parts characteristic of this branch of experimental architectural projects represent, in different ways, a departure from the materialism analysed in the previous cases. They find an example avant la lettre in the work of Jean Renaudie, particularly in his two housing complexes in Ivry-sur-Seine (1975) and Givors (1974).
Although not all parts fully coincide with the definition provided here, the discreteness of these projects operates with autonomous discrete entities that cannot be interpreted under a materialistic framework; there is no ultimate element acting as an underlying substratum (continuous or discrete) to which entities can be reduced. However, as we have seen, the notion of ecology implies the presence of oikia, that is, a house: a common denominator whose presence can be traced in these projects through a formal homogeneity that traverses the whole composition.
We can find a wide range of experimental architectural formal strategies working in this direction. Daniel Koehler’s Hyper-Nollie (2019) develops a complicit discreteness with more than 40 different parts that are always cooperative and incomplete, never single entities, never fully defined, never identical. However, the continuous connection of its spaces and the fact that each one of them is accessible from each part seem to formally evoke the logics of a relational field, particularly through the homogeneous granularity revealed by a general overview. Nevertheless, the project’s tension between the distinct discreteness of its close view and the texturised continuity of its far view precludes any attempt simply to reduce its parts to an underlying material substratum: each part positions its own interpretation of its context through a complex balance between identity (inherent distinction) and relationality (local complicities).
Despite its assumption of the voxel as a standard unit and its complicity with Christopher Alexander’s notion of structure, Jose Sánchez’s Block’hood (2016) also tends to avoid any full material reductionism to an ultimate being. In spite of its underlying 3D grid, the project provides each voxel with a specific performative behaviour whose specificity is not merely underpinned by relationality but is partly inherent to its constitution. In this sense, each unit approaches our definition of part because, despite the underlying common framework, the voxel’s singularity cannot be merely reduced to that framework or to its relations. Rasa Navasaityte’s Urban Interiorities (2015) approaches the notion of part through a recursive structure of groups inside groups: there is no ultimate element from which the rest of the compositions can be derived, but a recursive process. This partly acts as a holistic system of form production, while at the same time permitting the presence of distinction beyond countability.
These projects represent the different nuances of a part: they operate through the tension established between the part’s autonomy and the part’s participation, that is, between the part’s capacity to be inherently distinct and its capacity to retain something in common with other parts in order to permit local and ephemeral complicities. This type of mereology resonates with what Levi Bryant has defined as a “strange mereology”: “one object is simultaneously part of another object and an independent object in its own right.”[31] Indeed, on the one hand, the parts that we have seen in this last group of projects are autonomous beings in the world that cannot be reduced to other parts. But at the same time, parts are composed of other parts, compose other parts, and relate to other parts. In synthesis, part-thinking demands that parts execute what seems to be a paradox: their constitution as countable and distinct entities that are both independent and relational.
We could synthesise the different approaches towards the definition of part presented here as follows: the first group of projects, constituted by what we have defined as topological parts, leaves aside the part’s autonomy in favour of an underlying field of relations. The second group, whose parts are defined as corpuscular parts, emphasises the part’s countability (mathematical discreteness) over the part’s inherent distinction (ontological discreteness). The third group, composed of ecological parts, still retains a vague remainder of a general background (oikia) that vectorises the distribution of parts. In all of them, matter’s ultimate condition is still present, although in a blurry and definitely weakened version, particularly in the last one. However, we can briefly speculate on a fourth group of architectural parts, associated with the notion of limit, that would emerge from the radical limitation of matter’s ultimate condition.

The notion of limit is at the core of architecture. If we understand architectural practice as the production of interiorities, that is, as the production of spaces within spaces, the idea of a border distinguishing them is decisive. In this sense, the etymology of the term “temple” is particularly revealing: its root “-tem”, present also in the terms témenos, templum and “time”, indicates the idea of a cutout, a demarcation, a frontier, a limit instrumentalised in order to separate the sacred realm of the gods from the profane territory of humans. In ancient Rome, the construction of a temple began with the cumtemplatio, the contemplative observation of a demarcated zone of the sky by the augurs. Through the attentive observation of the trajectories of birds, the sun and the clouds within the selected celestial area, the augurs interpreted the auspices of the city that was about to be founded. Once the observation was completed, the demarcated zone of the sky was projected onto the ground in order to trace the contours of the future temple, the germinal cell of the coming city. Cumtemplatio was thus cum-tem-platio: the tracing of the limits through which the cosmos took on meaning and signification by being projected onto the earth, establishing the ambit in which humans could purposively inhabit the world. Thus, the temple instrumentalised the limit not just as a border between what is sacred and what is profane, that is, between inside and outside, but also as a space in itself, as a frontier territory mediating between the celestial realm of the gods and the terrestrial realm of humanity.
The spatialised register of the limit evoked by the temple, aligned with notions such as the Christian limbo or the Roman limes, lays the foundation for the type of immaterialist parts hypothesised here under the expression limital parts. They expand the decreasingly shy immaterialism present in topological parts, corpuscular parts and ecological parts by limiting any reduction to some sort of ultimate condition of matter. In order to do so, limital parts are liminal, limited and limitrophe: three decisive attributes aligned with supercomputation’s capacity to avoid parametric reductionism.
First, limital parts are liminal, that is, they are the locus of junction and disjunction. The notion of liminality should be read through its instrumentalisation by Arnold van Gennep and Victor Turner: the limit is not the Euclidean dividing line at the core of the Modern Movement’s programmatic zonification but, in its anthropological register, the frontier territory that, in a rite of passage, mediates between the old and new identities of its participants. The parts’ liminality constitutes a daimonic space whose nature is that of “differential sameness and autoreferential difference”;[32] if the limit is in itself and by itself internal differentiation, if in its re-flection the limit separates and divides, then limital parts should necessarily join and disjoin, or, more accurately, limital parts should join what they disjoin. The liminality of limital parts does not mean that their composition is simply the random juxtaposition of a litany of solipsistic monads: in their symbiotic intertwinings, the different liminal parts establish clusters and sub-clusters of performative transfers that are constantly sewing and resewing the limit’s limits. Their operativity is not always structured by harmonic consensus; they engage in constant resistance and deviation, producing spontaneous symbiotic interlacements that overlap without any preconceived agreement, and certainly not without décalages, displacements and misfits.
Second, limital parts are limited, that is, they are distinct and determined. The notion of limitation should be read through its Hegelian instrumentalisation: “The limit is the essentiality of something, it is its determination.”[33] Thus, to limit means to define; the Latin term definire signifies to trace the borders of something in order to separate it from its neighbours. Definire is the establishment of finis, ends. However, the term finis should not be read here only in the light of its topological or chronological sense; it should also be approached in its ontological register: to define means to specify the qualities of a part that make it this part and not that part, avoiding its reduction to any ultimate material substratum. It traces an ontological contour in order to limit the part’s infinite possible variability. A limited part is thus a distinct part; it is determined, but not predetermined, that is, it is not determined avant la lettre. It contrasts with what is open, flexible and generic; in a context where the power of today’s supercomputation makes it possible to notate the inherent discreteness of reality, it is no longer necessary to design with simplified spatial formulas (fields) or repetitive spatial blocks (particles). Today’s computational power applied to architectural design allows an emancipation from reductive laws, whose standardisation is at the core of the material remanences of topological parts, corpuscular parts and ecological parts. Thus, rather than with formulative and open parts, the unprecedented power unfolded by supercomputation lets us operate with massive sets and sub-sets of distinct parts. The limited condition of limital parts does not align with the notion of the generic, nor with derivative concepts such as flexibility, adaptability or resilience, so common in the three previous groups of architectural parts. Thus, rather than flexible, limital parts are plastic (plastiquer, plastiquage, associated in French with the notion of explosion): they vary, but at the price of gaining a new specificity and cancelling the previous one.
Third, limital parts are limitrophe, that is, they are foliated. The notion of limitrophy should be read in light of its instrumentalisation by Jacques Derrida. Rather than effacing or ignoring the limit, Derrida attempts, through his use of the term “limitrophy”, “to multiply its figures, to complicate, thicken, delinearize, fold, and divide the line precisely by making it increase and multiply.”[34] Limital parts are thus thickened, which is the literal sense of the Greek term trepho, that is, to nurture. Under this umbrella, a limitrophe part is not a solipsistic monad or a fragment referring to an absent whole. Limital parts produce inconsistent multiplicities by acquiring a foliated consistency and becoming an edgy, plural and repeatedly folded frontier. Limital parts should thus not orchestrate an abyssal and discontinuous limit: the latter does not form the single and indivisible line characteristic of modernity; rather, it produces “more than one internally divided line.”[35] Thus, limital parts grow and multiply into a plethora of edges. Precisely because of their liminal, limited and limitrophe condition, limital parts are immaterialist: they are not reducible to one, as is the case, with decreasing intensity, of topological parts, corpuscular parts and ecological parts.
Final Considerations
Avoiding matter’s ultimate condition requires understanding form as a spatio-temporal structure that operates at every level of scale. It demands the assumption that there is always a form beyond any given form, avoiding any continuous (field) or discrete (particle) ultimate background to which parts could be reduced. In this sense, and as Graham Harman affirms, “although what is admirable in materialism is its sense that any visible situation contains a deeper surplus able to subvert or surprise it,”[36] the kind of formalism approached here does not deny this surplus; it merely states that this surplus is also formed.
The impossibility of conjugating matter’s ultimate condition with a radical part-thinking suggests a pan-formalism based on a Matryoshka logic: a multiscalar recursivity that does not rely on an ultimate and maternal underlying substratum. Under this framework, and building on the German and Russian formalist traditions later developed by figures such as Colin Rowe, Alan Colquhoun, Alexander Tzonis or Liane Lefaivre, the formalism that could emerge from these statements should not be understood in the sense that there is no excess beneath the architectural forms that are given, but rather in the sense that “the excess is itself always formed.”[37]
The constant and multiscalar presence of form and the avoidance of any ultimate substratum are posited as the two conditions that a radical part-thinking would require; they represent the only way in which the notion of part can be understood in its full radicality, that is, as an interactive and autonomous element which is not just countable (mathematically discrete) but also distinct (ontologically discrete). As we have seen, this approach is incompatible with the notion of matter: although matter’s revival has paradoxically imported all the attributes associated with the hylomorphic understanding of form, the re-introduction of the pre-Socratic ultimate condition represents the clandestine re-introduction of the notion of whole, and therefore an unsurpassable obstacle to part-thinking.
[1] G. Harman, “Materialism Is Not the Solution”, The Nordic Journal of Aesthetics, 47 (2014), 95.
[2] E. Prieto, La vida de la materia (Madrid: Ediciones Asimetricas, 2018), 28-102.
[3] E. Sadin, La humanidad aumentada (Buenos Aires: La Caja Negra, 2013), 152.
[4] M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge: MIT Press, 2017), 71.
[5] Alberti, Re-Aedificatoria, (Madrid: Ediciones Asimétricas, 2012), 21.
[6] G. Semper, The Four Elements of Architecture and Other Writings (Cambridge: Cambridge University Press, 1969), 45-73.
[7] E. Prieto, La vida de la materia (Madrid: Ediciones Asimétricas, 2018), 28-102.
[8] M. Delanda, “Interview with Manuel Delanda”, New Materialism: Interviews and Cartographies, ed. Rick Dolphijn & Iris van der Tuin (London: Open Humanities Press, 2012), 9.
[9] K. Barad, “Interview with Karen Barad”, New Materialism: Interviews and Cartographies, ed. Rick Dolphijn & Iris van der Tuin (London: Open Humanities Press, 2012), 59.
[10] E. Sadin, La humanidad aumentada (Buenos Aires: La Caja Negra, 2013), 152.
[11] K. Easterling, Medium Design (Kindle Edition: Strelka Press, 2018).
[12] B. Bratton, The Stack: On Software and Sovereignty (London: The MIT Press, 2016).
[13] G. Harman, “Materialism is Not the Solution”, The Nordic Journal of Aesthetics, 47 (2014), 100.
[14] Ibid., 98.
[15] N. Oxman, “Material Ecology”, Proceedings of the 32nd Annual Conference of the Association for Computer Aided Design in Architecture ACADIA (2012), 19-20.
[16] D. Koehler, Large City Architecture: The Fourth Part (London, 2018), 19.
[17] G. Harman, “Materialism is Not the Solution” The Nordic Journal of Aesthetics, 47 (2014), 100.
[18] M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge: MIT Press, 2017), 71.
[19] Ibid.
[20] A. Zaera, “Nuevas topografías. La reformulación del suelo,” Otra mirada: posiciones contra crónicas, ed. M. Gausa and R. Devesa (Barcelona: Gustavo Gili, 2010), 116-17.
[21] J. M. Montaner, La modernidad superada (Barcelona: Gustavo Gili, 2011), 32.
[22] Aristotle, Physics, trans. W.A. Pickard (Cambridge: The Internet Classics Archive, 1994).
[23] P. Eisenman, “Brief Advanced Design Studio”, last modified October 2014, https://www.architecture.yale.edu/courses/advanced-design-studio-eisenman-0#_ftn3.
[24] M. Carpo, “Particlised”, Architectural Design, 89, 2 (2019), 86-93.
[25] K. Kuma, Materials, Structures, Details (Basel: Birkhäuser, 2004), 14.
[26] G. Retsin, “Bits and Pieces” Architectural Design, 89, 2 (2019), 43.
[27] T. Morton, Hyperobjects (Minneapolis: University of Minnesota Press, 2013), 119.
[28] Ibid.
[29] Ibid., 41.
[30] K. Algra, Concepts of Space in Greek Thought (London: Brill, 1995), 32.
[31] L. Bryant, The Democracy of Objects (Cambridge: MIT Press, 2017), 215.
[32] E. Trías, Los límites del Mundo (Barcelona: Ariel Filosofía, 1985), 121.
[33] G. W. F. Hegel, The Science of Logic (Cambridge: Heidelberg Writings, 1996), 249.
[34] J. Derrida, “The Animal That Therefore I Am (More to Follow)”, trans. David Wills, Critical Inquiry, 28, 2 (2002), 398.
[35] Ibid., 398.
[36] G. Harman, “Materialism Is Not the Solution”, The Nordic Journal of Aesthetics, 47 (2014), 100.
[37] Ibid.

In this article I will illustrate the affordances of decentralised technologies in the context of commons governance. My aim is to summarise the conversation around the lecture “When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance”, which I gave at the Mereologies Open Seminar organised by The Bartlett School of Architecture at University College London on 25th April 2019. I will also extend the conversation by providing a concrete example of such affordances in the context of a community network.
What is Blockchain? Three Key Concepts around Decentralised Technologies
In 2008, a paper published under the pseudonym Satoshi Nakamoto presented Bitcoin: the first cryptocurrency based purely on a peer-to-peer system.[1] For the first time, thanks to cryptography, no third parties were necessary to solve problems such as double-spending. The solution was achieved through the introduction of a data structure known as a blockchain. In simple terms, a blockchain can be understood as a distributed ledger. “Distributed” refers to a technical property of a system in which certain components are located on different computers connected through a network. The blockchain, in this sense, can be thought of as a “decentralised book” in which agreed transactions can be stored on a set of distributed computers. Data, such as the history of monetary exchanges generated by using cryptocurrencies, can be stored in a blockchain. The key aspect resides in the fact that there is no need to trust a third party, such as a bank server, to store that information.
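To make the notion of a distributed ledger more concrete, a minimal sketch follows of the data structure itself: a chain of blocks in which each block commits to the hash of its predecessor, so that any retroactive edit becomes detectable. This is a toy illustration in Python, not Bitcoin’s actual implementation; it deliberately omits the peer-to-peer network, consensus and proof-of-work that make the real ledger trustless, and all names (ToyLedger, hash_block) are invented for the example.

```python
import hashlib
import json
import time

def hash_block(block):
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class ToyLedger:
    """A toy append-only ledger: each block commits to its predecessor's hash."""

    def __init__(self):
        genesis = {"index": 0, "timestamp": 0, "transactions": [], "prev_hash": "0" * 64}
        self.chain = [genesis]

    def append_block(self, transactions):
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "transactions": transactions,
            # Tamper-evidence: every block commits to the whole previous block.
            "prev_hash": hash_block(self.chain[-1]),
        }
        self.chain.append(block)
        return block

    def is_valid(self):
        """Recompute every link; any retroactive edit breaks the chain."""
        return all(
            self.chain[i]["prev_hash"] == hash_block(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = ToyLedger()
ledger.append_block([{"from": "alice", "to": "bob", "amount": 5}])
assert ledger.is_valid()
```

Because each prev_hash covers the entire previous block, altering an old transaction would require recomputing every subsequent block; in an actual blockchain, the distributed consensus of thousands of computers is what makes such rewriting practically impossible.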
Nakamoto’s article opened what is considered to be the first generation of blockchain technologies.[2] This generation, up to approximately 2013, includes Bitcoin and a number of cryptocurrencies that appeared after it. The second generation, from approximately 2014 onwards, extends these blockchains with capabilities beyond currencies, in the form of automatic agreements or smart contracts.[3] Smart contracts can be understood as distributed applications which encode clauses that are automatically enforced and executed without the need for a central authority. They can be employed, for example, to enable the execution of code that provides certifications, such as a diploma or a registry of lands, according to previously and mutually agreed rules. Again, the novel aspect here is the fact that the execution of such rules, in the form of computer instructions, is distributed across a large number of computers without the need for a central point of control.
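The logic of a smart contract can be sketched with the diploma example just mentioned. Real smart contracts are written in on-chain languages such as Solidity and executed by the nodes of the blockchain itself; the hypothetical Python class below only illustrates the principle that clauses agreed in advance are enforced by code rather than adjudicated by a central authority.

```python
class DiplomaRegistry:
    """Toy sketch of a smart contract: the clauses are fixed at deployment
    and enforced automatically on every interaction."""

    def __init__(self, authorised_issuers):
        # Agreed at "deployment", like the mutually accepted rules of the contract.
        self.authorised_issuers = set(authorised_issuers)
        self.diplomas = {}   # student -> degree
        self.history = []    # stand-in for the public, append-only transaction log

    def issue(self, issuer, student, degree):
        # The clause is enforced by code: no administrator decides case by case.
        if issuer not in self.authorised_issuers:
            raise PermissionError(f"{issuer} is not an authorised issuer")
        self.diplomas[student] = degree
        self.history.append(("issue", issuer, student, degree))

    def verify(self, student):
        # Anyone can check a certification without trusting a third party.
        return self.diplomas.get(student)

registry = DiplomaRegistry(authorised_issuers={"university_a"})
registry.issue("university_a", "alice", "MArch")
assert registry.verify("alice") == "MArch"
```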
Complex sets of smart contracts can be developed to make it possible for multiple parties to interact with each other. This fostered the emergence of the last of the concepts I will introduce around decentralised technologies: the Decentralised Autonomous Organisation (DAO). A DAO is a self-governed organisation in which the interactions between its members are mediated by the rules embedded in the DAO’s code. These rules are sets of smart contracts that encode such interactions, and they are automatically enforced by the underlying technology, the blockchain, in a decentralised manner. DAOs could, for example, hire people to carry out certain tasks or compensate them for undertaking certain actions. Overall, a DAO can be understood as analogous to a legal organisation whose legal documents – its bylaws – define the rules of interaction among members. The development of DAOs has been, unsurprisingly, most popular around financial services.[4] However, DAOs could be used to provide a wide variety of services or to manage resources in a more diverse range of areas. A more artistic example of a DAO is the Plantoid project,[5] a sculpture of a plant which can hire artists to physically modify the sculpture itself according to the rules collectively agreed in the smart contracts encoded in it.
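Continuing in the same toy style, a DAO can be pictured as a bundle of such contracts plus a membership rule. In the sketch below, which assumes a simple majority rule invented for the example, a proposal’s action executes automatically the moment enough members have voted for it; in an actual DAO this logic would be encoded and enforced on-chain rather than in a Python object.

```python
class ToyDAO:
    """Minimal DAO sketch: members propose actions; the encoded rule
    (here, a simple majority) triggers execution without a central authority."""

    def __init__(self, members):
        self.members = set(members)
        self.proposals = {}  # proposal id -> {"action", "votes", "executed"}

    def propose(self, member, proposal_id, action):
        if member not in self.members:
            raise PermissionError("only members may propose")
        self.proposals[proposal_id] = {"action": action, "votes": set(), "executed": False}

    def vote(self, member, proposal_id):
        if member not in self.members:
            raise PermissionError("only members may vote")
        proposal = self.proposals[proposal_id]
        proposal["votes"].add(member)
        # Self-enforcement: the clause fires as soon as its condition is met.
        if not proposal["executed"] and len(proposal["votes"]) > len(self.members) / 2:
            proposal["executed"] = True
            proposal["action"]()

dao = ToyDAO(members={"ana", "ben", "carla"})
dao.propose("ana", "hire-artist", lambda: print("transfer fee to artist"))
dao.vote("ana", "hire-artist")
dao.vote("ben", "hire-artist")  # majority reached: the action executes here
```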
All of these potentials of decentralised technologies represent an emerging research field. Together with other colleagues on the EU project P2PModels,[6] we are exploring some of these potentials and limitations in the context of the collaborative economy and, more precisely, in some of the models emerging around Commons-Based Peer Production.
Collaborative Economy and Commons-Based Peer Production
The collaborative economy is a growing socio-economic phenomenon in which individuals produce, exchange and consume services and goods, coordinating through online software platforms. It is an umbrella concept that encompasses different initiatives, and significantly different forms are emerging. There are models in which large corporations control the platform, ensuring that its technologies and the knowledge held therein remain proprietary and closed. Uber, a riding service, and Airbnb, a short-term lettings service, are perhaps the best-known examples of such initiatives. They differ from models that revolve around Commons-Based Peer Production (CBPP), in which individuals produce public goods by dispensing with hierarchical corporate structures and cooperating with their peers.[7] In these models, participants of the community govern the assets, freely sharing and developing technologies.[8] Some of the best-known examples of initiatives around such commons-based models are Wikipedia and GNU/Linux, a Free/Libre Open Source Software (FLOSS) operating system. Commons-based models of the collaborative economy are, however, extending to areas as broad as open science, urban commons, community networks, peer funding and open design.[9]
Three main characteristics are salient in the literature on CBPP.[10] Firstly, CBPP is marked by decentralisation, since authority resides in individual agents rather than a central organiser. Secondly, it is commons-based since CBPP communities make frequent use of common resources. These resources can be material, such as in the case of 3D printers shared in small-scale workshops known as Fab Labs; or immaterial, such as the wiki pages of Wikipedia or the source code in a FLOSS project. Thirdly, non-monetary motivations are prevalent in the community. These motivations are, however, commonly intertwined with extrinsic motivations resulting in a wide spectrum of forms of value operating in CBPP communities,[11] beyond monetary value.[12]
Guifi.net: An Example of a CBPP Community in Action
In order to extend the discussion of the affordances of decentralised technologies in CBPP, I will employ Guifi.net as an illustrative example. Guifi.net[13] is a community network: a participatory project whose goal is to create a free, open and neutral telecommunications network to provide access to the Internet. If you are reading this article online, you might be accessing it through a commercial Internet Service Provider. These are the companies which control the technical infrastructure you are using to connect to the Internet. They manage this infrastructure as a private good. The Guifi.net project, instead, manages this infrastructure as a commons. In other words, Guifi.net is organised around a CBPP model,[14] in which the network infrastructure is governed as a common good. Over the past 16 years, participants of Guifi.net have developed communitarian rules, legal licenses, technological tools and protocols which are constantly negotiated and implemented by the participants.
I have chosen to discuss the potentialities of blockchain drawing on Guifi.net, a community network, for two main reasons. Firstly, the most relevant type of commons governed in this case is shared infrastructure, such as fibre optic cables and routers. The governance of rival material goods, in contrast to the commons governance of non-rival goods such as source code or wiki pages, better matches the scope of the conversations which emerged during the symposium around the architecture of the commons and the role played by participatory platforms.[15] Secondly, Guifi.net provides a large and complex case of governance of shared infrastructure. The growth experienced by Guifi.net’s infrastructure and community since the first pair of nodes was connected in a rural region of Catalonia in 2004 is significant. In their study of the evolution of governance in Guifi.net, covering the period 2005–2015, Baig et al. reported a network infrastructure consisting of more than 28,500 operational nodes, with links covering a total length of around 50,000 km, connected to the global Internet.[16] The latest statistics reported by Guifi.net state that there are more than 35,000 operational nodes and 63,000 km of links.[17] Beyond the infrastructure, the degree of participation in the community is also significant: more than 13,000 registered participants up to 2015, according to the aforementioned study, and, as reported by the community at present, more than 50,000 users who connect to this community network on a day-to-day basis.[18] Thus, Guifi.net provides a suitable scenario for the analysis of the affordances of decentralised technologies for commons governance.
Ostrom’s Principles and Affordances of Decentralised Technologies for Commons Governance
How do communities of peers manage to successfully govern common resources? The study of the organisational aspects of how common goods might be governed traditionally focussed on natural resources. This commons dilemma was explored by Hardin in his influential article “The Tragedy of the Commons”, whose ideas became the dominant view. In this article, Hardin describes how resources shared by individuals acting as homo economicus (out of self-interest, in order to maximise their own benefit) end up depleted. The individuals’ interests enter into conflict with the group’s and, because they act independently according to their short-term interests, the result of the collective action depletes the commons.[19] As a consequence, in order to avoid this unsustainable logic – “if I do not use it, someone else will” – it was deemed necessary to manage these commons through either private ownership or centralised public administration.
Later on, the Nobel laureate researcher Elinor Ostrom questioned and revisited “The Tragedy of the Commons”. In her work, she showed how, under certain conditions, commons can indeed be managed in a sustainable way by local communities of peers. Her approach took into account that individual agents do not operate in isolation, nor are they driven solely by self-interest. Instead, she argued that communities communicate in order to build processes and rules, with different degrees of explicitness, that ensure their sustainability.[20] This hypothesis was supported by a meta-analysis of a wide range of case studies,[21] and has been confirmed in subsequent research.[22] As part of this work, she identified a set of principles for the successful management of these commons,[23] which have also subsequently been applied to the study of collaborative communities whose work is mediated by digital platforms, such as Wikipedia and FLOSS communities:[24]
1. Clearly defined community boundaries: in order to define who has rights and privileges within the community.
2. Congruence between rules and local conditions: the rules that govern behaviour or commons use in a community should be flexible and based on local conditions that may change over time. These rules should be intimately associated with the commons, rather than relying on a “one-size-fits-all” regulation.
3. Collective choice arrangements: in order to best accomplish congruence (with principle number 2), people who are affected by these rules should be able to participate in their modification, and the costs of alteration should be kept low.
4. Monitoring: some individuals within the community act as monitors of behaviour in accordance with the rules derived from collective choice arrangements, and they should be accountable to the rest of the community.
5. Graduated sanctions: community members actively monitor and sanction one another when behaviour is found to conflict with community rules. Sanctions against members who violate the rules are aligned with the perceived severity of the infraction.
6. Conflict resolution mechanisms: members of the community should have access to low-cost spaces to resolve conflicts.
7. Local enforcement of local rules: local jurisdiction to create and enforce rules should be recognised by higher authorities.
8. Multiple layers of nested enterprises: by forming multiple nested layers of organisation, communities can address issues that affect resource management differently at both broader and local levels.
What kind of affordances do decentralised technologies offer in the context of commons governance and, more concretely, with regard to Ostrom’s principles? Together with other colleagues,[25] we have identified six potential affordances to be further explored.
Firstly, tokenisation: the process of transforming the rights to perform an action on an asset into a transferable data element (a token) on the blockchain. For example, tokens can be employed to provide authorisation to access a certain shared resource. Tokens may also be used to represent equity, decision-making power, property ownership or labour certificates.[26]
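A minimal sketch of tokenisation, under the same toy conventions as the earlier fragments, might look as follows: the right to act on a shared resource becomes a transferable balance. The names and the one-token access rule are illustrative assumptions, not any existing token standard.

```python
class AccessToken:
    """Toy token ledger: holding a balance encodes the right to act on an asset."""

    def __init__(self, initial_balances):
        self.balances = dict(initial_balances)

    def transfer(self, sender, receiver, amount):
        # Rights are transferable data elements, not entries in a central registry.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

    def may_access(self, agent):
        # Example rule: holding at least one token grants access to the resource.
        return self.balances.get(agent, 0) >= 1

bandwidth = AccessToken({"node_operator": 5})
bandwidth.transfer("node_operator", "new_member", 1)
assert bandwidth.may_access("new_member")
```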
Secondly, the self-enforcement and formalisation of rules. These affordances refer to the process of embedding organisational rules in the form of smart contracts. As a result, communitarian rules – such as those which regulate monitoring and graduated sanctions, as reflected in Ostrom’s principles 4 and 5 – can become self-enforcing. This encoding of rules also implies a formalisation, since blockchain technologies require these rules to be defined in ways that are unambiguously understood by machines. In other words, the process of making rules explicit, which is inherent in the use of distributed technologies, also provides opportunities to make these rules more available and visible for discussion, as noted in Ostrom’s principle 2.
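To suggest what such formalisation involves, the following sketch encodes a hypothetical graduated-sanctions clause in the spirit of Ostrom’s principles 4 and 5. The escalation ladder is invented for illustration; a real community would have to negotiate its own thresholds and make them explicit, which is precisely the formalisation described above.

```python
# Hypothetical graduated-sanctions rule: each recorded infraction escalates
# the response, exactly as a community would have to spell out for a machine.
SANCTIONS = ["warning", "temporary suspension", "exclusion"]

class MonitoredCommons:
    def __init__(self, members):
        self.infractions = {member: 0 for member in members}

    def report_infraction(self, member):
        """Monitoring (principle 4): peers record rule violations."""
        self.infractions[member] += 1
        level = min(self.infractions[member], len(SANCTIONS)) - 1
        # Graduated sanctions (principle 5) are applied automatically.
        return SANCTIONS[level]

commons = MonitoredCommons(members=["op_a", "op_b"])
assert commons.report_infraction("op_a") == "warning"
assert commons.report_infraction("op_a") == "temporary suspension"
```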
Thirdly, autonomous automatisation: the process of defining complex sets of smart contracts which may be set up in such a way as to make it possible for multiple parties to interact with each other without human interaction. This is analogous to software communicating with other software today, but in a decentralised manner. DAOs are an example of autonomous automatisation as they could be self-sufficient to a certain extent. For instance, they could charge users for their services.[27]
Fourthly, decentralised technologies offer an affordance for the decentralisation of power over the infrastructure. In other words, they can facilitate processes of communalising the ownership and control of the technological artefacts employed by the community. They do this through the decentralisation of the infrastructure they rely on, such as collaboration platforms employed for coordination.
Fifthly, transparency: the opening up of organisational processes and their associated data, relying on the persistence and immutability properties of blockchain technologies.
Finally, decentralised technologies can facilitate the codification of a certain degree of trust into systems that enable agreements between agents without requiring a third party. Figure 1 below provides a summary of the relationships between Elinor Ostrom’s principles and the aforementioned affordances.[28]

These congruences allow us to describe the impact that blockchain technologies could have on governance processes in these communities. These decentralised technologies could facilitate coordination, help to scale up commons governance, or even be useful for sharing agreements and different forms of value amongst various communities in interoperable ways, as shown by Pazaitis et al.[29] An example of how such affordances might be explored in the context of CBPP can be found in community networks such as Guifi.net.
A DAO for Commons Governance of Shared Technical Infrastructure
Would it be possible to build a DAO that might help to coordinate collaboration and scale up cooperative practices, in line with Ostrom’s principles, in a community network such as Guifi.net? First of all, we need to identify the relationship between Ostrom’s principles and Guifi.net. We can find a wide exploration of this relationship in the work of Baig et al., who document in detail how Guifi.net governs the infrastructure as a commons drawing on these principles, and who provide a detailed analysis of the different components of the commons governance of the shared infrastructure in Guifi.net.[30] Secondly, we need to define an initial point of analysis, and tentative interventions, in the form of one of the components of this form of commons governance. Of all these components, I will place the focus of analysis on the economic compensation system. The reason for selecting this system is twofold. On the one hand, it reflects the complexity behind commons governance and thus allows us to illustrate the aforementioned principles in greater depth. On the other, it is an illustrative example of the potential of blockchain, as we shall see, to automatise and scale up various cooperative processes.
The economic compensation system of Guifi.net was designed as a mechanism to compensate for imbalances in the use of the shared infrastructure. Professional operators, for example, are required to declare their expenditures on, and investments in, the network. In alignment with Ostrom's principle 4, the use, expenditure and investments of operators are monitored, in this case by the most formal institution to have emerged in Guifi.net: the Guifi.net Foundation. The Foundation is a legal organisation whose goal is to protect the shared infrastructure and monitor compliance with the rules agreed by the members of the community. The community boundaries, as in Ostrom's principle 1, are clearly defined and include several stakeholders.[31] Different degrees of commitment to the commons were defined as collective-choice arrangements (principle 3). These rules are, however, open to discussion through periodic meetings organised regionally, and adapted to local conditions, in congruence with principle 2. If any participant, such as an operator, misuses the resources or does not fulfil the principles, the individual is subject to graduated sanctions,[32] in alignment with principle 5. As part of the compensation system, compensation meetups are organised locally to handle conflict resolution, in congruence with principle 6. Principles 6 and 7 are also clearly reflected in the evolution of the governance of Guifi.net, although they are more closely associated with scalability.[33]
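As a rough illustration of this compensation logic, the following Python sketch models operator declarations, validation by the Foundation, and the computation of imbalances. The field names and the balancing rule (deviation from the average validated contribution) are assumptions made for illustration; Guifi.net's actual accounting procedures are more elaborate.

```python
# Illustrative model of the compensation system: operators declare
# expenditures and investments, the Foundation validates them, and
# imbalances are computed. Field names and the balancing rule are
# assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class Declaration:
    operator: str
    expenditure: float       # declared spending on the shared infrastructure
    investment: float        # declared investment in new infrastructure
    validated: bool = False  # set once the Foundation has checked it

@dataclass
class CompensationSystem:
    declarations: list[Declaration] = field(default_factory=list)

    def declare(self, declaration: Declaration) -> None:
        self.declarations.append(declaration)

    def validate(self, operator: str) -> None:
        """Monitoring (principle 4), performed here by the Foundation."""
        for d in self.declarations:
            if d.operator == operator:
                d.validated = True

    def imbalance(self, operator: str) -> float:
        """Positive: the operator contributed more than average and is
        owed compensation; negative: it owes the pool."""
        valid = [d for d in self.declarations if d.validated]
        if not valid:
            return 0.0
        totals: dict[str, float] = {}
        for d in valid:
            totals[d.operator] = (totals.get(d.operator, 0.0)
                                  + d.expenditure + d.investment)
        average = sum(totals.values()) / len(totals)
        return totals.get(operator, 0.0) - average
```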
The compensation DAO could be formed by a set of local DAOs, whose rules are defined and modifiable exclusively by participants holding a token which demonstrates that they belong to the corresponding node. These local DAOs could be deployed from templates, and could be modified at any point as a result of discussion at the aforementioned periodic meetings held by local nodes, in congruence with local conditions. Among the rules of the smart contracts composing these DAOs, participants might define the factors considered when discussing local compensation-system arrangements, as well as graduated sanctions in case of misuse of the common goods. These rules might be copied and adapted by other nodes, facilitating the extension of the collaborative practices, as the sketch below illustrates.
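Under the same caveats as before, the sketch below illustrates what such a template-based local DAO might look like: a node's rules start from a shared template, only holders of the node's membership token may amend them, and a rule set can be copied and adapted by another node. The template fields and method names are hypothetical.

```python
# Hypothetical sketch of a local DAO deployed from a template, with
# token-gated rule amendment and rule reuse across nodes.

import copy

TEMPLATE_RULES = {
    "compensation_factors": ["expenditure", "investment"],
    "sanction_schedule": ["warning", "suspension", "exclusion"],
}

class LocalDAO:
    def __init__(self, node: str, members: set[str]):
        self.node = node
        self.token_holders = set(members)  # membership token = node affiliation
        self.rules = copy.deepcopy(TEMPLATE_RULES)  # start from the template

    def amend_rule(self, proposer: str, key: str, value) -> bool:
        """Collective-choice arrangement (principle 3): only token
        holders of this node may modify its rules."""
        if proposer not in self.token_holders:
            return False
        self.rules[key] = value
        return True

    def fork_for(self, other_node: str, members: set[str]) -> "LocalDAO":
        """Rules copied and adapted by another node, extending the
        collaborative practices."""
        clone = LocalDAO(other_node, members)
        clone.rules = copy.deepcopy(self.rules)
        return clone
```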
Some of the settings of these local DAOs could depend on a federal compensation DAO that defines general aspects. A mapping of the current logic could consist of reaching a certain degree of consensus among the participants of all the nodes, with the process approved by the members of the Foundation, who would hold a specific token. An example of a general aspect regulated by the federal DAO is each operator's level of commitment to the commons, which is currently evaluated and monitored manually by the Foundation. General aspects such as this could be automatised in several ways, moving from manual assignation by the Foundation, as is currently the case, to tokens assigned automatically according to the communitarian activities tracked on the platform, as sketched below. This is an example of a possible intervention to automatise certain collaborative practices while assuming the current structure. Figure 2 below provides an overview of a preliminary design of a DAO for a compensation system mapping the current logics.

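A minimal sketch of this automation, assuming a hypothetical activity log and arbitrary weights and thresholds, might map the communitarian activities tracked on the platform to commitment levels as follows.

```python
# Hypothetical rule replacing manual assignation by the Foundation:
# commitment levels derived automatically from tracked activity.
# Weights, thresholds and level names are illustrative assumptions.

ACTIVITY_WEIGHTS = {"maintenance": 3, "meeting": 1, "node_deployment": 5}
COMMITMENT_LEVELS = [(0, "observer"), (10, "contributor"), (25, "operator")]

def commitment_level(activity_log: list[str]) -> str:
    """Map an operator's tracked activities to a commitment level."""
    score = sum(ACTIVITY_WEIGHTS.get(activity, 0) for activity in activity_log)
    level = COMMITMENT_LEVELS[0][1]
    for threshold, name in COMMITMENT_LEVELS:
        if score >= threshold:
            level = name
    return level

# e.g. commitment_level(["maintenance", "maintenance",
#                        "node_deployment", "meeting"]) -> "contributor"
```

Encoding the criteria in this way would also make them explicit and open to contestation at the periodic meetings, in line with principle 2.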
More disruptive tentative interventions could implement more horizontal governance logics, allowing the rules to be modified at the federal level or transforming the rules that regulate monitoring. These interventions, however, should be carefully co-designed together with those who participate in the day-to-day activity of these collectives. Our approach holds that decentralised tools to support commons governance should be developed gradually, as a process of constructing situated technology, with an awareness of the cultural context and with the aim of incorporating particular social practices into the design of these tools.
This basic example of a DAO illustrates, on the one hand, the relationship with Ostrom's principles: monitoring mechanisms, local collective-choice arrangements, graduated sanctions and clear boundaries. These principles are sustained by the aforementioned affordances of blockchain for commons governance: tokenisation, to grant permissions over who may participate in choices locally and at the federal level, and how, as well as to certify levels of commitment to the commons; monitoring of expenditures and reimbursements through the transparency provided by the blockchain; and self-enforcement, formalisation and automatisation of communitarian rules in the form of smart contracts. A further, more general, example is the increase in the degree of decentralisation of power over the platform, owing to the inherently decentralised properties of the technology itself. This could result in a partial shift of power over the platform from the Foundation towards the different nodes formed by the participants. Furthermore, as discussed, the fact that such rules are encoded as configurations of smart contracts could facilitate the extension of practices and the development of new nodes, or even the deployment of alternative networks that operate like the original one, reusing and adapting the community's encoded rules while still using the shared infrastructure. Overall, further research on the role of decentralised technologies in commons governance offers a promising field of experimentation and exploration of the potential scalability of cooperative dynamics.
Discussion and Concluding Remarks
In this article I have provided an overview of the affordances of blockchain technologies for commons governance and discussed them through an example: a DAO to automatise some of the collaborative processes surrounding the compensation system of the community network Guifi.net. With this example I aimed to illustrate, in more detail, the affordances of blockchain for commons governance which I presented during the symposium, and to show how blockchain may facilitate the extension and scaling up of the cooperative practices of commons governance. Further explorations, closer to the architectural field, could investigate the discussed affordances with discrete design approaches that provide participatory frameworks for collective production.[34] In this respect, decentralised technologies offer opportunities to tackle challenges such as those identified by Sánchez,[35] defining ways to allocate ownership, authorship and the distribution of value without falling into extractivist practices.
A better understanding of the capabilities of blockchain technologies for commons governance will, however, require further empirical research. Among the research questions to be addressed are those concerning the boundaries of the discussed affordances. For example, with regard to tokenisation and the formalisation of rules: which aspects should remain on or off the blockchain, or indeed in or out of code altogether?
Overall, CBPP communities embody values and practices that differ radically from those of markets. In this respect, the study of the potentialities and limitations of blockchain technologies in the context of the governance of CBPP communities offers an inspiring opportunity to take further steps on a research journey that has only just begun.
[1] S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System” (2008).
[2] M. Swan, Blockchain: Blueprint for a New Economy (Sebastopol, CA, USA: O’Reilly, 2015).
[3] N. Szabo, “Formalizing and Securing Relationships on Public Networks,” First Monday, 2, 9 (1997).
[4] See, for example, https://digix.global: a cryptocurrency backed by bars of gold in which the governance is mediated by a DAO, last accessed on 24th July 2019.
[5] See http://www.okhaos.com/plantoids/, last accessed on 24th July 2019.
[6] See https://p2pmodels.eu, last accessed on 2nd July 2019.
[7] Y. Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (2006); M. Bauwens, “The Political Economy of Peer Production,” CTheory 1, 12 (2005).
[8] M. Fuster-Morell, J. L. Salcedo, and M. Berlinguer, “Debate About the Concept of Value in Commons-Based Peer Production,” Internet Science (2016); M. Bauwens and A. Pantazis, “The Ecosystem of Commons-Based Peer Production and Its Transformative Dynamics,” The Sociological Review, 66, 2 (2018), 302–19.
[9] V. Kostakis and M. Papachristou, “Commons-Based Peer Production and Digital Fabrication: The Case of a RepRap-Based, Lego-Built 3D Printing-Milling Machine” (2013); V. Niaros, V. Kostakis, and W. Drechsler, “Making (in) the Smart City: The Emergence of Makerspaces,” Telematics and Informatics (2017).
[10] A. Arvidsson, A. Caliandro, A. Cossu, M. Deka, A. Gandini, V. Luise, and G. Anselm, “Commons Based Peer Production in the Information Economy,” P2PValue (2016).
[11] C. Cheshire and J. Antin, “The Social Psychological Effects of Feedback on the Production of Internet Information Pools,” Journal of Computer-Mediated Communication, 13, 1 (2008).
[12] M. Fuster-Morell, J. L. Salcedo, and M. Berlinguer, “Debate About the Concept of Value in Commons-Based Peer Production,” Internet Science (2016).
[13] See https://guifi.net, last accessed on 30th June 2019.
[14] R. Baig, R. Roca, F. Freitag, and L. Navarro, “Guifi.net, a Crowdsourced Network Infrastructure Held in Common,” Computer Networks: The International Journal of Computer and Telecommunications Networking, 90 (2015).
[15] J. Sánchez, “Architecture for the Commons: Participatory Systems in the Age of Platforms,” Architectural Design, 89, 2 (2019).
[16] R. Baig, R. Roca, F. Freitag, and L. Navarro, “Guifi.net, a Crowdsourced Network Infrastructure Held in Common,” Computer Networks: The International Journal of Computer and Telecommunications Networking, 90 (2015).
[17] Guifi.net, “Node Statistics” (2019).
[18] Ibid.
[19] G. Hardin, “The Tragedy of the Commons. The Population Problem Has No Technical Solution; It Requires a Fundamental Extension in Morality,” Science 162, 3859 (1968), 1243–48.
[20] E. Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge University Press, 1990).
[21] Ibid.
[22] E. Ostrom, “Understanding Institutional Diversity” (2009); M. Cox, G. Arnold, and S. Villamayor Tomás, “A Review of Design Principles for Community-Based Natural Resource Management” (2010).
[23] E. Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge University Press, 1990), 88–102.
[24] F. B. Viégas, M. Wattenberg, and M. M. McKeon, “The Hidden Order of Wikipedia,” in Proceedings of the 2nd International Conference on Online Communities and Social Computing (OCSC ’07) (2007).
[25] D. Rozas, A. Tenorio-Fornés, S. Díaz-Molina, and S. Hassan, “When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance,” SSRN Electronic Journal (2018), 8–20.
[26] S. Huckle and M. White, “Socialism and the Blockchain.” Future Internet, 8, 4 (2016), 49.
[27] P. De Filippi and S. Hassan, “Blockchain Technology as a Regulatory Technology: From Code Is Law to Law Is Code,” First Monday, 21, 12 (2016).
[28] D. Rozas, A. Tenorio-Fornés, S. Díaz-Molina, and S. Hassan, “When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance,” SSRN Electronic Journal (2018), 21–22.
[29] A. Pazaitis, P. De Filippi, and V. Kostakis, “Blockchain and Value Systems in the Sharing Economy: The Illustrative Case of Backfeed,” Technological Forecasting and Social Change, 125 (2017), 105–15.
[30] R. Baig, R. Roca, F. Freitag, and L. Navarro, “Guifi.net, a Crowdsourced Network Infrastructure Held in Common,” Computer Networks: The International Journal of Computer and Telecommunications Networking, 90 (2015).
[31] Ibid.
[32] Ibid.
[33] See Baig et al. (2015) for further details.
[34] J. Sánchez, “Architecture for the Commons: Participatory Systems in the Age of Platforms,” Architectural Design, 89, 2 (2019).
[35] Ibid.