Issue 39
04/10/2022
ISSN 2634-8578
Crypto: towards a New Political Economy in Architecture 
03/08/2022
Blockchain, Crypto, Cryptography, Deconstruction, Odysseus, peer economies, Political Economy
Theodore Dounas

t.dounas@rgu.ac.uk

The paper presents a “primitives” approach to understanding the computational design enabled by blockchain technologies, as a new political economy for the architecture discipline. The paper’s motivation lies in exploring the challenges architects face in understanding blockchain, evidenced through the author’s multiple prototypes,[1,2,3,4] discussions, workshops and code writing with students and colleagues, but also in the fragmentation of the Architecture-Engineering-Construction (AEC) industry and the impermanence that computational design enhances in architecture.[5] These challenges, while situated within the confines of the discipline of computational design and architecture, are defined and shaped by those of the wider AEC industry and its extractive relationship with the physical environment.

Methodologically, the paper is a philosophical and semantic exploration of the meaning of architecture in a decentralised context, considering its uncoupled relationship with signs and design, and it sets a direction in which architectural practice needs to move: from an extractive to a non-extractive, circular nature.

Blockchain: peer economies, trust and immutability, transparency, incentives for participation, and entropy 

A blockchain is a distributed computer network, where each computer node holds a copy of a distributed ledger that holds values.[6] Computationally, a blockchain acts both as a state machine able to execute smart contracts,[7] i.e., software code that is the equivalent of an automatic vending machine, and as a continuous, immutable chain, built out of discrete blocks of information, each of which contains a cryptographic hash of the previous block. Each block contains a series of transactions or changes to the distributed ledger, which in the discipline of architectural design can be a series of design-synthesis actions, executed in a bottom-up fashion and encoded into a block. At regular time intervals, the blockchain network, through an incentivised participation system, selects the next block to be written to the ledger/chain. Due to their nature, public, permissionless blockchains act as a medium of trust (trust machines) between agents that are not necessarily known to one another or acting in concert; are resilient, in the sense that losing a large part of the network does not destroy the blockchain; are immutable, because one cannot go back and delete information, as by design each block’s cryptographic hash is embedded into the next one, creating an immutable chain; and operate through cryptoeconomic incentives, i.e., economic mechanisms that incentivise, not always monetarily, behaviour that maintains or improves the system itself. Economically, a blockchain is a decentralised trust-machine that enables the creation of peer-to-peer economies via smart contracts, tokens and their computer protocols.[8]
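
To make the chaining mechanism concrete, here is a minimal sketch in Python, illustrative only and not Bitcoin’s or Ethereum’s actual data structures: each block embeds the hash of its predecessor, so tampering with any past block breaks every hash that follows it. The “design actions” recorded here are hypothetical.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON serialisation.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    # A block records transactions (here, hypothetical design-synthesis
    # actions) plus the hash of the previous block, which does the chaining.
    return {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}

# Build a short chain of design actions.
chain = [make_block(["genesis"], prev_hash="0" * 64)]
for actions in (["place wall A"], ["rotate roof B", "assign material C"]):
    chain.append(make_block(actions, prev_hash=block_hash(chain[-1])))

# Immutability check: recomputing the hashes exposes any tampering upstream.
for prev, block in zip(chain, chain[1:]):
    assert block["prev_hash"] == block_hash(prev)
```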

The first blockchain, the one invented in the bitcoin whitepaper,[9] was designed as a replacement for centrally managed financial institutions. As such, blockchains, when public and permissionless, act as a medium of de-centralisation, i.e., a channel one can engage with without needing permission or approval beyond the limits and rules of the computer code that runs the blockchain.

Blockchains encompass cryptography and its semantic discipline, immutability and entropy of information, continuity but also discreteness of information, and trust. Due to their decentralised nature, there seems at first to be little room to understand blockchains as having an affinity with architecture, the act of designing and building. In the following similes, however, I develop the parallels between architecture and blockchain, employing ideas from western and eastern literature.

Applications that show promise within the blockchain space, and that are distinctive compared to other similar or competing automation technologies, are the creation of tokens, both fungible and non-fungible,[10, 11] the formation of Decentralised Autonomous Organisations (DAOs), i.e., organisations that operate through the blockchain medium, and applications of decentralised finance. All of these are built through smart contracts, along with additional layers for interfaces and connectors between the blockchain and its external environment. Since the blockchain is an immutable record, it becomes even more important to ensure that the data that passes through and gets recorded on the blockchain is of high quality and truthfulness. To ensure this, the concept of an oracle is introduced. Oracles are trustworthy entities, operating outside a blockchain, made trustworthy through both incentives and disincentives, with the responsibility of feeding data into blockchains. Parallel to blockchains there also exist distributed filesystems, used for storing files, rather than data, in a decentralised manner. One such filesystem is the Interplanetary File System (IPFS),[12] which operates via content addressing rather than location addressing: within IPFS we look for “what”, rather than “where” as we do within the world wide web. Content on IPFS is also signed with a cryptographic hash that makes the content unique and allows it to be found. For example, the following file from Blender has the IPFS hash:

Figure 1: Blender file on particle generation (IPFS hash: QmSCGBzHoeBYwSyHZeBVRNPc3f3T5LkLaEq75AnynFkf6f).
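
As a minimal sketch of the content-addressing idea in Python: a real IPFS identifier such as the hash above adds multihash encoding and file chunking on top of a cryptographic digest, so the bare SHA-256 below is only an approximation, and the file bytes are a placeholder.

```python
import hashlib

def content_address(data: bytes) -> str:
    # The identifier is derived from the content itself, so identical
    # bytes always resolve to the same address, wherever they are stored.
    return hashlib.sha256(data).hexdigest()

blender_file = b"...particle generation scene bytes..."  # placeholder content
print(content_address(blender_file))
# Requesting this digest from the network asks "what" (this exact content),
# not "where" (a server location, as a web URL does).
```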

Architecture as Cryptography 

Odysseus 

To explore the idea of blockchain as an infrastructure layer for architectural design, we will introduce Odysseus (Ulysses),[13] a much discussed hero and anti-hero of many turns or tricks (polytropos),[14] as his myth as a craftsman is solidified by architecture in the closing narration of The Odyssey. Inventiveness and the particular craft skills attributed to the character are compelling reasons to use him as a vehicle for creating parallels between blockchain and architectural design. 

Odysseus participated in the Trojan War, and was the key hero responsible for the Trojan Horse and the demise of Troy. His quest for “Nostos”, i.e. returning home, is documented in the second Homeric epic, the Odyssey, which describes the voyage of Odysseus to Ithaca after the war, where his ship and crew pass through a multitude of trials and challenges imposed by Poseidon, in a voyage that takes about ten years. His crew and ship are lost but he is saved, and manages to return to the island of Ithaca.[13,14] Upon his return, he must face a final challenge.

The olive tree bed 

During his absence of more than twenty years, his wife Penelope has been under pressure from the local aristocracy to re-marry, as Odysseus is considered lost at sea. Local aristocrats have converged on the palace and are in competition to marry Penelope. She has prudently deflected the pressure by saying that she will choose one of the aristocrats, the “Mnesteres”, after she finishes her textile weaving, which she delays by weaving during the day and unravelling the work at night. However, the day comes when Odysseus arrives unrecognised in Ithaca, and is warned upon arrival that not all is as one would expect. At the same time, the Mnesteres, or suitors, have forced Penelope to set a final challenge to select the best of them. The challenge is to string the large bow that Odysseus had carved and made tensile, and to shoot an arrow through the hanging hoops of a series of large battle axes. None but Odysseus himself had been able to string the bow since he first crafted and used it, making this a formidable technical challenge.

Odysseus enters the palace incognito, as a pig herder, and also makes a claim to the challenge, in concert with his son Telemachus. Penelope reacts at the prospect that a pig herder might win, but is consoled by Telemachus, who tells her to go to her rooms, where the poem finds her reminiscing about her husband. In the main hall of the palace, all the Mnesteres in turn fail to draw back and string the bow. Odysseus, however, tenses and strings the bow, passing the first challenge, then successfully uses it to shoot an arrow through the axes, providing the first sign that uncovers his identity. At the same time, he connects all the nodes of the battle axes in the line by shooting his arrow through their metal rings, thus creating a chain. This is the second challenge, after the stringing of the bow, that Odysseus must pass to prove he is the true king and husband of Penelope.

The third challenge remains: the elimination of all the suitors. A battle ensues in which the Mnesteres are killed by Telemachus and Odysseus, and thus the third challenge is completed.

The most architectonic metaphor of the poem takes place after the battle, at the moment Penelope needs to recognise her long-lost husband, in rhapsody “Ψ”, i.e. the penultimate book of the Odyssey. She calls for a servant to move Odysseus’s bed outside its chamber and to prepare it so that he can rest. Upon hearing this, Odysseus immediately reacts in fury, claiming that moving the bed is an impossibility. The only one who could make the bed movable would be either an amazing craftsperson or a god, as its base was made out of the root of an olive tree, with its branches then used for the bed. Essentially the piece of furniture is immovable and immutable; it cannot be changed without being destroyed, and it cannot be taken out of the chamber without having its nature irrevocably changed, i.e., by cutting the olive tree’s roots.

Odysseus knows this as he was the one who constructed it, shaping its root from the body of the olive tree and crafting the bed; he then describes how he built the whole chamber around the bed. This knowledge acts as a crypto-sign that verifies his identity. Odysseus himself calls the information a “token” – a “sêma” – a sign that it is indeed him, as only he would know this sêma. In a sense, knowledge of this is the personal cryptographic key to the public cryptographic riddle that Penelope poses to verify his identity.
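
Read this way, the sêma behaves like a hash commitment. The following sketch is purely an analogy, with hypothetical values: the public riddle is a digest anyone may see, while only the holder of the private knowledge can produce a matching preimage.

```python
import hashlib

def commit(secret: str) -> str:
    # The digest can be published without revealing the secret itself.
    return hashlib.sha256(secret.encode()).hexdigest()

# Hypothetical private knowledge: how the bed was built.
sema = "bed carved from the living olive tree, chamber built around it"
public_riddle = commit(sema)  # the challenge Penelope can safely pose

# Only someone who already holds the knowledge can match the commitment.
claimant_answer = "bed carved from the living olive tree, chamber built around it"
assert commit(claimant_answer) == public_riddle  # identity verified
```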

The story acts as an architectonic metaphor for blockchain, on three layers. First, the token, both the information and the bed itself, cannot be taken out of its container (room), as its structure is interlinked with the material of the olive tree trunk and the earth that houses it. Second, it is Odysseus who is the architect of the crypto-immutability of the bed and the architecture around it, created by the most basic architectonic gesture: re-shaping nature into a construction. Third, the intimacy between Penelope and Odysseus is encapsulated in the token of the bed, as knowledge of how the bed was made recreates trust between them, in the same kind of manner that blockchains become bearers of trust by encapsulating it cryptographically and encasing it in a third medium – one crafted, though, by a collective.

The implication is that architectonic signs are cryptographically encased in their matter, and changing the physical matter changes the sign. Odysseus has created the first architectonic non-fungible token in physical form, where its meaning, function and utility are interlinked through a cryptographic sêma, in the same fashion that a non-fungible token exists through the cryptographic signature on a smart contract corresponding to a particular data structure.

Deconstruction in Chinese 

Odysseus is not the only one to have created physical NFTs. The philosopher Byung-Chul Han describes in his book Shanzhai: Deconstruction in Chinese the relationship that exists in Asian cultures generally, and in Chinese culture specifically, between the master and the copy, where emulating or blatantly copying the original is not seen as theft; instead, the form of the original is continually transformed by being deconstructed.[15]

Byung-Chul Han presents a Chinese ink painting of a rock landscape, on which a series of Chinese scholars have signed with their jade seals and scribbled a poetic verse or two, as a parting gift to one of their friends leaving for another province. Within Chinese culture, the jade seal is the person, and the person is the jade seal. As such, the painting has accumulated all the signatures and selves of the scholars, and has become unique in the same sense that a non-fungible token is unique due to its cryptographic signature on a smart contract. The difference from the simple non-fungible tokens that one now finds by the thousand on the internet is that the Chinese painting scroll, according to Byung-Chul Han, is activated and becomes exclusive through the signature-seals and poems of the literati. It is a dynamic NFT, a unique object that is open to continuous addition, and to exclusive and recursive interpretation.

The act of creation of the token, the unique sign, is then the accumulation of all the signatures of the scholars, whereby the painting cannot be reverted to its original state; it is unique because it has been permanently changed. It is the same craft with which Odysseus takes the olive tree and makes it into a bed, and then builds a room around the bed: an immobile, immutable sign and its physical manifestation. The sêma of the intimacy between Odysseus and Penelope is inextricable from the physical object of the bed, and the vector of change for the Chinese ink painting cannot return to its previous condition.

This is where the similarities end, though. While the craft is the same, in the Chinese ink scroll the point of departure is not nature but another artwork. The non-fungible token of the Chinese art scroll remains open to more additions and recursive poetry, and new cryptographic signatures may be added to it, while the olive tree bed has a finality and a permanence. Odysseus changes nature to create his token, and the olive tree can never be the same: to create the bed and the foundations and the wall of the room, the tree has to be transformed into architecture. The Chinese literati change a drawing, an artefact already in existence, which in the end remains subject to further change. In the case of the olive tree, the hero is one, singular, and the sêma revolves around his relationship with the world. For the Chinese literati and the Chinese ink scroll, the sêma is immutable towards the past but open to re-signing as a manner of recursive interpretation. A significant shift in mentality and attitude is demanded to travel from crafting architecture like Odysseus, a lone genius who is king of his domain, to crafting architecture like a collective of Chinese literati, where a well-balanced collaboration is required from all. Both can be served by blockchain as a record of actions taken; however, it is the collective, dynamic work open to continuing evolution that offers the best future fit between blockchain and the discipline of architecture.

“Zhen ji, an original, is determined not by the act of creation, but by an unending process.” – Byung-Chul Han

The extractive nature of Architecture: Odysseus

The current dominant political economy of architecture is based on the Odysseus paradigm. The metabolism of the discipline is based on abundant natural resources and their transformation, and this parallels the irrational form of capitalist development.[16, 17] Essentially, the criticism shaped against the extractive nature of the discipline focuses on the ideological trap of continuously creating new designs and plans and sêmas, as Tafuri would have them, reliving the myth of Odysseus as a craftsperson, where every design is a prototype and every building is brand new, and where the natural environment is immutably transformed as the arrow of time moves forward. The repercussions of this stance are well documented in IPCC reports in terms of the carbon impact and waste production of the AEC industry.[18] 

In contrast, the “Space Caviar” collective posits that we should shift to a non-extractive architecture. They examine this shift via interviews with Benjamin Bratton, Chiara di Leone, and Phineas Harper and Maria Smith. The focus within is a critical stance on the question of growth versus de-growth in the economy of architecture, a question that needs more resolution to be framed in positive terms. Chiara di Leone correctly identifies design and economics as quasi-scientific disciplines and, as such, dismantles the mantra of de-growth as a homogenous bitter pill that we must all swallow. Instead, she proposes a spatial, geo-coupled economy, one that can take into account the local, decentralised aspects of each place and design an economy that is fit for that place. I would posit that, as part of a geo-coupled economy, an understanding of nature as a vector of a circular economy is also needed.

Decentralisation is, of course, a core principle within the blockchain sociotechnical understanding, in the sense that participation in a blockchain is not regulated by institutions or gatekeepers. However, before declaring blockchain the absolute means to decentralisation, one needs to look at what is meant by decentralisation in economics and development, and how it differs from decentralisation in blockchain, as the two meanings need alignment.

Decentralisation and autonomy of local economies in the 70s 

Decentralisation, as a term applied to the economy, used to have a different meaning in the 70s. Papandreou, in his seminal book Paternalistic Capitalism, defines the decentralised economic process as a container for the parametric role of prices in the information system of a market economy.[19] In the same book, while interrogating the scientific dimensions of planning, Papandreou calls for the decentralisation of power in a regional, spatial function rather than a functional one, after having set logical (in distinction to historical) rules for popular sovereignty and personal freedom. This is to counter the technocratic power establishment that emerges in representative democracy, as citizens provide legitimacy to the actions of the state. To further define decentralisation of power, he turns to regional planning and to the PhD thesis of the visionary Greek spatial planner Tritsis: “The third aim: decentralisation. This points to a world depending for its existence less on wheels and population uprootings and more on the harmonious relationship between man and his environment, social and natural”.[20]

Based on this definition, Papandreou then builds the vision for a kind of governance consensus between decentralised regional units forming a “national” whole, with rules agreed and set between all units on a peer-to-peer basis. Within this, most importantly, he calls for the liberal establishment of a guarantee of freedom of entry into occupations, in a kind of “integration of all forms of human work, of mental with manual, of indoors with outdoors”, as envisioned by Tritsis.[20] Papandreou extends the vision of decentralisation to a global society and envisions the emergence of new poles of global power through regional decentralisation. As such, decentralisation used to mean something other than what it means within the context of blockchain, up until the first politics of “cypherpunk”: it used to be a planning instrument and a political stance, rather than a technological strategy against the centralised power of established technocracies. Still, within the local, spatial geo-coupling of economies, one can align political decentralisation with the cypherpunk version of blockchain decentralisation, i.e. no barriers to participation, trust in the computer protocol, and the exclusion of the authority of central political institutions, from which no one needs to ask permission.

A new political economy for Architecture 

When one chains the spatial, geo-coupled economy that Chiara di Leone proposes to decentralisation, both on the level of the politics of technocracies and on the level of the operating system, i.e., the use of blockchains, it is possible to shape a new political economy in architecture, with computation regulating its heart. Encased within this shift is also a shift from the Odysseus craftsperson to the Chinese collective in terms of the “prototype” and our understanding of it. An economy where the artefact is open to recursive reinterpretation and is never finished can readily be transformed into a circular economy and adapted to minimise carbon. We have already prototyped early instances of collective digital factories for buildings,[21] where collectives of architects and digital design agents are incentivised through smart contracts to minimise the embodied and operational carbon impact of buildings: simply put, the design team earns in proportion to the increase in building performance and the decrease in environmental impact.
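
A hedged sketch of that incentive logic, not the actual smart contracts described in reference [21], might look like the following, with every weighting and figure invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class DesignIteration:
    performance: float  # normalised building-performance score (0..1)
    carbon_kg: float    # embodied + operational carbon estimate

def payout(base_fee: float, prev: DesignIteration, new: DesignIteration) -> float:
    # Hypothetical rule: the fee grows with relative performance gains
    # and relative carbon reductions, and never drops below the base fee.
    perf_gain = max(0.0, (new.performance - prev.performance) / prev.performance)
    carbon_cut = max(0.0, (prev.carbon_kg - new.carbon_kg) / prev.carbon_kg)
    return base_fee * (1.0 + perf_gain + carbon_cut)

baseline = DesignIteration(performance=0.70, carbon_kg=1_200_000)
proposal = DesignIteration(performance=0.77, carbon_kg=1_050_000)
print(payout(100_000, baseline, proposal))  # larger improvements earn more
```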

To be able to create this regenerative renaissance for the discipline we need to make a series of changes to the manner in which the discipline is practised and taught. First, to integrate the function of the architect not only as designer but as orchestrator of the whole AEC industry. This requires that we abandon the notion of artistry and embrace the notion of craft and engineering, including an understanding of materials and the economy. Second, to develop the infrastructure, products and services that can make that happen, where we also assume the responsibility and, why not, the liability for that integration. These first two actions will reverse the trend of abandoning the space of architecture to consultants, where the erosion of our integrity has led to the glorification of form as our sole function. Third, to shift our attention from star practices to collectives, embracing practices where wider stakeholders are considered. Odysseus needs to morph into a collective, where the artefact of architecture is conceived as ever-changing and ever-evolving, within circular thinking and economies. This might mean that alternative forms of practice emerge, where younger, more inclusive minds have more command of, and say in, the purpose of an architecture company (and not a firm). Fourth, in the same pivot, we as architects should reclaim the space lost and rigorously embrace the new tools of the craft in the digital realm. It is not by chance that the title for senior programmers and digital network professionals is that of “architect”, as there is no other word that can specifically describe the people who orchestrate form, function and structure with one gesture. The age of machine-learning generative systems performing the trivial repetitions of an architect is already here.

Still, the automation we should embrace as a fifth point, since it allows the shaping and design of circular and peer-to-peer economies, is that of blockchain. This is the true jiujitsu defence against the capitalist growth-at-all-costs mantra.[22] Unless we embrace different, local, circular economies, we will not be able to effect the change we need in the discipline. This also means that we need not be naive and simplistic about carbon impacts, for example by declaring that timber is always better than concrete. To embrace the automation of cryptoeconomics, though, we first need to abandon the romantic idea of the architect as sketch artist and embrace the idea of the architect as collaborative economist. Only then will we be able to define for ourselves the conditions for a regenerative architecture, in a decentralised, spatial-human-geo-coupled manner.

References 

[1] T. Dounas, W. Jabi, D. Lombardi, “Non-Fungible Building Components – Using Smart Contracts for a Circular Economy in the Built Environment”, Designing Possibilities, SIGraDi, ubiquitous conference, XXV International conference of the Ibero-American society of digital Graphics (2021). 

[2] T. Dounas, W. Jabi, D. Lombardi, “Topology Generated Non-Fungible Tokens – Blockchain as infrastructure for a circular economy in architectural design”, Projections, 26th international conference of the association for Computer-Aided Architectural Design research in Asia, CAADRIA, Hong Kong, (2021).

[3] D. Lombardi, T. Dounas, L.H. Cheung, W. Jabi, “Blockchain for Validating the Design Process”, SIGraDI (2020), Medellin.

[4] T. Dounas, D. Lombardi, W. Jabi, “Framework for Decentralised Architectural Design: BIM and Blockchain Integration”, International Journal of Architectural Computing, special issue eCAADe+SiGraDi “Architecture in the 4th Industrial Revolution” (2020), https://doi.org/10.1177/1478077120963376.

[5] T. Maver, “CAAD’s Seven Deadly Sins”, Sixth International Conference on Computer-Aided Architectural Design Futures [ISBN 9971-62-423-0] Singapore, 24-26 September 1995, pp. 21-22.

[6] Ethereum.Org, “Ethereum Whitepaper”, accessed 27 January 2022, https://ethereum.org. 

[7] N. Szabo, “Formalizing and Securing Relationships on Public Networks” (1997), accessed 27 January 2022.

[8] G. Wood, “Ethereum: A Secure Decentralised Generalised Transaction Ledger” (2022), https://ethereum.github.io/yellowpaper/paper.pdf.

[9] S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System” (2008), originally at http://www.bitcoin.org/bitcoin.pdf.

[10] F. Vogelsteller, V. Buterin, EIP-20 Token Standard, https://eips.ethereum.org/EIPS/eip-20 

[11] W. Entriken, D. Shirley, J. Evans, N. Sachs, EIP-721 Token Standard, https://eips.ethereum.org/EIPS/eip-721

[12] Interplanetary File System documentation, https://docs.ipfs.io/

[13] Homer, E. Wilson trans., Odyssey (New York: W. W. Norton & Company, 2018) 

[14] Homer, Odyssey, Greek trans. Z. Sideris (Οργανισμός Εκδόσεως Διδακτικών Βιβλίων, Athens, 1984).

[15] Byung-Chul Han, Shanzhai: Deconstruction in Chinese, trans. P. Hurd (Boston, MA: MIT Press, 2017).

[16] Space Caviar collective, Non-Extractive Architecture, on designing without depletion (Venice: Sternberg Press, 2021).

[17] P.V. Aureli, “Intellectual Work and Capitalist Development: Origins and Context of Manfredo Tafuri’s Critique of Architectural Ideology”, The City as a Project (March 2011), http://thecityasaproject.org/2011/03/pier-vittorio-aureli-manfredo-tafuri/.

[18]  P.R. Shukla, J. Skea, R. Slade, A. Al Khourdajie, R. van Diemen, D. McCollum, M. Pathak, S. Some, P. Vyas, R. Fradera, M. Belkacemi, A. Hasija, G. Lisboa, S. Luz, J. Malley (eds.), IPCC, 2022: Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge, UK and New York, USA: Cambridge University Press, 2022) doi: 10.1017/9781009157926.

[19] A.G. Papandreou, Paternalistic Capitalism (Minneapolis: University of Minnesota Press, 1972).

[20] A. Tritsis, “The nature of planning regions” unpublished PhD thesis (Illinois Institute of Technology, Chicago, 1969).

[21] T. Dounas, D. Lombardi, W. Jabi, “Collective Digital Factories for Buildings”, in T. Dounas, D. Lombardi (eds.), Blockchain for Construction (Singapore: Springer, 2022), ISBN 9811937583.

[22] B. Tschumi, “Architects act as mediators between authoritarian power, or capitalist power, and some sort of humanistic aspiration. The economic and political powers that make our cities and our architecture are enormous. We cannot block them, but we can use another tactic, which I call the tactic of Judo, that is, to use the forces of one’s opponent in order to defeat it and transform it into something else … To what extent can we move away from a descriptive critical mode to a progressive, transformative mode for architecture?” Peter Eisenman and Cynthia Davidson, eds, anyplace symposium, ANY corporation, Montreal (1994).

Wild Disequilibria 
Climate solutions, Climatic Energy, cognitive tools, Ecological Autonomy, landscape futures
Marantha Dawkins, Bradley Cantrell

mmd5mk@virginia.edu

Climatic Energy and Ecological Autonomy 

There is no way back to the climate that we once knew: “our old world, the one that we have inhabited for the last 12,000 years, has ended”.[1] Accepting this end presents an opportunity to reframe considerations of risk, indeterminacy, and danger as questions of restructuring and rewilding, shifting the discussion of global warming from a matter of scarcity of resources to one of an abundance of energy that can kick-start landscape futures.

To engage this future, it is critical to set up some terms for how design will engage with the multitude of potential climates before us. Rather than working preventatively by designing solutions that are predicated on the simplification of the environment by models, we advocate for an experimentalism that is concerned with the proliferation of complexity and autonomy in the context of radical change. Earth systems are moving hundreds to thousands of times faster than they did when humans first documented them. This acceleration is distributed across such vast space and time scales that the consequences are ubiquitous but also unthinkable, which sets present-day Earth out of reach of existing cognitive tools. For example, twenty- to fifty-year decarbonisation plans are expected to solve problems that will unfold over million-year timescales.[2] These efforts are well-intentioned but poorly framed; in the relentless pursuit of a future that looks the same as the past, there is a failure to acknowledge that it is easier to destroy a system than it is to create one, a failure to acknowledge the fool’s errand of stasis that is embodied in preservation, and most importantly, a failure to recognise that climate change is not a problem to be solved.[3] Climate “solutions” are left conceptually bankrupt when they flatten complex contexts into one-dimensional problem sets that are doomed by unknowable variability. From succession to extinction, from ocean biochemistry to ice migration, our understanding of environmental norms has expired.[4]

The expiration of our environmental understanding is underlined by the state of climate adaptation today – filled with moving targets, brittle infrastructures, increasing rates of failure, and overly complicated management regimes. These symptoms illustrate the trouble contemporary adaptation has escaping the cognitive dissonance of the manner in which knowledge about climate change is produced: the information has eclipsed its own ideological boundaries. This eclipse represents a crisis of knowledge, and therefore must give rise to a new climatic form. Changing how we think and how we see climatic energy asks us to make contact with the underlying texture and character of this nascent unruliness we find ourselves in, and the wilds that it can produce. 

Earth’s new wilds will look very different from the wilderness of the past. Classical wilderness is characterised by purity: it is unsettled, uncultivated, and untouched. But given the massive reshaping of ecological patterns and processes across the Earth, wilderness has become less useful, conceptually. Even in protected wilderness areas, “it has become a challenge to sustain ecological patterns and processes without increasingly frequent and intensive management interventions, including control of invading species, management of endangered populations, and pollution remediation”.[5] Subsequently, recent work has begun to focus less on the pursuit of historical nature and more on promoting ecological autonomy.[6, 7, 8] Wildness, on the other hand, is undomesticated rather than untouched. The difference between undomesticated and untouched means that design priorities change from maintaining a precious and pure environment to creating plural conditions of autonomy and distributed control that promote both human and non-human form. 

Working with wildness requires new ways of imagining and engaging futurity that operate beyond concepts of classical earth systems and the conventional modelling procedures that re-enact them, though conventional climate thinking, especially with the aid of computation, has achieved so much: “everything we know about the world’s climate – past, present, future – we know through models”.[9] Models take weather, which is experiential and ephemeral, abstract it into data over long periods of time, and assemble this data into patterns. Over time, these patterns have become increasingly dimensional. This way of understanding climate has advanced extremely quickly over the past few decades, enough that we can get incredibly high-resolution pictures (like the one below, which illustrates how water temperature swirls around the earth). Climate models use grids to organise their high-resolution, layered data and assign it rules about how to pass information to neighbouring cells. But the infinite storage capacity of the grid cells and the ways they are set up to handle rules and parameters create a vicious cycle, by enabling exponential growth toward greater and greater degrees of accuracy. Models get bigger and bigger, heavier and heavier, with more and more data; operating under the assumption that collecting enough information will eventually lead to the establishment of a perfect “control” earth,[10] and to an earth that is under perfect control. But this clearly isn’t the case, as for these models, more data means more uncertainty about the future. This is the central issue with the traditional, bottom-up climate knowledge that continues to pursue precision. It produces ever more perfect descriptions of the past while casting the future as more and more obscene and unthinkable. In other words, in a nonlinear world, looking through the lens of these bottom-up models refracts the future into an aberration.[11] 

Figure 1 – Global ocean temperatures modeled at Los Alamos National Labs illustrate how heat travels in swirling eddies across the globe. Image source: Los Alamos National Laboratories.
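
As a toy illustration of that gridded logic, assuming nothing about any particular climate model’s internals, the sketch below stores one field per cell and passes information between neighbouring cells each timestep; real models couple many such fields with far richer physics.

```python
import numpy as np

grid = np.zeros((64, 64))   # one value per cell, e.g. a temperature anomaly
grid[28:36, 28:36] = 30.0   # a warm patch, in arbitrary units

alpha = 0.1                 # exchange rate between neighbouring cells
for _ in range(100):
    neighbours = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
                  np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
    grid += alpha * (neighbours - 4 * grid)  # discrete diffusion update
    # np.roll wraps the edges, i.e. periodic boundaries, apt for a planet.

print(grid.mean(), grid.max())  # the patch spreads; the mean is conserved
```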

The technological structure of models binds us to a bizarre present. It is a state which forecloses the future in the same way that Narcissus found himself bound to his own reflection. When he saw his reflection in a river, he “[mistook] a mere shadow for a real body” and found himself transfixed by a “fleeting image”.[12] The climatic transfixion is the hypnotism of the immediate, the hypothetically knowable, which devalues real life in favour of an imaginary, gridded one. We are always just a few simulations from perfect understanding and an ideal solution. But this perfection is a form of deskilling which simulates not only ideas but thinking itself. The illusion of the ideal hypothetical solution, just out of reach, allows the technical image to operate not only as subject but as project;[13] a project of accuracy. And the project of making decisions about accuracy in models then displaces the imperative of making decisions about the environments that the models aim to describe by suspending us in the inertia of a present that is accumulating more data than it can handle. 

It is important to take note of this accumulation because too much information starts to take on its own life. It becomes a burden beyond knowledge,[14] which makes evident that “without forgetting it is quite impossible to live at all”.[15] But rather than forget accumulated data and work with the materiality of the present, we produce metanarratives via statistics. These metanarratives are a false consciousness. Issues with resolution, boundary conditions, parameterisation, and the representation of physical processes represent technical barriers to accuracy, but the deeper problem facing accuracy is the inadequacy of old data to predict new dynamics. For example, the means and extremes of evapotranspiration, precipitation and river discharge have undergone such extreme variation due to anthropogenic climate change that fundamental concepts about the behaviour of earth systems for fields like water resource management are undergoing radical transformation.[16] Changes like this illustrate how dependence upon the windows of variability that statistics produce is no longer viable. This directly conflicts with the central conceit of models: that the metanarrative can be explanatory and predictive. In his recently published book, Justin Joque challenges the completeness of the explanatory qualities of statistics by underlining the conflicts between its mathematical and metaphysical assumptions.[17] He describes how statistics (and its accelerated form, machine learning) are better at describing imaginary worlds than understanding the real one. Statistical knowledge produces a way of living on top of reality rather than in it.

Figure 2 – An illustration of how a climate model breaks the Earth surface and atmosphere into rectangular chunks within which data is stored, manipulated, and passed on to neighboring cells. Image source: ERA-Interim Archive.

The shells of modelled environments miss the materiality, the complexity and the energy of an ecosystem breaking apart and restructuring itself. The phase of a system that follows a large shift is known as a “back loop” in resilience ecology,[18, 19] and is an original and unstable period of invention that is highly contingent upon the materials left strewn about in the ruins of old norms. For ecological systems in transition, plant form, geological structure, biochemistry and raw materiality matter. These are landscape-scale issues that are not described in the abstractions of parts per million. High-level knowledge of climate change, while potentially relevant for some scales of decision-making, does not capture the differentiated impacts of its effects that are critical for structuring discussions around the specific ways that environments will grow and change, degrade or complexify through time. 

This is where wilds can play a role in structuring design experimentation. Wildness is unquestionably of reality, or a product of the physical world inhabited by corporeal form. Wilds as in situ experiments become model forms, which have a long epistemological history as a tool for complex and contingent knowledge. Physicists (and, here, conventional climate modellers) look to universal laws to codify, explain and predict events, but because medical and biological scientists, for example, do not have the luxury of stable universalism, they often use experiments as loose vehicles for projection. By “repeatedly returning to, manipulating, observing, interpreting, and reinterpreting certain subjects—such as flies, mice, worms, or microbes—or, as they are known in biology, ‘model systems’”, experimenters can acquire a reliable body of knowledge grounded in existing space and time.[20] This is how we position the project of wildness, which can be found from wastewater swamps, to robotically maintained coral reefs, to reclaimed mines and up-tempo forests. Experimental wilds, rather than precisely calculated infrastructures, have the potential to do more than fail at adapting to climate: they can serve “not only as points of reference and illustrations of general principles or values but also as sites of continued investigation and reinterpretation”.[21] 

There is a tension between a humility of human smallness and a lunacy in which we imagine ourselves engineering dramatic and effective climate fixes using politics and abstract principles. In both of these cases, climate is framed as being about control: control of narrative, control of environment. This control imaginary produces its own terms of engagement. Because its connections to causality, accuracy, utility, certainty and reality are empty promises, modelling loses its role as a scientific project and instead becomes a historical, political and aesthetic one. When the model is assumed to take on the role of explaining how climate works, climate itself becomes effectively useless. So rather than thickening the layer of virtualisation, a focus on wild experiments represents a turn to land and to embodied changes occurring in real time. To do this will require an embrace of aspects of the environment that have been marginalised, such as expanded autonomy, distributed intelligence, a confrontation of failure, and pluralities of control. This is not a back-to-the-earth strategy, but a focus on engagement, interaction and modification; a purposeful approach to curating climatic conditions that embraces the complexity of entanglements that form the ether of existence. 

References

[1] M. Davis, “Living on the Ice Shelf”, Guernica.org https://www.guernicamag.com/living_on_the_ice_shelf_humani/, (accessed May 01, 2022). 

[2] V. Masson-Delmotte, P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.), IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, Cambridge, UK and New York, USA, 2021) doi:10.1017/9781009157896.

[3] R. Holmes, “The problem with solutions”, Places Journal (2020).

[4] V. Masson-Delmotte, P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.), IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, Cambridge, UK and New York, USA, 2021) doi:10.1017/9781009157896.

[5] B. Cantrell, L.J. Martin, and E.C. Ellis, “Designing autonomy: Opportunities for new wildness in the Anthropocene”, Trends in Ecology & Evolution 32.3 (2017), 156-166. 

[6] Ibid. 

[7] R.T. Corlett, “Restoration, reintroduction, and rewilding in a changing world”, Trends in Ecology & Evolution 31 (2016), 453–462 

[8] J. Svenning, et al., “Science for a wilder Anthropocene: Synthesis and future directions for trophic rewilding research”, Proceedings of the National Academy of Sciences 113 (2015), 898–906.

[9] P. N. Edwards, A vast machine: Computer models, climate data, and the politics of global warming (MIT Press, Cambridge, 2010). 

[10] P. N. Edwards, “Control earth”, Places Journal (2016). 

[11] J. Baudrillard, Cool Memories V: 2000-2004, (Polity, Oxford, 2006). 

[12] Ovid, Metamorphoses III, (Indiana University Press, Bloomington, 1955), 85 

[13] B. Han, Psychopolitics: Neoliberalism and new technologies of power, (Verso Books, New York, 2017). 

[14] B. Frohmann, Deflating Information, (University of Toronto Press, Toronto, 2016). 

[15] F. Nietzsche, On the Advantage and Disadvantage of History for Life, (1874). 

[16] P. C. D. Milly, et al. “Stationarity is dead: whither water management?”, Science 319.5863 (2008), 573-574. 

[17] J. Joque, Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism, (Verso Books, New York, 2022). 

[18] Gunderson and Holling (2001); C.S. Holling, “From complex regions to complex worlds”, Ecology and Society 9, 1 (2004), 11.

[19] S. Wakefield, Anthropocene Back Loop (Open Humanities Press, 2020). 

[20] A. N. H. Creager, et al., eds. Science without laws: model systems, cases, exemplary narratives (Duke University Press, Durham, 2007). 

[21] Ibid.

Games and Worldmaking 
consensus reality, games, mediascape, videogames, Virtual, worldmaking
Damjan Jovanovic

damjan@dmjn.net
Fig. 1 – Planet Garden v.1 screenshot, early game state

Worldmaking  

We live in a period of unprecedented proliferation of constructed, internally coherent virtual worlds, which emerge everywhere, from politics to video games. Our mediascape is brimming with rich, immersive worlds ready to be enjoyed and experienced, or decoded and exploited. One effect of this phenomenon is that we are now asking fundamental questions, such as what “consensus reality” is and how to engage with it. Another effect is that there is a need for a special kind of expertise that can deal with designing and organising these worlds – and that is where architects possibly have a unique advantage. Architectural thinking, as a special case of visual, analogy-based synthetic reasoning, is well positioned to become a crucial expertise, able to operate on multiple scales and in multiple contexts in order to map, analyse and organise a virtual world, while at the same time being able to introduce new systems, rules and forms to it.[1] 

A special case of this approach is something we can name architectural worldmaking,[2] which refers broadly to practices of architectural design which wilfully and consciously produce virtual worlds, and understand worlds as the main project of architecture. Architects have a unique perspective and could have a say in how virtual worlds are constructed and inhabited, but there is a caveat which revolves around questions of agency, engagement and control. Worldmaking is an approach to learning both from technically advanced visual and cultural formats such as video games, and from scientific ways of imaging and sensing, in order to construct new, legitimate and serious ways of seeing and modelling.

These notions are central to the research seminar called “Games and Worldmaking”, first conducted by the author at SCI-Arc in summer of 2021, which focused on the intersection of games and architectural design, and foregrounded systems thinking as an approach to design. The seminar is part of the ongoing Views of Planet City project, in development at SCI-Arc for the Pacific Standard Time exhibition, which will be organised by the Getty Institute in 2024. In the seminar, we developed the first version of Planet Garden, a planetary simulation game, envisioned to be both an interactive model of complex environmental conditions and a new narrative structure for architectural worldmaking.  

Planet Garden is loosely based on Edward O. Wilson’s “Half-Earth” idea, a scenario in which the entire human population of the world occupies a single massive city and the rest of the planet is left to plants and animals. Half-Earth is an important and very interesting thought experiment, almost a proto-design, a prompt: an idea for a massive, planetary agglomeration of urban matter which could liberate the rest of the planet to heal and rewild.

The question of the game was: how could we actually model something like that? How do we capture all that complexity and nuance, how do we figure out stakes and variables, and how do we come up with consequences and conclusions? The game we are designing is a means to model and host hugely complex urban systems which unravel over time, while legibly presenting an enormous amount of information, both visually and through the narrative. As a format, a simulation presents different ways of imaging the world and making sense of reality through models.

The work on game design started as a wide exploration of games and precedents within architectural design and imaging operations, as well as abstract systems that could comprise a possible planetary model. The question of models and the modelling of systems comes to the forefront and becomes contrasted with existing architectural strategies of representation.

Mythologising, Representing and Modelling

Among the main influences of this project were the drawings made by Alexander von Humboldt, whose work is still crucial for anyone with an interest in representing and modelling phenomena at the intersection of art and science.[3] If, in the classical sense, art makes the world sensible while science makes it intelligible, these images are a great example of combining these forms of knowledge. Scientific illustrations, Humboldt once wrote, should “speak to the senses without fatiguing the mind”.[4] His famous illustration of Chimborazo volcano in Ecuador shows plant species living at different elevations, and this approach is one of the very early examples of data visualisation, with an intent of making the world sensible and intelligible at the same time. These illustrations also had a strong pedagogical intent, a quality we wanted to preserve, and which can serve almost as a test of legibility.

Figure 2 – Alexander von Humboldt, Chimborazo volcano.

The project started with a question of imaging a world of nature in the Anthropocene epoch. One of the reasons it is difficult to really comprehend a complex system such as the climate crisis is that it is difficult to model it, which also means to visually represent it in a legible way which humans can understand. This crisis of representation is a well-known problem in literature on the Anthropocene, most clearly articulated in the book Against the Anthropocene, by T.J. Demos.[5] 

We do not yet have the tools and formats of visualising that can fully and legibly describe such a complex thing, and this is, in a way, also a failure of architectural imagination. The standard architectural toolkit is limited and also very dated – it is designed to describe and model objects, not “hyperobjects”. One of the project’s main interests was inventing new modalities of description and modelling of complex systems through the interactive software format, and this is one of the ideas behind the Planet Garden project.  

Contemporary representational strategies for the Anthropocene broadly fall into two categories, those of mythologising or objectivising. The first approach can be observed in the work of photographers such as Edward Burtynsky and Louis Helbig, where the subject matter of environmental disaster becomes almost a new form of the aesthetic sublime. The second strategy comes out of the deployment and artistic use of contemporary geospatial imaging tools. As is well understood by critics, contemporary geospatial data visualisation tools like Google Earth are embedded in a specific political and economic framework, comprising a visual system delivered and constituted by the post–Cold War and largely Western-based military-state-corporate apparatus. These tools offer an innocent-seeming picture that is in fact a “techno-scientific, militarised, ‘objective’ image”.[6] Such an image displaces its subject and frames it within a problematic context of neutrality and distancing. Within both frameworks, the expanded spatial and temporal scales of geology and the environment exceed human and machine comprehension and thus present major challenges to representational systems.  

Within this condition, the question of imaging – understood here as making sensible and intelligible the world of the Anthropocene through visual models – remains, and it is not a simple one. Within the current (broadly speaking) architectural production, this topic is mostly treated through the “design fiction” approach. For example, in the work of Design Earth, the immensity of the problem is reframed through a story-driven, narrative approach which centres on the metaphor, and where images function as story illustrations, like in a children’s book.[7] Another approach is pursued by Liam Young, in the Planet City project,[8] which focuses on video and animation as the main format. In this work, the imaging strategies of commercial science fiction films take the main stage and serve as anchors for the speculation, which serves a double function of designing a new world and educating a new audience. In both cases, it seems, the focus goes beyond design, as these constructed fictions stem from a wilful, speculative exaggeration of existing planetary conditions, to produce a heightened state which could trigger a new awareness. In this sense, these projects serve a very important educational purpose, as they frame the problem through the use of the established and accepted visual languages of storybooks and films.  

The key to understanding how design fictions operate is precisely in their medium of production: all of these projects are made through formats (collage, storybook, graphic novel, film, animation) which depend on the logic of compositing. Within this logic, the work is made through a story-dependent arrangement of visual components. The arrangement is arbitrary as it depends only on the demands of the story and does not correspond to any other underlying condition – there is no model underneath. In comparison, a game such as, for example, SimCity is not a fiction precisely because it depends on the logic of a simulation: a testable, empirical mathematical model which governs its visual and narrative space. A simulation is fundamentally different from a fiction, and a story is not a model. 

This is one of the reasons why it seems important to rethink the concept of design fiction through the new core idea of simulation.[9] In the book Virtual Worlds as Philosophical Tools, Stefano Gualeni traces a lineage of thinking about simulations to Espen Aarseth’s 1994 text called Hyper/Text/Theory, and specifically to the idea of cybertextuality. According to this line of reasoning, simulations contain an element not found in fiction and thus need an ontological category of their own: “Simulations are somewhere between reality and fiction: they are not obliged to represent reality, but they have an empirical logic of their own, and therefore should not be called fictions.”[10] This presents us with a fundamental insight into the use of simulations as the future of architectural design: they model internally coherent, testable worlds and go beyond mere fiction-making into worldmaking proper. 

Simulations, games and systems 

In the world of video games, there exists a genre of “serious” simulation games, which comprises games like Maxis software’s SimCity and The Sims, as well as some other important games like Sid Meier’s Civilization and Paradox Studio’s Stellaris. These games are conceptually very ambitious and extremely complex, as they model the evolution of whole societies and civilisations, operate on very long timescales, and consist of multiple nested models that simulate histories, economies and evolutions of different species at multiple scales. One important feature and obligation of this genre is to present a coherent, legible image of the world, to give a face to the immense complexity of the model. The “user interface” elements of these kinds of games work together to tell a coherent story, while the game world, rendered in full 3D in real time, provides an immersive visual and aesthetic experience for the player. Contrary to almost any other type of software, these interfaces are more indebted to the history of scientific illustration and data visualisation than they are to the history of graphic design. These types of games are open-ended and not bound to one goal, and there is rarely a clear win state.  

Figure 3 – SimEarth main user interface with the Gaia window.

Another feature of the genre is a wealth of underlying mathematical models, each providing for the emergence of complexity and each carrying its own assumptions and biases. For example, SimCity is well known (and some would say notorious) for its rootedness in Jay Forrester’s Urban Dynamics approach to modelling urban phenomena, which means that its mathematical model delivers very specific urban conditions – and ultimately, a very specific vision of what a city is and could be.[11] One of the main questions in the seminar became how we might update this approach on two fronts: by rethinking the mathematical model, and by rethinking the urban assumptions of the conceptual model.

The work of the game designer Will Wright, the main designer behind the original SimCity as well as The Sims and Spore, is considered to be at the origin of simulation games as a genre. Wright has developed a vast body of knowledge on modelling simulations, some of which he presented in his influential 2003 talk at the Game Developers Conference (GDC), titled “Dynamics for Designers”.[12] In this talk, Wright outlines a fully-fledged theory of modelling complex phenomena for interactivity, focusing on topics such as “How we can use emergence to model larger possibility spaces with simpler components”. Some of the main points: science is a modelling activity, and until now it has used traditional mathematics as its primary modelling method, which has limits when dealing with complex, dynamic and emergent systems. Since the advent of the computer, simulation has emerged as an alternative way of modelling. The two are very different: in Wright’s view, maths is a more linear process, built from complex equations, while simulation is a more parallel process, with simpler components interacting together. Wright also discusses stochastic (random probability distribution) and Monte Carlo (“brute force”) methods as examples of the simulation approach.
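
A tiny example of the contrast Wright draws, estimating pi rather than solving for it analytically: a Monte Carlo simulation brute-forces an answer through many simple random trials.

```python
import random

def estimate_pi(trials: int) -> float:
    # Sample random points in the unit square; the fraction that lands
    # inside the quarter circle approximates pi / 4.
    hits = sum(1 for _ in range(trials)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / trials

print(estimate_pi(1_000_000))  # approaches 3.14159... as trials grow
```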

Figure 4 – SimEarth civilisation model with sliders.

Wright's work grew out of a deep interest in exploring how non-linear models are constructed and represented within the context of interactive video games, and his design approach was to invent novel game design techniques based directly on System Dynamics, a discipline that deals with the modelling of complex, unpredictable and non-linear phenomena. The field has its roots in the cybernetic theories of Norbert Wiener, but it was founded and formalised in the mid-1950s by Professor Jay Forrester at MIT, and later developed by Donella H. Meadows in her seminal book Thinking in Systems.[13]  

System dynamics is an approach to understanding the non-linear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.[14,15] Forrester (1918–2016) was an American computer engineer and systems scientist, credited as the "founding father" of system dynamics. He started by modelling corporate supply chains and went on to model cities by describing "the major internal forces controlling the balance of population, housing and industry within an urban area", which he claimed could "simulate the life cycle of a city and predict the impact of proposed remedies on the system".[16] In the book Urban Dynamics, Forrester turned the city into a formula with just 150 equations and 200 parameters.[17] The book was very controversial, as it implied extreme anti-welfare politics and, through its "objective" mathematical model, promoted neoliberal ideas of urban planning. 
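
The mechanics of a stock-and-flow model are easy to sketch. The following is a hedged, minimal example in the spirit of system dynamics, with two stocks coupled by a feedback variable; the names and coefficients are invented for illustration and are not Forrester's actual equations.

```python
# A hedged sketch of a stock-and-flow model in the spirit of system
# dynamics. The two stocks (population, housing) are coupled through a
# feedback variable; all coefficients are invented for illustration and
# are not Forrester's actual Urban Dynamics equations.
def simulate(years: int = 50, dt: float = 1.0):
    population, housing = 10_000.0, 4_000.0        # initial stock levels
    history = []
    for _ in range(years):
        attractiveness = housing / population      # internal feedback loop
        in_migration = 0.05 * population * attractiveness   # inflows
        construction = 0.06 * housing * attractiveness
        out_migration = 0.03 * population                   # outflows
        demolition = 0.01 * housing
        population += (in_migration - out_migration) * dt   # update stocks
        housing += (construction - demolition) * dt
        history.append((population, housing))
    return history

for population, housing in simulate()[::10]:       # print every 10th year
    print(f"population={population:8.0f}  housing={housing:8.0f}")
```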

In another publication, World Dynamics, Forrester presented "World2", a system dynamics model of our world which became the basis of all subsequent models predicting a collapse of our socio-technological-natural system by the mid-21st century. Nine months after World Dynamics, the report Limits to Growth was published, which used the "World3" computer model to simulate the consequences of interactions between the Earth and human systems. Commissioned by the Club of Rome, the findings of the study were first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971, and predicted societal collapse by the year 2040. Most importantly, the report put the idea of a finite planet into focus. 

Figure 5 – Jay W. Forrester, World2 model, base for all subsequent predictions of collapse such as Limits to Growth.

The main case study in the seminar was Wright's 1990 game SimEarth, a life simulation video game in which the player controls the development of a planet. In developing SimEarth, Wright worked with the English scientist James Lovelock, who served as an advisor and whose Gaia hypothesis of planetary evolution was incorporated into the game. Continuing the system dynamics approach developed for SimCity, SimEarth was an attempt to model a scientifically accurate approximation of the entire Earth system through the application of customised system dynamics principles. The game modelled multiple interconnected systems and included realistic feedback between land, ocean, atmosphere, and life itself. The game's user interface even featured a "Gaia Window", in direct reference to the Gaia theory, which states that life plays an intimate role in planetary evolution and the regulation of planetary systems. 

One of the tutorial levels for SimEarth featured a playable model of Lovelock's "Daisyworld" hypothesis, which postulates that life itself evolves to regulate its environment, forming a feedback loop and making it more likely for life to thrive. During the development of a life-detecting device for NASA's Viking lander mission to Mars, Lovelock made a profound observation: life tends to increase the order of its surroundings, so studying the atmospheric composition of a planet can provide sufficient evidence of life's existence. Daisyworld is a simple planetary model designed to show the long-term effects of coupling and interdependence between life and its environment. In its original form, it was introduced as a defence against the criticism that the Gaia theory of the Earth as a self-regulating homeostatic system requires teleological control rather than being an emergent property. Its central premise, that living organisms can have major effects on the climate system, is no longer controversial. 

Figure 6 – SimEarth full planetary model.

In SimEarth, the planet itself is alive, and the player is in charge of setting the initial conditions as well as maintaining and guiding the outcomes through the aeons. Once a civilisation emerges, the player can observe the various effects, such as the impacts of changes in atmospheric composition due to fossil fuel burning, or the temporary expansion of ice caps in the aftermath of a major nuclear war. SimEarth’s game box came with a 212-page game manual that was at once a comprehensive tutorial on how to play and an engrossing lesson in Earth sciences: ecology, geology, meteorology and environmental ethics, written in accessible language that anyone could understand.  

Figures 7 & 8 – Planet Garden simplified model and main game loop.

SimEarth, and serious simulation games in general, represent a way that games can serve a public-education function while remaining a form of popular entertainment. The genre is also a compelling validation of the claim that video games can be valuable cultural artefacts. Ian Bogost writes: "This was a radical way of thinking about video games: as non-fictions about complex systems bigger than ourselves. It changed games forever – or it could have, had players and developers not later abandoned modelling systems at all scales in favor of representing embodied, human identities."[18] 

Lessons that architectural design can learn from these games are many and varied, the most important one being that it is possible to think about big topics by employing models and systems while maintaining an ethos of exploration, play and public engagement. In this sense, one could say that a simulation game format might be a contemporary version of Humboldt’s illustration, with the added benefit of interactivity; but as we have seen, there is a more profound, crucial difference – this format goes beyond just a representation, beyond just a fiction, into worldmaking.  

As a result of this research, the students in the seminar utilised Unreal Engine to create version one (v.1) of Planet Garden, a multi-scalar, interactive, playable model of a self-sustaining, wind- and solar-powered robotic garden, set in a desert landscape. The simulation was envisioned as a kind of reverse city builder, in which the goal is to terraform a desert landscape by deploying different kinds of energy-producing technologies until the right conditions are met for planting and the production of oxygen. The basic game loop is built on the interaction between the player and four main resources: energy, water, carbon and oxygen. In the seminar, we also created a comprehensive game manual. The aims of the project were to learn how to model dynamic systems and to explore how game workflows can be used as ways to address urban issues. 
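
The loop itself can be sketched in a few lines. The following is a hypothetical reconstruction from the description above, not the seminar's Unreal Engine implementation; all names and rates are invented.

```python
# A hypothetical reconstruction of the Planet Garden resource loop described
# above -- not the seminar's actual Unreal Engine implementation. All names
# and rates are invented for illustration.
state = {"energy": 0.0, "water": 0.0, "carbon": 100.0, "oxygen": 0.0, "plants": 0}

def tick(state, turbines, solar_panels, condensers):
    state["energy"] += 2.0 * turbines + 1.5 * solar_panels  # power generation
    drawn = min(state["energy"], 1.0 * condensers)          # condensers draw power
    state["energy"] -= drawn
    state["water"] += 0.5 * drawn                           # ...to harvest water
    if state["water"] >= 10.0:                              # planting threshold
        state["water"] -= 10.0
        state["plants"] += 1
    state["carbon"] = max(0.0, state["carbon"] - 0.2 * state["plants"])
    state["oxygen"] += 0.2 * state["plants"]                # terraforming goal

for _ in range(100):          # the loop can also run without player input
    tick(state, turbines=3, solar_panels=2, condensers=2)
print(state)
```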

Planet Garden is projected to become a big game for the Getty exhibition: a simulation of a planetary ecosystem as well as a city for 10 billion people. We aim to model various aspects of the planetary city, and the player will be able to operate on multiple spatial sectors and urban scales. The player can explore different ways to influence the development and growth of the city and test many scenarios, but the game will also run on its own, so that the city can exist without direct player input. Our game utilises core design principles that relate to system dynamics, evolution, environmental conditions, and change. A major point is the player's input and decision-making process, which influence the outcome of the game. The game will also be able to present the conditions and consequences of this urban thought experiment, as something is always at stake for the player.  

The core of the simulation-as-a-model idea is that design should have testable consequences. The premise of the project is not to construct a single truthful, total model of an environment but to explore ways of imaging the world through simulation and to open new avenues for holistic thinking about the interdependence of actors, scales and world systems. If the internet ushered in a new age of billions of partial, identitarian viewpoints, all aggregating into an inchoate world gestalt, is it time to rediscover a new image of the interconnected world? 

Figure 9 – Planet Garden screenshot, late game state.
Figures 10–16 – Planet Garden v.1.

References

[1] For a longer discussion on this, see O. M. Ungers, City Metaphors, (Cologne: Buchhandlung Walther Konig, 2011). For the central place of analogies in scientific modeling, see M. Hesse, Models and Analogies in Science, and also Douglas Hofstadter, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking (Basic Books, 2013). 

[2] The term “worldmaking” comes from Nelson Goodman’s book Ways of Worldmaking, and is used here to be distinguished from worldbuilding, a more narrow, commercially oriented term. 

[3] For a great introduction to the life and times of Alexander Von Humboldt, see A. Wulf, The Invention of Nature: Alexander von Humboldt’s New World (New York: Alfred A. Knopf, 2015).

[4] Quoted in H. G. Funkhouser, “Historical development of the graphical representation of statistical data”, Osiris 3 (1937), 269–404.

[5] T. J. Demos, Against The Anthropocene (Berlin: Sternberg Press, 2016).

[6] T. J. Demos, Against The Anthropocene (Berlin: Sternberg Press, 2016).

[7] Design Earth, Geostories (Barcelona: Actar, 2019); Design Earth, The Planet After Geoengineering (Barcelona: Actar, 2021). 

[8] L. Young, Planet City, (Melbourne: Uro Publications, 2020).

[9] For an extended discussion of the simulation as a format, see D. Jovanovic, “Screen Space, Real Time”, Monumental Wastelands 01, eds. D. Lopez and H. Charbel (2022). 

[10] S. Gualeni, Virtual Worlds as Philosophical Tools, (Palgrave Macmillan, 2015) 

[11] For an extended discussion on this, see Clayton Ashley, The Ideology Hiding in SimCity’s Black Box, https://www.polygon.com/videos/2021/4/1/22352583/simcity-hidden-politics-ideology-urban-dynamics 

[12] W. Wright, Dynamics for Designers, GDC 2003 talk, https://www.youtube.com/watch?v=JBcfiiulw-8.

[13] D. H. Meadows, Thinking in Systems, (White River Junction: Chelsea Green Publishing, 2008). 

[14] M. Arnaud, "World2 model, from DYNAMO to R", Towards Data Science, 2020, https://towardsdatascience.com/world2-model-from-dynamo-to-r-2e44fdbd0975.

[15] Wikipedia, “System Dynamics”, https://en.wikipedia.org/wiki/System_dynamics.

[16] J. W. Forrester, Urban Dynamics (Pegasus Communications, 1969).

[17] K. T. Baker, “Model Metropolis”, Logic 6, 2019, https://logicmag.io/play/model-metropolis.

[18] I. Bogost, "Video Games Are Better Without Characters", The Atlantic (2015), https://www.theatlantic.com/technology/archive/2015/03/video-games-are-better-without-characters/387556.

Situatedness: A Critical Data Visualisation Practice
Critical Practice, Data Feminism, Data Visualisation, Decolonisation, Situatedness
Catherine Griffiths

catgriff@umich.edu
Add to Issue
Read Article: 5497 Words

Data and its visualisation have been an important part of architectural design practice for many years, from data-driven mapping to building information modelling to computational design techniques, and now through the datasets that drive machine-learning tools. In architectural design research, data-driven practices can imbue projects with a sense of scientific rigour and objectivity, grounding design thinking in real-world environmental phenomena.

More recently, “critical data studies” has emerged as an influential interdisciplinary discourse across social sciences and digital humanities that seeks to counter assumptions made about data by invoking important ethical and socio-political questions. These questions are also pertinent for designers who work with data. Data can no longer be used as a raw and agnostic input to a system of analysis or visualisation without considering the socio-technical system through which it came into being. Critical data studies can expand and deepen the practice of working with data, enabling designers to draw on pertinent ideas in the emerging landscape around data ethics. Data visualisation and data-driven design can be situated in more complex creative and critical assemblages. This article draws on several ideas from critical data studies and explores how they could be incorporated into future design and visualisation projects.

Critical Data Studies

The field of critical data studies addresses data’s ethical, social, legal, economic, cultural, epistemological, political and philosophical conditions, and questions the singularly scientific empiricism of data and its infrastructures. By applying methodologies and insights from critical theory, we can move beyond a status quo narrative of data as advancing a technical, objective and positivist approach to knowledge.

Historical data practices have promoted false notions of neutrality and universality in data collection, embedding unintentional bias into data sets. The recognition that data is a political space was explored by Lisa Gitelman in "Raw Data" Is an Oxymoron, in which she argues that data does not exist in a raw state, like a natural resource, but is always undergoing a process of interpretation.[1] Big data is a relatively new phenomenon: data harvested from ever more extensive and nuanced facets of people's lives marks a shift in the stakes of power asymmetry and ethics. Critical data studies ties this relationship between data and society together.

The field emerged from the work of Kate Crawford and danah boyd, who in 2012 formulated a series of critical provocations given the rise of big data as an imperious phenomenon, highlighting its false mythologies.[2] Rob Kitchin's work has appraised data and data science infrastructures as a new social and cultural territory.[3] Andrew Iliadis and Federica Russo use the theory of assemblages to capture the multitude of ways that already-composed data structures inflect and interact with society.[4] These authors all seek to situate data in a socio-technical framework from which data cannot be abstracted. For them, data is an assemblage, a cultural text, and a power structure that must be available for interdisciplinary interpretation.

Data Settings and Decolonisation

Today, with the increasing access to large data sets and the notion that data can be extracted from almost any phenomena, data has come to embody a sense of agnosticism. Data is easily abstracted from its original context, ported to somewhere else, and used in a different context. Yanni Loukissas is a researcher of digital media and critical data studies who explores concepts of place and locality as a means of critically working with data. He argues that “data have complex attachments to place, which invisibly structure their form and interpretation”.[5] Data’s meaning is tied to the context from which it came. However, the way many people work with data today, especially in an experimental context, assumes that the origin of a data set does not hold meaning and that data’s meaning does not change when it is removed from its original context.

In fact, Loukissas claims, "all data are local", and the reconsideration of locality is an important critical data tactic.[6] Asking where data came from, who produced it, when, and why, what instruments were used to collect it, what kind of conditioned audience it was intended for, and how these invisible attributes might inform its composition and interpretation are all questions that reckon with a data set's origin story. Loukissas proposes "learning to analyse data settings rather than data sets".[7] The term "data set" evokes a sense of the discrete, fixed, neutral, and complete, whereas the term "data setting" counters these qualities and awakens us to a sense of place, time, and the nuances of context.

From a critical data perspective, we can ask why we strive for the digital and its data to be so place-agnostic: a totalising system of norms that erases a myriad of cultures. The myth of placelessness in data implies that everything can be treated equally by immutable algorithms. Loukissas concludes, "[o]ne reason universalist aspirations for digital media have thrived is that they manifest the assumptions of an encompassing and rarely questioned free market ideology".[8] We should insist upon data's locality and its multiple and specific origins to resist such an ideology.

“If left unchallenged, digital universalism could become a new kind of colonialism in which practitioners at the ‘periphery’ are made to conform to the expectations of a dominant technological culture.

If digital universalism continues to gain traction, it may yet become a self-fulfilling prophecy by enforcing its own totalising system of norms.”[9]

Loukissas’ incorporation of place and locality into data practices comes from the legacy of postcolonial thinking. Where Western scientific knowledge systems have shunned those of other cultures, postcolonial studies have sought to illustrate how all knowledge systems are rooted in local- and time-based practices and ideologies. For educators and design practitioners grappling with how to engage in the emerging discourse of decolonisation in pedagogy, data practices and design, Loukissas’ insistence on reclaiming provenance and locality in the way we work with abstraction is one way into this work.

Situated Knowledge and Data Feminism

Feminist critiques of science have also invoked notions of place and locality to question the epistemological objectivity of science. The concept of situated knowledge comes from Donna Haraway's work to envision a feminist science.[10] Haraway is a scholar of Science and Technology Studies and has written about how feminist critiques of masculinity, objectivity and power can be applied to the production of scientific knowledge to show how knowledge is mediated by and historically grounded in social and material conditions. Situated knowledge can reconcile issues of positionality, subjectivity, and their inherently contestable natures to produce a greater claim to objective knowledge, or what Sandra Harding has defined as "strong objectivity".[11] Concepts of situatedness and strong objectivity are part of feminist standpoint theory. Patricia Hill Collins further proposes that the intersectional marginalised experiences of women and minorities – black women, for example – offer a distinctive point of view and experience of the world that should serve as a source for new knowledge that is more broadly applicable.[12]

How can we take this quality of situatedness from feminist epistemology and apply it to data practices, specifically the visualisation of data? In their book Data Feminism, Catherine D’Ignazio and Lauren Klein define seven principles to apply feminist thinking to data science. For example, principle six asks us to “consider context” when making sense of correlations when working with data.

“Rather than seeing knowledge artifacts, like datasets, as raw input that can be simply fed into a statistical analysis or data visualisation, a feminist approach insists on connecting data back to the context in which they were produced. This context allows us, as data scientists, to better understand any functional limitations of the data and any associated ethical obligations, as well as how the power and privilege that contributed to their making may be obscuring the truth.”[13]

D’Ignazio and Klein argue that “[r]efusing to acknowledge context is a power play to avoid power. It is a way to assert authoritativeness and mastery without being required to address the complexity of what the data actually represent”.[14] Data feminism is an intersectional approach to data science that counters the drive toward optimisation and convergence in favour of addressing the stakes of intersectional power in data.

Design Practice and Critical Data Visualisation

The visualisation of data is another means of interpreting data. Data visualisation is part of the infrastructure of working with data and should also be open to critical methods. Design and visualisation are processes through which data can be treated with false notions of agnosticism and objectivity, or can be approached critically, questioning positionality and context. Even when data practices explore creative, speculative, and aesthetic-forward techniques, a critical approach can extend and enrich the data artefacts produced. We should therefore critically reflect on the processes and infrastructures through which we design and aestheticise data.

How can we take the concept of situatedness that comes out of critical data studies and deploy it in creative design practice? What representational strategies support thinking through situatedness as a critical data practice? Could we develop a situated data visualisation practice?

The following projects take up these questions through design research, digital humanities and critical computational methods. They are experiments that demonstrate techniques for thinking critically about data and how that critique can be incorporated into data visualisation. The work also expands from the visualisation of data toward the visualisation of computational processes and of the software infrastructure that engineers visualisations. There is also a shift from exploring situatedness as a notion of physical territory toward a notion of socio-political situatedness. The following works all take the form of short films, animations and simulations.

Alluvium

Figure 1 – A situating shot of the Gower Gulch site, to capture both scales of assessment: wide-angle photography shows the geomorphological consequences of flood water on the landscape, whilst macro photography details the granular role of sedimentation.

Cinematic data visualisation is a practice of visually representing data. It combines cinematic aesthetics, including photography's traditional concerns of framing, motion and focus, with contemporary virtual cinematography's techniques of camera-matching and computer-generated graphics. This process intertwines and situates data in a geographic and climatic environment, retaining the data's relationship with its source and the relevance that origin holds for its meaning.

As a cinematic data visualisation, Alluvium presents the results of a geological study on the impact of diverted flood waters on a sediment channel in Death Valley, California. The scenes took their starting point from Dr Noah Snyder and Lisa Kammer's 2008 study.[15] Gower Gulch is a 1941 diversion of a desert wash that offers an expedited view of geological change: processes that would normally take thousands of years to unfold have evolved at this site within decades, owing to the strength of the flash floods and the conditions of the terrain.

Gower Gulch provides a unique opportunity to see how a river responds to an extreme change in water and sediment flow rates, presenting effects that could mimic the impact of climate change on river flooding and discharge. The wash was originally diverted to prevent further flooding and damage to a village downstream; today, it presents us with a microcosm of geological activity. The research paper presents data as historical water flow that can only be measured and perceived retrospectively through the evidence of erosion and sediment deposition at the site.

Figure 2 – A situated visualisation combining physical cinematography and virtual cinematography to show a particle simulation of flood waters. 

Alluvium’s scenes are a hybrid composition of film and digitally produced simulations that use the technique of camera-matching. The work visualises the geomorphological consequences of water beyond human-scale perception. A particle animation was developed using accurate topographic models to simulate water discharge over a significant period. Alluvium compresses this timeframe, providing a sense of a geological scale of time, and places the representation and simulation of data in-situ, in its original environment.
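
The underlying technique can be approximated in a few lines: particles are advected down the gradient of a heightfield, pooling where the terrain pools them. The sketch below uses placeholder elevation data and stands in for, rather than reproduces, Alluvium's production setup.

```python
import numpy as np

# Schematic particle advection over a heightfield: each particle steps
# downhill along the local elevation gradient, a crude stand-in for the
# water-discharge simulation described above (not Alluvium's actual setup;
# the terrain here is random placeholder data, not the Gower Gulch model).
rng = np.random.default_rng(0)
terrain = rng.random((100, 100))                 # placeholder elevation grid
grad_y, grad_x = np.gradient(terrain)            # slope along each axis

particles = rng.uniform(0, 99, size=(500, 2))    # (y, x) particle positions
for _ in range(200):                             # simulation steps
    iy = particles[:, 0].astype(int)
    ix = particles[:, 1].astype(int)
    downhill = np.stack([grad_y[iy, ix], grad_x[iy, ix]], axis=1)
    particles -= 5.0 * downhill                  # move against the gradient
    particles = np.clip(particles, 0.0, 99.0)    # keep particles on the grid

print(particles[:5])    # final positions, pooled in local minima
```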

In Alluvium, data is rendered more accessible and palpable through the relationship between the computationally-produced simulation of data and its original provenance. The data’s situatedness takes place through the way it is embedded into the physical landscape, its place of origin, and how it navigates its source’s nuanced textures and spatial composition.

The hybridised cinematic style that is produced can be deconstructed into elements of narrative editing, place, motion, framing, depth of field and other lens-based effects. The juxtaposition of the virtual and the real through a cinematic medium supports a recontextualisation of how data can be visualised and how an audience can interpret that visualisation. In this case, it is about geographic situatedness, retaining the sense of physical and material qualities of place, and the particular nuances of the historical and climatic environment.

Figure 3 – The velocity of the particles is mapped to their colouration, visualising water’s characteristic force, directionality and turbulence. The simulation is matched to a particular site of undercut erosion, so that the particles appear to carve the physical terrain.

Death Valley National Park, situated in the Mojave Desert in the United States, is a place of extreme conditions. It has the highest temperature (57° Celsius) and the lowest altitude (86 metres below sea level) to be recorded in North America. It also receives only 3.8 centimetres of rainfall annually, registering it as North America’s driest place. Despite these extremes, the landscape has an intrinsic relationship with water. The territorial context is expressed through the cinematic whilst also connecting the abstraction of data to its place of origin.

For cinematic data visualisation, these elements are applied to the presentation of data, augmenting it into a more sensual narrative that loops back to its provenance. As a situated practice, cinematic data visualisation foregrounds a relationship with space and place. The connection between data and the context from which it was derived is retained, rather than the data being extracted, abstracted, and agnostically transferred to a different context in which site-specific meaning can be lost. It grapples with ways to foreground relationships between the analysis and representation of data and its environmental and local situation.

LA River Nutrient Visualization

Figure 4 – Reconstruction of the site of study, the Los Angeles River watershed from digital elevation data, combined with nutrient data from river monitoring sites.

Another project in the same series, the LA River Nutrient Visualization, considers how incorporating cinematic qualities into data visualisation can support a sense of positionality and perspective amongst heterogeneous data sets. This can be used to undermine data's supposed neutrality and promote an awareness of data containing the various concerns and stakes of different groups of people. Visualising data's sense of positionality and perspective is another tactic to produce a sense of situatedness as a critical data visualisation practice. Whilst the water quality data used in this project appeared the same scientifically, it was collected by different groups: locally organised communities versus state institutions. The differences in why the data was collected, and by whom, hold significance, and the project sought to incorporate that into the representational strategy of the data visualisation.

This visualisation analyses nutrient levels, specifically nitrogen and phosphorus, in the water of the Los Angeles River, which testify to pollution levels and portray the river’s overall health. Analysed spatially and animated over time, the data visualisation aims to provide an overview of the available public data, its geographic, seasonal and annual scope, and its limitations. Three different types of data were used: surface water quality data from state and national environmental organisations, such as the Environmental Protection Agency and the California Water Science Center; local community-organised groups, such as the River Watch programme by Friends of the Los Angeles River and citizen science group Science Land’s E-CLAW project; and national portals for remotely-sensed data of the Earth’s surface, such as the United States Geological Survey.

The water quality data covers a nearly 50-year period from 1966 to 2014, collected from 39 monitoring stations distributed from the river's source to its mouth, including several tributaries. Analysis showed changes in the river's health based on health department standards, with areas of significantly higher concentrations of nutrients that consistently exceeded Water Quality Objectives.

Figure 5 – Virtual cameras are post-processed to add lens-based effects such as shallow depth of field and atmospheric lighting and shadows. A low, third-person perspective is used to position the viewer with the data and its urban context.

The water quality data is organised spatially using a digital elevation model (DEM) of the river’s watershed to create a geo-referenced 3D terrain model that can be cross-referenced with any GPS-associated database. A DEM is a way of representing remotely-captured elevation, geophysical, biochemical, and environmental data about the Earth’s surface. The data itself is obtained by various types of cameras and sensors attached to satellites, aeroplanes and drones as they pass over the Earth.
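
Mechanically, cross-referencing a GPS-associated database with a DEM reduces to an affine transform from geographic coordinates to raster indices. A minimal sketch, assuming a north-up grid with uniform cell size; all values, including the station record, are hypothetical.

```python
import numpy as np

# Minimal sketch of cross-referencing GPS-tagged records against a DEM.
# Assumes a north-up raster with uniform cell size; a real DEM (e.g. a
# GeoTIFF) stores this affine transform in its metadata. All values here,
# including the station record, are hypothetical.
dem = np.random.rand(1000, 1000) * 500.0      # placeholder elevation raster
origin_lon, origin_lat = -118.70, 34.35       # upper-left corner of the grid
cell = 0.001                                  # degrees per raster cell

def sample_elevation(lon: float, lat: float) -> float:
    col = int((lon - origin_lon) / cell)      # column index grows eastward
    row = int((origin_lat - lat) / cell)      # row index grows southward
    return float(dem[row, col])

# A water-quality station keyed by GPS can now be draped onto the terrain:
station = {"id": "LAR-07", "lon": -118.23, "lat": 34.08, "nitrogen_mg_l": 3.2}
print(sample_elevation(station["lon"], station["lat"]))
```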

Analysis of the water data showed that the state- and national-organised data sets provided a narrow and inconsistent picture of nutrient levels in the river. Comparatively, the two community-organised data sets offered a broader and more consistent approach to data collection. What emerged from comparing the three data sets, how they were collected, and who collected them ultimately informed the meaning of the project, a necessary move for a critical data visualisation.

Visually, the data was arranged and animated within the 3D terrain model of the river's watershed and presented as a voxel urban landscape. Narrative scenes were created by animating slow virtual camera pans within the landscape to visualise the data from a more human, low, third-person point of view. These datascapes were post-processed with cinematic effects: simulating a shallow depth of field, ambient "dusk-like" lighting, and shadows. Additionally, the computer-generated scenes were juxtaposed with physical camera shots of the actual water monitoring sites, scenes captured by a commercial drone. Unlike in Alluvium, the two types of camera are not digitally matched. The digital scenes locate and frame the viewer within the data landscape, whereas the physical photography provides a local geographic reference point for the abstracted data. This also gives the data a sense of scale and invites the audience to consider each data collection site in relation to its local neighbourhood. The representational style of the work overall creates a cinematic tempo and mood, informing a more narrative presentation of abstract numerical data.

Figure 6 – Drone-captured aerial video of each data site creates an in-situ vignette of the site’s local context and puts the data back into communication with its local neighbourhood. This also speaks to the visualisation’s findings that community organisation and citizen science was a more effective means of data collection and should be recognised in the future redevelopment of the LA River.

In this cinematic data visualisation, situatedness is engaged through the particular framing and points of view established in the scenes and through the juxtaposition of cinematography of the actual data sites. Here, place is social; it is about local context and community rather than a solely geographical sense of place. Cinematic aesthetics convey the “data setting” through a local and social epistemic lens, in contrast to the implied frameless and positionless view with which state-organised data is collected, including remotely-sensed data.

All the water data consisted of scientific measurements of nitrogen and phosphorus levels in the river. Numerically, the data is uniform, but the fact that different stakeholders collected it with different motivations and needs affects its interpretation. Furthermore, the fact of whether data has been collected by local communities or state institutions informs its epistemological status concerning agency, motivation, and environmental care practices.

Context is important to the meaning that the data holds, and the visualisation strategy seeks to convey a way to think about social and political equity and asymmetry in data work. The idea of inserting perspective and positionality into data is an important one. It is unusual to think of remotely-sensed data or water quality data as having positionality or a perspective. Many instruments of visualisation present their artefacts as disembodied. Remotely-sensed data is usually presented as a continuous view from everywhere and nowhere simultaneously. However, feminist thinking’s conception of situated knowledge asks us to remember positionality and perspective to counter the sense of framelessness in the traditional tools of data collection and analysis.

Cinema for Robots

Figure 7 – A point cloud model of the site underneath the Colorado Street Bridge in Pasadena, CA, showing a single camera position from the original video capture.

Cinema for Robots was the beginning of an exploration into the system that visualises data, rather than the data visualisation itself being the outcome. The project presents a technique for considering how to visualise computational process, instead of presenting data as only a fixed and retrospective artefact. It critically investigates the technique of photogrammetry, using design to reflexively consider positionality in the production of a point cloud. In this case, the quality of situatedness is created by countering the otherwise frameless point cloud data visualisation with animated recordings of the body's position behind the camera that produced the data.

Photogrammetry is a technique in which a 3D model is computationally generated from a series of digital photographs of a space (or object). The photographs are taken systematically from many different perspectives and overlapping at the edges, as though mapping all surfaces and angles of the space. From this set of images, an algorithm can compute an accurate model of the space represented in the images, producing a point cloud. In a point cloud, every point has a 3D coordinate that relates to the spatial organisation of the original space. Each point also contains colour data from the photographs, similarly to pixels, so the point cloud also has a photographic resemblance. In this project, the point cloud is a model of a site underneath the Colorado Street Bridge in Pasadena, California. It shows a mixture of overgrown bushes and large engineered arches underneath the bridge.
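
Concretely, a point cloud is little more than a table of rows, each holding a coordinate and a colour. A minimal sketch of the structure, with invented values:

```python
import numpy as np

# A point cloud as a plain array: each row is one reconstructed point,
# holding a 3D coordinate and the RGB colour sampled from the photographs.
# The two points are invented; real outputs (e.g. PLY files) hold millions
# of rows and often further attributes such as surface normals.
points = np.array([
    #   x,     y,     z,   r,   g,   b
    [12.41,  3.05, -7.88, 142, 156, 120],   # a foliage-coloured point
    [15.02,  9.71, -2.34, 188, 174, 161],   # a concrete-arch point
])

centroid = points[:, :3].mean(axis=0)   # spatial statistics ignore colour
print(centroid)
```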

Figure 8 – A perspective of the bridge looking upwards with two camera positions that animate upwards in sync with the video.

The image set was created from a video recording of the site from which still images were extracted. This image set was used as the input for the photogrammetry algorithm that produced the point cloud of the site. The original video recordings were then inserted back into the point cloud model, and their camera paths were animated to create a reflexive loop between the process of data collection and the data artefact it produced.
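
The extraction step is straightforward; a sketch using OpenCV, with a hypothetical file name and sampling stride:

```python
import cv2

# Sketch of turning a site video into a photogrammetry image set: keep one
# frame in every fifteen so that consecutive images still overlap at the
# edges. The file name is hypothetical; the stride depends on camera speed.
capture = cv2.VideoCapture("bridge_site.mp4")
index = saved = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    if index % 15 == 0:                          # sample at a regular stride
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    index += 1
capture.release()
print(f"{saved} images extracted for the photogrammetry algorithm")
```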

With photogrammetry, data, computation, and representation are all entangled. Similarly to remotely-sensed data sets, the point cloud model expresses a framelessness, a perspective of space that appears to achieve, as Haraway puts it, "the god trick of seeing everything from nowhere".[16] By reverse-engineering the camera positions and reinserting them into the point cloud of spatial data points, a reflexive computational connection is made between data that appears perspectiveless and the human body that produced it. In the series of animations comprising the project, the focus is on the gap between the capturing of data and the computational process that visualises it. The project also juxtaposes cinematic and computational aesthetics to explore the emerging gaze of new technologies.

Figure 9 – Three camera positions are visible and animated simultaneously to show the different positions of the body capturing the video that was the input data for the point cloud.

The project is presented as a series of animations that embody and mediate a critical reflection on computational process. In one animation, the motion of a hand-held camera creates a particular aesthetic that further accentuates the body behind the camera that created the image data set. It is not a smooth or seamless movement but unsteady and unrefined. This bodily camera movement is then passed on to the point cloud model, rupturing its seamlessness. The technique is a way to reinsert the human body and a notion of positionality into the closed-loop of the computational process. In attempting to visualise the process that produces the outcome, reflexivity allows one to consider other possible outcomes, framings, and positions. The animations experiment with a form of situated computational visualisation.

Automata I + II

Figure 10 – A satellite image of the Meeting of Waters in the Amazon region in Brazil. The original image shows the confluence of two rivers that flow together but do not mix. Pixel operations driven by agents change the composition of the landscape.

This work took the form of a series of simulations that critically explored a “computer vision code library” in an open-ended way. The simulations continued an investigation into computational visualisation rather than data visualisation. The process sought to reverse-engineer machine vision software – an increasingly politically contentious technology – and critically reflect on its internal functionality. Here, source code is situated within a social and political culture rather than a neutral and technical culture. Instead of using a code library instrumentally to perform a task, the approach involves critically reading source code as a cultural text and developing reflexive visualisations that explore its functions critically.

Many tools we use in design and visualisation were developed in the field of computer vision, which engineers how computers see and make sense of the world, including through the camera-tracking and photogrammetry discussed previously. In Automata I, the OpenCV library (an open-source computer vision code library) was used. Computer vision comprises many functions layered on top of one another, acting as matrices that filter and analyse images in different ways to make them interpretable by algorithms. Well-known filters are "blob detection" and "background subtraction". Simply changing a colour image to greyscale is also an important function within computer vision.
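
The layering can be seen in a handful of calls. A sketch using standard OpenCV functions, on a hypothetical input image:

```python
import cv2

# A sketch of the layered filters named above, using standard OpenCV calls;
# the input file is hypothetical. Each step is a matrix operation that
# re-describes the image for the next stage of the algorithm.
image = cv2.imread("satellite.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)        # greyscale matrix
blurred = cv2.GaussianBlur(grey, (5, 5), 0)           # suppress pixel noise
_, mask = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)

# "Blob detection": connected bright regions become discrete objects that
# the algorithm can count, track and reason about.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} blobs detected")
```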

Figure 11 – A greyscale filter shows the algorithmic view of the same landscape and computational data.

Layering these filters onto input images helps to distinguish how humans see and interpret the world from how an algorithm is programmed to see and interpret it. Reading the code makes it possible to understand the pixel logic at play in the production of a filter, in which each pixel in an image computes its value based on the pixel values around it, producing various matrices that filter information in the image. The well-known "cellular automata" algorithm applies a similar logic, as does "Langton's ant".
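
That pixel logic can be written directly: each cell's next value is a function of its neighbourhood. A minimal sketch, using Conway's Game of Life rule as a generic example rather than the seminar's own rule set:

```python
import numpy as np

# A minimal pixel-logic sketch: every cell recomputes its value from the
# eight cells around it. Conway's Game of Life rule is used as a generic
# example; it is not the rule set written for Automata I.
def step(grid: np.ndarray) -> np.ndarray:
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & (neighbours == 3)
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survive).astype(int)

grid = (np.random.default_rng(1).random((64, 64)) > 0.6).astype(int)
for _ in range(10):
    grid = step(grid)     # the image "repatterns" itself, step by step
```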

A series of simulations were created using a satellite image of a site in the Amazon called the Meeting of Waters, which is the confluence of two rivers, the dark-coloured Rio Negro and the sandy-coloured Amazon River. Each river has different speeds, temperatures and sediments, so the two rivers do not merge but flow alongside each other in the same channel, visibly demarcated by their different colours.

The simulations were created by writing a new set of rules, or pixel logics, to compute the image, which had the effect of “repatterning” it. Analogously, this also appeared to “terraform” the river landscape into a new composition. The simulations switch between the image that the algorithm “sees”, including the information it uses to compute and filter the image, and the image that we see as humans, including the cultural, social and environmental information we use to make sense of it. The visualisation tries to explore the notion of machine vision as a “hyperimage”, an image that is made up of different layers of images that each analyse patterns and relationships between pixels.

Automata II is a series of simulations that continue the research into machine vision techniques established in Automata I. This iteration looks further into how matrices and image analysis combine to support surveillance systems operating on video images. By applying pixel rule sets similar to those used in Automata I, the visualisation shows how the algorithm can detect motion in a video, separating moving figures in the foreground from the background – the basis of surveillance.

Figure 12 – Using the OpenCV code library to detect motion, a function in surveillance systems. Using a video of a chameleon, the analysis is based on similar pixel operations to Automata I.

In another visualisation, a video of a chameleon works analogously to explore how the socio-political function of surveillance emerges from the mathematical abstraction of pixel operations. Chameleons are well known for their ability to camouflage themselves by blending into their environment (and in many cultures are associated with wisdom). Here the algorithm is programmed to print the pixels when it detects movement in the video and to remain black when there is no movement. In the visualisation, the chameleon appears to reveal itself to the surveillance of the algorithm through its motion and to camouflage itself from the algorithm through its stillness. An aesthetic contrast is created between an ancient animal and the innovative technology that captures it; yet the chameleon resists the algorithm's logic of separating foreground from background through its simple embodiment of stillness.
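
The reveal-on-motion logic can be approximated by frame differencing; a sketch, with a hypothetical file name:

```python
import cv2
import numpy as np

# A sketch of the reveal-on-motion logic described above: pixels are shown
# only where consecutive frames differ and stay black where the scene is
# still. The file name is hypothetical, and the project's own rule set may
# differ in its details.
capture = cv2.VideoCapture("chameleon.mp4")
ok, previous = capture.read()
while ok:
    ok, frame = capture.read()
    if not ok:
        break
    difference = cv2.absdiff(frame, previous)              # per-pixel change
    moving = cv2.cvtColor(difference, cv2.COLOR_BGR2GRAY) > 20
    revealed = np.where(moving[..., None], frame, 0)       # black when still
    cv2.imshow("motion", revealed)
    cv2.waitKey(1)
    previous = frame
capture.release()
```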

Figure 13 – The algorithm was reconfigured to only reveal the pixel operations' understanding of movement. The chameleon disguises or reveals itself to the surveillance algorithm through its motion.

The work explores the coded gaze of a surveillance camera and how machine vision is situated in society, politically and apolitically, in relation to the peculiarly abstract pixel logics that drive it. Here, visualisation is a reverse-engineering of that coded gaze in order to politically situate source code and code libraries for social and cultural interpretation.

Final Thoughts

Applying critical theory to data practices, including data-driven design and data visualisation, provides a way to interrupt the adherence to the neutral-objective narrative. It offers a way to circulate data practices more justly back into the social, political, ethical, economic, legal and philosophical domains from which they have always derived. The visual techniques presented here, and the ideas about what form a critical data visualisation practice could take, were neither developed in tandem nor sequentially, but by weaving in and out of project developments, exhibition presentations, and writing opportunities over time. Thus, they are not offered as seamless examples but as entry points and options for taking a critical approach to working with data in design. The proposition of situatedness as a territorial, social, and political quality that emerges from decolonial and feminist epistemologies is one pathway in this work. The field of critical data studies, whilst still incipient, is developing a rich discourse that is opportune and constructive for designers, although not immediately associated with visual practice. Situatedness as a critical data visualisation practice has the potential to further engage the forms of technological development interesting to designers with the ethical debates and mobilisations in society today.

References

[1] L. Gitelman, “Raw Data” is an Oxymoron (Cambridge, MA: MIT Press, 2013).

[2] d. boyd and K. Crawford, “Critical Questions for Big Data: provocations for a cultural, technological, and scholarly phenomenon”, Information, Communication & Society 15 5 (2012), 662–79.

[3] R. Kitchin, The Data Revolution: big data, open data, data infrastructures & their consequences (Los Angeles, CA: Sage, 2014).

[4] A. Iliadis and F. Russo, “Critical Data Studies: an introduction”, Big Data & Society 3 2 (2016).

[5] Y. A. Loukissas, All Data are Local: thinking critically in a data-driven world (Cambridge, MA: MIT Press, 2019), 3.

[6] Ibid, 23.

[7] Ibid, 2.

[8] Ibid, 10.

[9] Ibid, 10.

[10] D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.

[11] S. Harding, “‘Strong objectivity’: A response to the new objectivity question”, Synthese 104 (1995), 331–349.

[12] P. H. Collins, Black Feminist Thought: consciousness and the politics of empowerment (London, UK: HarperCollins, 1990).

[13] C. D'Ignazio and L. F. Klein, Data Feminism (Cambridge, MA: MIT Press, 2020), 152.

[14] Ibid, 162.

[15] N. P. Snyder and L. L. Kammer, “Dynamic adjustments in channel width in response to a forced diversion: Gower Gulch, Death Valley National Park, California”, Geology 36 2 (2008), 187–190.

[16] D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.

Governing the Ground: Architecture v. the Rights of the Land 
Biological Diversity, Governing, land ownership, land rights, Rights of nature, Sustainable Development
Andrew Toland

andrew.toland@uts.edu.au
Add to Issue
Read Article: 4402 Words

Until recently, nature was wholly outside the law.[1] At most, it was property of one sort or another – to be bought and sold, securitised and commodified, and especially, in the old-fashioned phrase of the English common law, “improved”. Other “laws” – of physics, chemistry and biology – are not of consequence in this realm of capital “L” Law,[2] exempted because of their exceptionalism. Humans are distinct from and superior to other animals, a situation the Canadian environmental lawyer and academic David R. Boyd describes as “at odds with reality … any biologist will tell you that humans are animals”.[3] Black’s Law Dictionary, the dominant legal lexicon in North America, is at pains to point out that the legal definition of animals “includes all living creatures not human”.[4] Similarly, architecture presented itself as standing apart from nature. “Architecture, unlike the other arts, does not find its patterns in nature”, claimed Gottfried Semper in 1834.[5] Or Louis Kahn in 1969: “What man makes, nature cannot make.”[6] In what is ultimately a form of the cosmology of the modern, law and architecture sit apart from and superior to nature. Design, like the economic activities to which law gives its support, is about subduing nature and turning it to productive ends. In this model, both are methods of human governance of the natural world. Indeed, for centuries, architecture was among the key pieces of evidence cited for human exceptionalism – buildings and cities, just as in Laugier’s original parable of the hut as the first example of architecture, allowed humans to transcend the state of nature.[7, 8] At times, this line of Western thought had deeply pernicious consequences for other peoples throughout the world, as the presence or absence of architecture, as well as agricultural cultivation, became one of the key legal determinants that permitted European colonisers to expropriate the lands of indigenous peoples.[9] Architecture was thus enfolded into the law’s methods for imposing governance over unfamiliar lands and peoples, just as it structured the dominance over nature. But what would it mean, for architecture no less than for the law, if – as one of the provocations suggested by the editors of this journal proposes – nature were to govern itself? Developments in legal theory over the past several decades, as well as a handful of legal cases that have received wide media coverage, now allow us to consider this novel possibility. This article considers the rise of this “rights of nature” jurisprudence from the perspective of architecture and landscape architecture, with particular attention paid to the emergence of the (literal) law of “the land”, as well as what this emerging way of thinking about the natural world and its life and systems might mean for the design of the very ground itself. 

Media reporting on high profile lawsuits or settlements where legal standing has been claimed (and in some cases recognised) for landscapes, ecosystems and rivers, to enable them to sue as plaintiffs, has drawn attention to the rights of nature and related claims as strategies to protect ecosystems or seek accountability for environmental damage and destruction. This has involved instances as diverse as the Whanganui River in New Zealand,[10] the Ganges and Yamuna Rivers and the Gangotri and Yamunotri glaciers in India,[11, 12] the Colorado River in the United States,[13] the Amazon rainforest in Colombia,[14] and the Paraná Delta wetlands in Argentina.[15] In addition, by the start of 2021, 178 legal provisions derived from rights of nature legal theory had been documented in seventeen countries across five continents, with an additional thirty-seven under consideration in ten more countries. Rights of nature has also found expression in a range of international legal instruments, such as the United Nations' 2030 Agenda for Sustainable Development, the Convention on Biological Diversity, and in the jurisprudence of the Inter-American Court of Human Rights.[16] These approaches have their origins in the relatively recent fields of "earth jurisprudence" and "wild law".[17] Many of their arguments derive from the disjunction that has emerged between the law and advances in the ecological sciences; a critique of legal doctrines trapped in the discrete and mechanistic model of the natural world developed during the scientific revolution of the sixteenth and seventeenth centuries, when these foundational areas of the law were also fundamentally consolidated.[18] In contrast, earth jurisprudence and wild law seek to orient the law towards a scientific model of the world as made up of dynamic organic and material interrelationships, and away from anthropocentrism, subordination of the environment in the form of "property", and economic notions of ever-expanding "growth".[19] 

Figure 1 – Elements of the subterranean biome. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

Beyond this, the legal presumptions that give rise to the longstanding juridical status of nature also provide the basic conceptual structure within which the characteristic actions of modernity, including design, occur. The basic systems of procurement of architecture, landscape architecture, and urban and landscape planning and design all fundamentally depend on the system of property; on who has legal control or dominion over land, and the right to "exploit" its resources (a much more neutral term in legal parlance, but one which, nonetheless, opens the door for acts with much more negative and damaging consequences). Whether issued by individuals, corporations or the state, any design commission granted to an architect or landscape architect requires the commissioner to have the right to "improve" (again, in the sense of the archaic language of the law) the land in the first place. Before embarking on a further consideration of what the rights of nature might mean for design disciplines concerned with built and natural environments, it is worth examining in some detail how the very legal conceptualisation of the ground itself also involved the basic activities of architecture and landscape design. 

From the sixteenth century onwards, in English common law, one of the fundamental precepts governing land (and who had the right to do what on, under, and above the ground) was encapsulated in the Latin legal dictum, Cuius est solum, eius est usque ad coelum et ad infernos: "Whoever's is the soil, it is theirs all the way to Heaven and all the way to Hell."[20] The earliest recorded judicial authority for this approach has its origins in a basic architectural dispute. Sometime around 1586, an English landowner somewhere in Oxfordshire constructed a house blocking the light and views his neighbour had enjoyed for some three to four decades. The neighbour sued. The record of the judgment in that lawsuit, Bury v Pope, is a scant 123 words long and can be quoted in full:  

"Case for stopping of his light. – It was agreed by all the justices, that if two men be owners of two parcels of land adjoining, and one of them doth build a house upon his land, and makes windows and lights looking into the other's lands, and this house and the lights have continued by the space of thirty or forty years, yet the other may upon his own land and soil lawfully erect an house or other thing against the said lights and windows, and the other can have no action; for it was his folly to build his house so near to the other's land: and it was adjudged accordingly. 

Nota. Cujus est solum, ejus est summitas usque ad cœlum.”[21]

The final nine words echo down the centuries, certainly in the areas of the world touched by English common law, from mineral rights in Native American lands to mining leases in postcolonial Africa to tricky jurisdictional questions over carbon capture and storage. The careful reader will note that "et ad infernos" ("and to hell/the underworld") does not appear in the original Latin maxim at the end of the report of the original judgment. And yet by the eighteenth and nineteenth centuries, the common law doctrine, which has variously been claimed to have its origins in Roman or Jewish Law, had come to be accepted as applying to rights both above and below an owner's land. It is no coincidence that by this time claims and rights related to the extraction of mineral resources were of huge economic importance. In English common law, the parameters of land and land ownership, as originally conceived, emerged as spatially absolute – the law could not conceive of more intricate frameworks of interests or custodianship in which different parties or, indeed, different beings might share in the rights and responsibilities for the use and care of a given territory.  

Figure 2 – Surface/subsurface reciprocities. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

A few decades later, this fundamental principle of the law of Land (Terra, as presented in its Latin formulation) was elaborated in telling detail by the great systematiser of early modern jurisprudence, the Elizabethan jurist Sir Edward Coke. Again, it is worth scrutinising how Coke first presented this legal approach to the land; in essence, it depends on a set of presumptions of human habitation within the material environment that we can also see establishing the modern foundations of dwelling, and of designing the land in which that dwelling occurs (with land that can be built upon accorded a special privilege): 

“Terra, in the legal signification comprehended any ground, soil, or earth whatsoever; as meadows, pastures, wood, moores, waters, marshes, furses and heath. Terra est Nomen generalissimum, et comprehendit omnes species terra; but properly terra dicitur a terendo, quia vomere teritur; and anciently it was written with a single r; and in that sense it includeth whatsoever may be plowed; and is all one with arvum ab arando. It legally includeth also all castles, houses, and other buildings: for castles, houses, &c. consist upon two things, viz. land or ground, as the foundation or structure therewith, so that in passing the land or ground, the structure or building thereupon passeth therewith. Land is anciently called Fleth; but land builded on is more worthy than other land, because it is for the habitation of man, and in that respect hath the precedency to be demanded in the first place in a Præcipe, as hereafter shall be said.”[22] 

It is habitation that conveys rights; that is the source of law and governance over land and the expropriation of its material resources: 

“And therefore this element of earth is preferred before the other elements: first and principally, because it is for the habitation and resting-place of man; for man cannot rest in any of the other elements, neither in the water, aire, or fire. For as the heavens are the habitation of Almightie God, so the earth hath he appointed as the suburbs of heaven to be the habitation of man; Cœlum cœli domino, terram autem dedit filiis hominum. All the whole heavens are the Lord’s, the earth hath he given to the children of men. Besides, every thing, as it serveth more immediately or more meerly for the food and use of man (as it shall be said hereafter), hath the precedent dignity before any other. And this doth the earth, for out of the earth cometh man’s food, and bread that strengthens man’s heart, confirmat cor hominis, and wine that gladdeth the heart of man, and oyle that makes him a cheerful countenance; and therefore terra olim Ops mater dicta est, quia omnia hac opus habent ad vivendum. And the Divine agreeth herewith for he saith, Patrium tibi & nutricem, & matrem, & mensam, & domum posuit terram Deus sed & sepulchrum tibi hanc eandem dedit. Also, the waters that yeeld fish for the food and sustenance of man are not by that name demandable in a Præcipe.”[23] 

The ownership and control of the surface of the land is then expanded into a fully three-dimensional envelope of property, governance and control: 

“… but the land whereupon the water floweth or standeth is demandable (as for example) viginti acr’ terræ aqua coopert’, and besides, for the earth doth furnish man with many other necessaries for his use, as it is replenished with hidden treasures; namely gold, silver, brasse, iron, tynne, leade, and other metals, and also with a great variety of precious stones, and many other things for profit, ornament, and pleasure. And lastly, the earth hath in law a great extent upwards, not only of water, as hath been said, but of ayre and all other things even up to the heaven; for cujus est solum ejus est usque ad coelum, as it is holden.”[24] 

Although the subsurface is not explicitly mentioned in the Latin dictum, it has always been the presumption that the rights of land extend downwards as well as upwards, as is made plain by Coke’s express discussion of mining (an increasingly important economic activity in both Elizabethan and Jacobean England, and in the expanding global conquests of the European empires). 

Figure 3 – Sydney basin soil sampling. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

Less than a century later, the importance of subsuming any disorderly expressions of nature on landed property – a theory of landscape design that had been developing across the course of the seventeenth century – was famously crystallised in Joseph Addison’s influential essay on the landscape garden, “On the Pleasures of the Imagination”;[25] property and design fused in his dictum: “a Man might make a pretty Landskip of his own Possessions.”[26] Over subsequent centuries, and especially in the context of European colonialism, it became almost an imperative that land be improved by “art” in order to justify its expropriation and its incorporation into a totalising world economic system.[27] As Sir William Blackstone, Coke’s heir as juridical systems-builder and the most influential legal systematiser from the end of the eighteenth century onwards, wrote: “The Earth, and all things herein, are the general property of mankind, exclusive of other beings, from the immediate gift of the creator.”[28] 

Blackstone himself was a great architectural enthusiast and, indeed, an architectural critic and draftsperson, author of An Abridgment of Architecture (1743) and Elements of Architecture (1746-7).[29] In classical architecture, Blackstone saw the highest expression of a system of universal laws that surpassed the disorderliness of the natural world. Here, his model was the science of mathematics, not the natural sciences; it was the former that gave architecture access to a plane of being beyond the worldly, the realm of Beauty and Nobility, “the flower and crown of all sciences mathematical”. Classical architecture provided Blackstone with his model for his efforts to renovate and remodel English common law, to rescue it from its fate, “like other venerable edifices of antiquity, which rash and unexperienced workmen have ventured to new-dress and refine, with all the rage of modern improvement … it’s [sic] symmetry … destroyed, it’s proportions distorted, and it’s majestic simplicity exchanged for specious embellishments and fantastic novelties”.[30] Just as the architect must work to restore symmetry, proportion, and majestic simplicity to a grand manor fallen into decay, “mankind [sic]” was duty-bound to elevate “his [sic]” property of the entire earth through the improvements of art and science. Blackstone’s distaste for “modern improvement” did not preclude him from writing elsewhere of the inherited law as “an old Gothic castle” that needed to be “but fitted up for a modern inhabitant … converted into rooms of convenience, … chearful [sic] and commodious”.[31] 

Figure 4 – Infrastructure for subsurface ecologies. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

The totalising thrust of Western property law as a law of land has led some designers focused on the environment and ecology – unlike the theorists of earth jurisprudence and wild law – to seek spaces outside of the law itself, rather than to attempt to find space within it. The landscape architect Gilles Clément has deliberately sought out land literally outside the jurisdiction and operations of the law and its various systems of governance and administration. His notion of le tiers paysage is about land:  

“… forgotten by the cartographer, neglected by politics, undefined spaces, devoid of function that are difficult to name; an ensemble … located on the margins. On the edge of the woods, along the roads and rivers, in the forgotten corners of the culture, in the places where machines do not go. It covers areas of modest size, scattered like the lost corners of a field; unitary and vast like peat bogs, moors and wastelands resulting from recent abandonment. 

There is no similarity of form between these fragments of landscape. They have only one thing in common: they all provide a refuge for diversity. Everywhere else, diversity is driven out. 

This justifies bringing them together under a single term. I propose ‘Third Landscape’ …”[32] 

The passage is striking, especially when we compare it to Coke, whose aim was to bring those very landscapes – “meadows, pastures, wood, moores, waters, marshes, furses and heath” – within the remit of the law. For Clément, it is the very fact that the latter types of landscape, especially, have been so difficult to govern, to bring within law’s jurisdictional ambit, that makes them such rich sources of biodiversity – nature’s outlaw territories. It is these territories that ought to provide a model for designers (and his preferred model for the designer in question is not the architect or landscape architect, but the gardener, who “creates a landscape by following it over time, using horticultural and environmental maintenance techniques. … But above all, it is about life”).[33] 

Figure 5 – Infrastructure for subterranean biodiversity. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

But if nature itself has rights, if it is recognised as having agency and self-determination in the manner put forward by the earth jurisprudence and wild law movements, then designers may not need to – and, increasingly, cannot – escape into a third landscape. As other theorists have pointed out, nature is always part of the social. Beyond the well-known position of Bruno Latour in We Have Never Been Modern, other theorists have noted the ways in which “the entities that compose arrangements have a physiochemical composition and are, accordingly, part of the greater physiochemical stratum in which material entities are linked”.[34] In other words, society and culture have a “physicality”, and a large part of that physicality is defined by the bio- and physiochemical processes of “nature”. In this sense, even anthropogenic climate change is a kind of revenge of nature, whose processes have turned against us. In a more everyday sense, “The properties of wood, for instance, lay down sequences of actions that must be followed if trees are to be felled, axe handles produced, animals clubbed, houses built, and paper produced”.[35] 

There is no escaping our material realities and the dynamics they define. The question is how to enter into and think of ways to reconfigure those “sequences of actions” – in other words, how to design. Material properties are not absolutely deterministic. It is not just a matter of asking the brick, à la Louis Kahn.[36] Instead, the design possibilities that come from the rights of nature simply begin to open up the field for a set of political claims about the appropriate status and interrelationship between humans, societies and the non-human environment, by codifying those claims in a form that other models of organising human activities are forced to recognise. As in debates over the political, social, economic and cultural rights of humans, the language of rights is simply part of an ongoing political contestation over claims and obligations.[37] We might begin, for example, by using the very same premises as Coke, considering what design might mean in the realm of terra itself – “ground, soil, or earth whatsoever” – if that very ground also had self-determining rights, and could govern itself, irrespective of what our “designs” upon it might be. A recent piece in Nature Climate Change draws attention to the extent to which subterranean ecosystems have generally been overlooked in biodiversity and climate change mitigation agendas.[38] This zone, “likely the most widespread non-marine environment on Earth,” remains largely a terra incognita. In cities, the upper layers of the urban soil (the “A and B horizons”) are highly “disturbed” and often “depauperated”, if not directly contaminated with anthropogenic chemicals and other wastes.[39] Various projects have drawn attention to the task of recovering urban and other post-anthropogenic soils.[40] But an equally important shift may simply be in opening up the legal definition of “land” and the cluster of rights and obligations that have been constructed around it. If, instead of a conceptual tabula rasa simply to be built upon, we came to recognise it as the lively subterranean biome it in fact is, and if that biome might be recognised as having rights and claims of its own, then design might be forced to take a very different turn. Even the most vacant of plots will come to seem not so vacant, after all. 

References 

[1] Admittedly, this assertion is phrased in a universalist register. The reality is that what is being referred to is Western, and, latterly, international, legal constructs, that have provided the dominant model for legal thinking across almost all jurisdictions that form the basis for land law in the early twenty-first century. 

[2] C. Kauffman and P. Martin, The Politics of Rights of Nature: Strategies for Building a More Sustainable Future (Cambridge, MA: The MIT Press, 2021), 4. 

[3] D. Boyd, The Rights of Nature: A Legal Revolution That Could Save the World, (Toronto: ECW Press, 2017), xxv. 

[4] Ibid, xxv. 

[5] Quoted in A. Forty, Words and Buildings: A Vocabulary of Modern Architecture (London: Thames & Hudson, 2000), 220. 

[6] Ibid, 220. 

[7] O. Verkaaik, “Creativity and Controversy in a New Anthropology of Buildings”, Ethnography 17(1) (2015), 135–143. Recent work in anthropology has explicitly challenged this premise, as in the work of Tim Ingold discussed by Verkaaik: T. Ingold, “Building, Dwelling, Living: How Animals and People Make Themselves at Home in the World”, 172–188. In Tim Ingold, ed., The Perception of the Environment: Essays on Livelihood, Dwelling and Skill (London: Routledge, 2000). 

[8] M. Laugier, An Essay on Architecture, trans. Wolfgang Herrmann and Anni Herrmann (Los Angeles: Hennessey & Ingalls, 1977). 

[9] S. Banner, “Why Terra Nullius? Anthropology and Property Law in Early Australia”, Law and History Review, 23(1) (2005), 95–132 at 107. 

[10] Te Awa Tupua (Whanganui River Claims Settlement) Act 2017 (NZ). 

[11] Mohd Salim v State of Uttarakhand & others, WPPIL 126/2014 (High Court of Uttarakhand), 2017. 

[12] Lalit Miglani v State of Uttarakhand & others, WPPIL 140/2015 (High Court of Uttarakhand), 2017. 

[13] Colorado River Ecosystem v State of Colorado, 1:17-cv-02316 (U.S. Colorado Federal Court), 2017. 

[14] Demanda Generaciones Futuras v Minambiente, STC4360-2018 (Supreme Court of Colombia), 2018. 

[15] Asociación Civil por la Justicia Ambiental v. Province of Entre Ríos, et al., (Supreme Court of Argentina), 2020. 

[16] C. Kauffman and P. Martin, The Politics of Rights of Nature: Strategies for Building a More Sustainable Future (Cambridge, MA: The MIT Press, 2021), 2. 

[17] As represented, especially, in the work of T. Berry, “Rights of Earth: We Need a New Legal Framework Which Recognises the Rights of All Living Beings,” 227–229. P. Burdon, ed., Exploring Wild Law: The Philosophy of Earth Jurisprudence (Kent Town, South Australia: Wakefield Press, 2011); C. Cullinan, Wild Law: A Manifesto for Earth Justice, 2nd ed. (Totnes, UK: Green Press, 2011); and P. Burdon, Earth Jurisprudence: Private Property and the Environment (London: Routledge, 2014). 

[18] C. Kauffman and P. Martin, The Politics of Rights of Nature: Strategies for Building a More Sustainable Future (Cambridge, MA: The MIT Press, 2021), 4–5. 

[19] D. Boyd, The Rights of Nature: A Legal Revolution That Could Save the World, (Toronto: ECW Press, 2017), xxii–xxiii. 

[20] Jackson Municipal Airport Authority v. Evans, 191 So. 2d 126, 128 (Miss. 1966). 

[21] Bury v Pope (1586) Cro Eliz 118; 78 ER 375. 

[22] Coke on Littleton (1628–1644), 4a. 

[23] Ibid. 

[24] Ibid. 

[25] J. Addison, Spectator, III, Nos 411–421 (21 June–3 July 1712), 535. 

[26] Ibid. 

[27] For example, the first landscape designer in Australia, Thomas Shepherd, advocated for the use of English “landscape gardening” principles to be used to improve Crown land in order to attract foreign capital investment: see T. Shepherd, Lectures on Landscape Gardening in Australia (Sydney: William M’Garvie, 1836). 

[28] W. Blackstone, Commentaries on the Laws of England in Four Books, Book III (Philadelphia: J.B. Lippincott Company, 1893; orig pub 1765), 2. 

[29] C. Matthews, “Architecture and Polite Culture in Eighteenth-Century England: Blackstone’s Architectural Manuscripts” (unpublished dissertation, School of History and Politics, University of Adelaide, 2007); W. Prest, “Blackstone as Architect: Constructing the Commentaries,” Yale Journal of Law & the Humanities, 15(1) (2003), 103–133. 

[30] W. Blackstone, Commentaries on the Laws of England in Four Books, Book I (Philadelphia: J.B. Lippincott Company, 1893; orig pub 1765), 8. 

[31] Ibid, Book III, 268. 

[32] G. Clément, Manifeste du tiers paysage (Paris: Éditions du commun, 2016), 14. 

[33] G. Clément, Gardens, Landscape and Nature’s Genius, trans Elzélina Van Melle (Risskov, Denmark: IKAROS Press, 2020), 19–20. 

[34] T. Schatzki, “Nature and Technology in History,” History and Theory 42(4) (2003), 88–89. 

[35] Ibid, 89. 

[36] Quoted in S. Turkle, Simulation and its Discontents (Cambridge, MA: The MIT Press, 2009), 86 n 4. 

[37] Marie-Bénédicte Dembour, “Human Rights Talk and Anthropological Ambivalence: The Particular Contexts of Universal Claims,” 17–32. Olivia Harris, ed., Inside and Outside the Law: Anthropological Studies of Authority and Ambiguity (London: Routledge, 1996). 

[38] D. Sánchez-Fernández, D. Galassi, J. Wynne, P. Cardoso and S. Mammola, “Don’t Forget Subterranean Ecosystems in Climate Change Agendas,” Nature Climate Change 11 (2021), 458–459. 

[39] R. Forman, Urban Ecology: Science of Cities (Cambridge, UK: Cambridge University Press, 2014), 91–93. 

[40] See, for example, the projects of the landscape architect Julie Bargmann and her D.I.R.T. studio. 

Figure 1 – Landscapes of Exploitation, Kibali gold mines, Democratic Republic of the Congo.
MIGRATING LANDSCAPES 
ALGORITHMIC VISION, MEDIA ECOLOGIES, MIGRATING LANDSCAPES, REPRESENTATION, TOKENISATION
Tanya Mangion, Michiel Helbig, Corneel Cannaerts

tanyamangion95@gmail.com
Add to Issue
Read Article: 3109 Words

MEDIA ECOLOGIES 

Our collective consciousness of climate change is an accomplishment of the vast apparatus of computational technologies for capturing, processing and visualising increasing amounts of data produced by earth observation technologies, satellite imaging, and remote sensing. These technologies establish novel ways of sensing and understanding our world, extending human visual cultures in scale, time and spectral capacities. The gathered data is synthesised in increasingly complex models and simulations that afford highly dynamic visualisations of climate events as they unfold and envision near future scenarios. The images resulting from this technical vision and cognition render the artificial abstraction comprehensible and are essential in developing the notion of climate change and attempts to mitigate its effects.[1]  

The artificial abstraction introduced through this planetary apparatus is reflected in the naming of the Anthropocene, as the contemporary geological epoch, prompted by humanity’s lasting impact on our planet.[2] The naming has been criticised for its anthropocentrism, i.e. putting the human once again at the centre, and for depoliticising and de-territorialising climate change, casting the whole of humanity as equally responsible for environmental crises, disregarding substantial regional and societal differences. Several alternatives have been formulated in critique of the term: Capitalocene,[3] highlighting the devastating role of capitalism in climate change, or Plantationocene,[4] stressing the ongoing inequalities resulting from colonialism and slave labour. While acknowledging these terms, Donna Haraway proposes the term Chthulucene, introducing multispecies stories and practices, mythologies, and tentacular narratives to avoid anthropocentrism and reductionism, providing room for more than human agency.[5] 

The framing of climate crises within human-centred, depoliticised, technocratic discourse is also strongly critiqued from cultural practices in the arts, design and media.[6] The top-down, analytical point of view afforded through scientific observation, visualisation and prediction is increasingly being complemented by documentary, eyewitness and on-the-ground reports of the impact of climate change. Images captured through the plethora of cell phone and other cameras, data logging, image sharing and social media produce a constantly updating stream of images and data on climate change. Digital media ecologies, the assemblages of hardware, software and content of digital media within our environment, play an important role in addressing climate change.[7] Whether it is through the repurposing of the scientific apparatus and technologies for observation and visualisation or the ubiquitous use of personal devices and social media, computational images have become significant cultural media artefacts that can be used to develop more narrative and fictional imaginaries of environmental crises. 

Landscapes are defined as both natural and human-made environments, as well as their depiction in media such as painting, photography and film. Even as environments, landscapes are a physical and multi-sensory medium in which cultural meanings and values are encoded. Landscapes operate through the visual; i.e. a landscape is what can be seen from a certain vantage point, and implies an active spectator. As a verb, landscaping indicates acting on the environment, through manipulating its material features, erasing or adding elements. Both as environment and as media, landscapes are inextricably entangled with capital and power, whether exploited through extracting resources, consumed as an experience through tourism and real estate, or mediated and commodified as an artefact. In Landscape and Power, Mitchell treats landscape as a medium: an area of land is only considered a landscape from the moment one perceives it as such, through attached meanings, as an artificial-cultural, political and social construct.[8] The recent climate crises and the emergence of digital media ecologies require us to rethink this implicit human-centred notion of landscape and extend it to include non-human, animal and machine agencies.[9] As such, landscapes are an interesting lens through which to look at the blurring between the natural and the cultural, human and non-human agency, and the mediated and bodily experiences of environments.  

Figure 1 – Landscapes of Exploitation, Kibali gold mines, Democratic Republic of the Congo. 

MIGRATING LANDSCAPES 

The dissertation project “Migrating Landscapes” by Tanya Mangion is framed within the ideas outlined above; it explores landscapes as both environment and media, inextricably entangled with capital and power.[10] The project speculates on landscapes gaining agency through a decentralised autonomous organisation (DAO),[11] which can interact on behalf of the landscape with human agencies – individuals, governments, legal entities, financial systems… Once established, the DAO runs on the blockchain and can act without human interference, as regulated through smart contracts. Governance of the DAO is exercised through tokens, which fractionalise stewardship but whose holders cannot act against the interest of the landscape as encoded by the DAO; a sketch of this governance logic follows below. 
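As a purely illustrative sketch of that logic – the project publishes no implementation, and a real DAO would encode these rules in on-chain smart contracts rather than off-chain code – fractional stewardship and a veto protecting the landscape’s encoded interest might be modelled in Python as follows; every name and rule here is hypothetical:

```python
# Minimal, illustrative model of the DAO governance described above.
# All names and rules are hypothetical; a real DAO would encode them in
# on-chain smart contracts rather than off-chain Python.

PROHIBITED_ACTIONS = {"extract_minerals", "sell_land", "clear_vegetation"}

class LandscapeDAO:
    def __init__(self, total_tokens):
        self.total_tokens = total_tokens   # fractional stewardship supply
        self.holders = {}                  # holder -> token count

    def issue(self, holder, amount):
        """Distribute stewardship tokens, never exceeding total supply."""
        assert sum(self.holders.values()) + amount <= self.total_tokens
        self.holders[holder] = self.holders.get(holder, 0) + amount

    def propose(self, action, votes_for):
        """Accept a proposal only if it passes the encoded-interest check
        and then a simple token-weighted majority vote."""
        if action in PROHIBITED_ACTIONS:
            return False  # the landscape's interest overrides any majority
        weight = sum(min(v, self.holders.get(h, 0))
                     for h, v in votes_for.items())
        return weight > self.total_tokens / 2

dao = LandscapeDAO(total_tokens=1000)
dao.issue("steward_a", 600)
dao.issue("steward_b", 400)
print(dao.propose("extract_minerals", {"steward_a": 600}))  # False: vetoed
print(dao.propose("fund_rewilding", {"steward_a": 600}))    # True: majority
```

The point of the sketch is the ordering of the checks: the landscape’s encoded interest is tested before any token-weighted majority is counted, so no coalition of holders can authorise a prohibited action.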

This speculative scenario questions what role architecture could play when engaged by a DAO that represents the interests of exploited landscapes. How do architects design for this non-human agency? What strategies could architects develop to engage landscapes beyond the habitual ways of looking at them as resources to be excavated, sites to be developed? What novel languages, tools and protocols would architects need to develop in order to take up this role? Rather than attempting to find definite answers to these questions, they instead form the drivers for developing a speculative design project.  

The architectural toolbox seems ill-equipped to deal with the large timeframes and scales that migrating landscapes operate on. In order to begin to address these questions, we might extend the architectural toolbox with technologies such as earth observation, satellite imagery, data mining, sensor arrays… The role of the architect could be to repurpose the high-tech apparatus and data from scientific observations of climate change, and turn them into speculative design narratives and imaginaries on migrating landscapes. Using media ecology and algorithmic vision, the project highlights issues and landscapes that deserve attention, and launches a call to architects who wish to engage with it. Data collection from available data sets including time-based, satellite, terrain and eyewitness data could be used to rebuild a cohesive image of exploited landscapes, using narrative media combined with conventional architectural processes. Injecting the image of the landscapes back into media ecology would generate a feedback loop that would bring about changes in human behaviour in regard to the landscape both as media and environment, the latter occurring over a longer time frame. 

The speculative design project explores this potential through different aspects: starting with the use of algorithmic vision to analyse landscapes, then giving an overview of the various phases of the development of a DAO, exploring a tokenisation shift from a fungible to a non-fungible valuation of landscapes, representation of landscapes in media ecology and demonstrating how architecture could be used to engage an audience. 

ALGORITHMIC VISION 

Computational visual tools allow architects novel ways of understanding, mapping and visualising landscapes. The combination of multiple data sets provides a more densely mediated version of a landscape. Satellites can pick up the image of a landscape and, when combined with terrain data, mapping platforms provide a data-rich and layered representation of the landscape. While mapping services, like Google Maps or GIS platforms, are presented as neutral media, they are entangled with commercial, military and political interests,[12] not only in the technologies used for capturing data but also in their visualisation – as is demonstrated by the absence of data for certain territories, differences in resolution, or the deliberate blurring of specific sites.[13] 

Satellite imagery is not limited to capturing bands of the spectrum visible to human eyes; by combining several bands it can provide insights into vegetation, elevation, refraction, moisture, temperature… The resulting multi-band images can be considered synthetic artificial artefacts, as they are assembled by algorithms. They remain largely invisible to humans, and are reduced to mediating information and data flows, as they “do not represent an object, but rather are part of an operation”.[14] Depending on the capturing sensor, information is sampled at discrete intervals, introducing resolutions ranging from a hundred metres to fifteen centimetres. Depending on the number of satellites and their operation, the images have a certain refresh rate, giving us the ability to revisit time progressions within the landscapes. These freeze-framed images of landscapes provide us with information, or proof, of interventions that occurred within the territory over time.[15] A typical band combination of this kind is sketched below, after Figure 2. 

Figure 2 – Satellite bands from Sentinel Application Platform (SNAP), B8, infrared, natural colour. 
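To make such band arithmetic concrete, the widely used normalised difference vegetation index (NDVI) contrasts a near-infrared band (B8 on Sentinel-2, as in Figure 2) with the red band (B4). A minimal Python sketch, assuming both bands have been exported as single-band GeoTIFFs (the file names are placeholders), might read:

```python
# Sketch: computing NDVI from two Sentinel-2 bands exported as GeoTIFFs.
# File names are placeholders; requires the numpy and rasterio packages.
import numpy as np
import rasterio

with rasterio.open("S2_B04_red.tif") as red_src, \
     rasterio.open("S2_B08_nir.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; higher values indicate
# denser, healthier vegetation. Guard against division by zero.
ndvi = np.divide(nir - red, nir + red,
                 out=np.zeros_like(nir), where=(nir + red) != 0)
print("mean NDVI:", float(ndvi.mean()))
```

Comparing such an index across successive acquisitions is one way the “refresh rate” of a satellite constellation becomes legible as a record of interventions in a territory.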

The landscapes in the project were the result of human-centric actions like resource extraction, as demonstrated at one of the largest gold mines in the Democratic Republic of the Congo. In addition to satellite images, a virtual field trip of sorts allowed a journey through the data-sphere of the landscapes concerned. This led to extraction performed on different levels: data extraction from photo-sharing platforms was used to investigate the image of the landscapes within the limitations of their geolocation, and another data extraction was performed to explore the fungible asset within the landscape, resulting in a plethora of data exploring the appropriation of the asset within our culture. Through a process of data scraping, deduction and fragmentation, a series of reconstructions of landscapes were produced during the project. These reconstructed landscapes link material flows from extraction to consumption – of, for instance, gold – and are published again through social media in an attempt to reveal the material sources of familiar consumer objects.[16] Gold was a remarkable mineral to begin with, due to its use as a reserve asset – keeping economies stable by functioning as a hedge against inflation – as well as its significance in history and popular culture.[17] 
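One plausible form such a geolocated extraction could take – the project’s exact pipeline is not documented here – is a query against a public photo-sharing API; Flickr’s flickr.photos.search method, for instance, accepts a latitude, longitude and search radius, constraining results to the geolocation of the landscape:

```python
# Sketch: querying geotagged photos around the Kibali mines via the public
# Flickr REST API (an illustrative choice, not the project's documented
# source). Requires a Flickr API key; coordinates are approximate.
import requests

params = {
    "method": "flickr.photos.search",
    "api_key": "YOUR_API_KEY",      # placeholder
    "lat": 3.05, "lon": 29.56,      # approx. Kibali, DR Congo
    "radius": 25,                   # search radius in kilometres
    "extras": "geo,date_taken,url_m",
    "format": "json",
    "nojsoncallback": 1,
}
resp = requests.get("https://www.flickr.com/services/rest/", params=params)
for photo in resp.json()["photos"]["photo"]:
    print(photo["datetaken"], photo.get("url_m"))
```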

Figure 3 – Zoomable map of the Kibali gold mines, Democratic Republic of the Congo.

TOKENISATION 

When excavating landscapes for minerals, they are valued for their interchangeable or fungible material properties, for example the amount of gold they contain. Once extracted, each gram of gold is valued the same, regardless of where on the planet it has been mined. Whereas if one goes for a hike, for instance, or looks at a landscape painting or photograph, specific features of the landscape – slopes or mountain peaks – provide unique experiences; i.e., they are not interchangeable, they are non-fungible. In both these scenarios, the fungible exploitation of landscapes for resource extraction and the non-fungible experience of landscapes, mediated or otherwise, the landscape is passive and does not have agency. 

Figure 4 – Tokenisation of the landscape through mesh triangulation. 

The project proposes tokenisation of the non-fungible aspects of the landscape, controlled by a DAO, allowing collective stewardship of the landscape. This is to be achieved through appropriating tools from earth observation to build a mesh representation of the landscape. Each triangle of the mesh represents a unique, non-fungible fractional token of the landscape – in contrast to a voxel representation, which could be seen as representing the fungible exploitation of the landscape; a sketch of this triangle-to-token mapping follows below. This data allows an understanding, on a large scale, of fluxes within the landscape, and detects changes invisible to the human eye. Additionally, this data also offers the possibility to autonomise landscapes as DAO systems and thereby give them agency. The DAO operates transparently and independently of human intervention, including that of its creators. Based on a collection of smart contracts running on blockchain technology, it has the ability to garner capital, with automation at its centre and humans at the edges to manage, protect and promote its agency.[18] 
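As a minimal sketch of that mapping – the toy mesh and identifier scheme below are illustrative assumptions, not the project’s method – each triangle can be given a deterministic identifier by hashing its vertex coordinates, and that identifier could then be minted as a non-fungible token:

```python
# Sketch: deriving deterministic, non-fungible token IDs from a terrain
# mesh, one per triangle. The toy mesh and ID scheme are illustrative.
import hashlib

# Vertices as (x, y, z) coordinates in metres; faces as vertex indices.
vertices = [(0.0, 0.0, 412.0), (30.0, 0.0, 415.5),
            (0.0, 30.0, 418.2), (30.0, 30.0, 421.0)]
faces = [(0, 1, 2), (1, 3, 2)]

def triangle_token_id(face):
    """Hash the triangle's vertex coordinates into a stable token ID."""
    coords = ",".join(f"{c:.3f}" for i in face for c in vertices[i])
    return hashlib.sha256(coords.encode()).hexdigest()

for face in faces:
    print(face, "->", triangle_token_id(face)[:16], "...")
```

Because each identifier depends only on the triangle’s geometry, the same parcel of terrain always maps to the same token – unlike a voxel grid, in which any unit of equal volume is interchangeable, i.e. fungible.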

Figure 5 – Voxelisation and triangulation representing fungible and non-fungible discretisation of the landscape.

REPRESENTATION 

There is a role for architects here: to be engaged to map and visualise the DAO’s non-fungible entities. The architect has the tools to change the representation of landscapes, raising awareness of environmental evolution, generating behavioural changes and, over a longer timescale, impacting the environment itself. However, representation alone is not enough to communicate the sheer scale of these landscapes; the project proposes to map the exploited landscapes at the scale of urban environments, and to build interventions in the form of pavilions to raise awareness of the landscapes. This serves to communicate the scale of material displacement of exploited landscapes such as mines within urban environments, which are commonly the final destination for material flows, creating conversation and the possibility of engagement between the DAO and the human, the latter generally being distanced from the reality of material displacement. This act brings the idea of tokenised landscapes to large audiences and allows for human engagement and participation within the DAO as shareholders.  

Figure 6 – 1:1 Visual representation of a physical intervention of part of the Kibali Gold mines within the urban environment of Ghent, Belgium. 

The role of the architect engaged by the DAO is to map and visualise the landscape’s assets, fractionalising it using algorithmic visual tools, and using architectural representations that can be minted as non-fungible tokens. The presence of these tokens on social media and through interventions within physical public spaces in cities aims, in the short term, to raise awareness of the vast scale of these landscapes of exploitation, and to change behaviours and allow for engagement and participation within the DAO as token holders. In the long term, this will start to affect the physical conditions of these landscapes themselves, as they no longer rely on selling their fungible, non-renewable material assets. This could lead to rewilding and restoring of vegetation – and potentially to their being traded as carbon sinks.[19] 

Although token holders should preserve the non-fungibility of the landscapes – returning to the argument that nature is ultimately defeated by its utility – the next step would be to remove the human from the system completely, merging the biosphere and technosphere. There remains the risk of a “51% attack”, in which a majority of shareholders could agree to overturn an agreement within the smart contract. To prevent this, the system could opt for full autonomy, which it could achieve over a longer timescale. Garnering capital through non-fungible tokens – of its image – could also be a possibility, and would potentially affect and accelerate the timescale of the process.  

Figure 7 – Leveraging social media to share images of the tokenised landscape.

DISCUSSION  

Migrating Landscapes can be viewed as a concept that traces material flows through the use of algorithmic technologies not typically used within architecture, exploring how landscapes, as non-human agents, can become autonomous. In the case of this dissertation project, the framework of a DAO was used to transform landscapes as media into non-fungible tokens, allowing the landscapes to stop being exploited and gain agency. What other technologies or tools could architects use to create compelling visual narratives, to engage with audiences and enable autonomy for non-human agents? Within the context of media ecology and algorithmic vision, this was one response; considering the plethora of devices and data-gathering techniques that already exist and are still being created, autonomy for non-humans becomes ever more plausible. 

The project does not propose a techno-solutionist approach, in which we can engineer ourselves out of the wicked problems caused by climate change. Rather, it proposes to use these technologies for their compelling visual, imaginary and narrative qualities, to make migrating landscapes and their non-human agency more relatable. The DAO as a system ultimately acts as a driving force for landscapes to “migrate”, becoming new entities and modifying our relationships and attitudes towards them. The system allows these otherwise unseen landscapes both to establish a presence within our media ecologies and to become located within our consciousness in this contemporary age. The changes it would instil are yet to be discovered. 

Acknowledgement

This paper reflects on the dissertation project “Migrating Landscapes” by Tanya Mangion that was developed in response to the studio brief “Algorithmic Vision: Architecture and Media Ecologies” of Fieldstation Studio at KU Leuven Faculty of Architecture. The project speculates on landscapes gaining agency through a decentralised autonomous organisation that can interact on behalf of the landscape with human agencies. Through reappropriating technologies for algorithmic vision, landscapes could turn their unique features into non-fungible tokens, allowing them to stop being exploited and gain agency.

Fieldstationstudio.org | https://www.instagram.com/migrating.landscapes/ 

References 

[1] B. Bratton, The Terraforming (Moscow: Strelka, 2019), 19.

[2] P. Crutzen and E. Stoermer, “The ‘Anthropocene’”, Global Change Newsletter, International Geosphere-Biosphere Program Newsletter, no. 41 (May 2000), 17–18; Crutzen, “Geology of Mankind”, Nature 415 (2002), 23; J. Zalasiewicz et al., “Are We Now Living in the Anthropocene?” GSA (Geophysical Society of America) Today vol. 18, no. 2 (2008), 4–8. 

[3] The origin of this term is not entirely clear, but is discussed at length here: https://www.e-flux.com/journal/75/67125/tentacular-thinking-anthropocene-capitalocene-chthulucene.

[4] J. Davis, A. Moulton, L. Van Sant, B. Williams, “Anthropocene, Capitalocene, … Plantationocene?: A Manifesto for Ecological Justice in an Age of Global Crises”, Geography Compass 13(5) (2019). 

[5] D. Haraway, “Tentacular Thinking: Anthropocene, Capitalocene, Chthulucene”, Eflux Journal, Issue 75, September 2016. 

[6] T. J. Demos, Against the Anthropocene: Visual Culture and Environment Today (MIT Press, 2017). 

[7] S. Taffel, Digital Media Ecologies: Entanglements of Content, Code and Hardware (Bloomsbury Academic, 2019). 

[8] W. J. T. Mitchell, Landscape and Power (Chicago: University of Chicago Press, 1994), 15. 

[9] L. Young, Machine Landscapes: Architectures of the Post-Anthropocene (London: Wiley, 2019). 

[10] See http://www.fieldstationstudio.org/STUDIO/ALGORITHMIC_VISION.

[11] The notion and implementation of a DAO was published by Christoph Jentzsch in the DAO white paper in 2016, see https://blog.slock.it/the-history-of-the-dao-and-lessons-learned-d06740f8cfa5.

[12] These dimensions were discussed during the Vertical Atlas – world.orbit at the Nieuw Instituut Rotterdam in 2020, see https://verticalatlas.hetnieuweinstituut.nl/en/activities/vertical-atlas-worldorbit.

[13] “Resolution Frontier” by Besler and Sons, 2018, see https://www.beslerandsons.com/projects/resolution-frontier.

[14] E. Thomas, H. Farocki, Working on the Sightlines (Amsterdam: Amsterdam University Press, 2004). 

[15] A toolkit for satellite imagery has been compiled by Andrei Bocin Dumitriu, for the Vertical Atlas – world.orbit project, see https://brainmill.wixsite.com/worldorbit.

[16]  K. Davies, L. Young, Never Never Lands: Unknown Fields (London: AA publishing, 2016).

[17] In Extraction Models, together with Weronika Gajda, the exploration of gold as a resource was taken further within the context of New York City’s federal reserve, see https://www.instagram.com/extraction.models.

[18] This idea was developed by terra0 in: P. Seidler, P. Kolling, M. Hampshire, “Can an augmented forest own and utilise itself?”, white paper, Berlin University of the Arts, Germany, May 2016, https://terra0.org.

[19] There are several projects that propose NFTs as carbon sinks, see https://carbonsink-nfts.com/ and https://nftree.org.

Fig. 1. The Geoscope within the Museum of the Future’s Observatory, Certain Measures, 2022.
World Pictures and Room-Worlds
AI Diaries, Control Rooms, Fictions, Room Worlds
Andrew Witt

awitt@gsd.harvard.edu
Add to Issue
Read Article: 3823 Words

On December 24, 1968, the three-person crew of lunar spacecraft Apollo 8 became the first humans to witness a shimmering Earth ascend over the barren surface of the moon with their own eyes. The photographs that they took of that “Earthrise” electrified humanity, activating a sense of collective destiny not only between human nations but with Earth itself.[1] This vivid new “world picture” was both more total and more visceral than earlier terrestrial abstractions like globes, atlases or maps. Earthrise was an eidetic portrait of a living, breathing world, an amalgam of the geologic, climatic and biologic, taken from outside the world itself. Historian Benjamin Lazier characterised this meta-Copernican moment as inaugurating an entire “Earthrise era”, a time when the image of a whole and delicate Earth could “organize a myriad of political, moral, scientific, and commercial imaginations”.[2]

In many ways, Apollo’s Earth image was a quintessential product of the space age. Not only did its achievement rely on modern space flight, it played out against the backdrop of global conflicts like the Cold War that exploited space as a proxy battleground. Of course, the space age coincided with the information age, and these two cultural tendencies arguably offered divergent ways to picture the world. If the Apollo photos captured a single static vision of a unified Earth, the information age countertendency was to federate disparate fragments of text, diagrams, images, and video into information-rich dynamic media experiences. Experimental media environments brought visitors inside a closed world of light and image projections, immersing the visitor in choreographed flows of electronic stimuli. The constructed worlds presented within such media environments might resemble, reflect, or subvert the world outside them. Projects like filmmaker Stan VanDerBeek’s Movie Drome or architect Ken Isaacs’ Knowledge Box constructed total media spaces with the visitor at the centre, ensconced in walls saturated by film and slide projections.[3] They effectively constructed mediated worlds within the confines of a single room. Even earlier forays into the mediatic experience of information – notably the Eames Office’s Ovoid Theatre at New York’s 1964 World’s Fair – hinted that the information age would be experienced through choreographed matrices of endless and heterogenous image streams. The spatial array of multiple images induced a relational ordering and systemic framework among them. In these media environments, the world picture was not a single image but an overlapping and federated mosaic, a reality implied through juxtaposition and assembled in the technically-calibrated space of the room-world.

Figure 2 – The Earthrise photograph, taken by Bill Anders in December 1968, from Apollo 8. Image courtesy NASA.

To the extent that they conveyed not the static image of a world picture but rather the dynamic behaviour of a world system, information-age media spaces resembled behavioural models. In his influential lecture “World Pictures and World Models”, German philosopher Hans Blumenberg drew the distinction between world pictures and world models as the “difference between the total notion of nature on the one hand and the purpose assigned to the totality of understanding nature on the other”.[4] By “world picture”, Blumenberg does not exactly intend an Earthrise-like image but rather “that embodiment of reality through which and in which humans recognise themselves, orient their judgements and the goals of their actions, measure their possibilities and necessities, and devise their essential needs”.[5] The world picture thus becomes a metaphysical anchor and compass for the human species in relation to species and nature as a whole. The world model, then, is the end toward which the world might be oriented and perhaps the mechanism that effects its transformation.

This paper considers how the world picture, world model, and room-world interact and resonate in our own time, and how they are transcribed into architectural space. We explore these resonances through a specific project of our office, Certain Measures: the Observatory, an immersive environmental installation housed within Dubai’s new Museum of the Future that imagines a fictional centre for global bioremediation in the year 2071. By situating this project in a wider historical constellation of room-worlds and world pictures, we extend the purview of Earth-scale architecture to contemporary notions of bioengineering, data visualisation, and artificial intelligence. Moreover, in contrast to canonical room-worlds of the past, the Observatory presents its world picture as a fictional reflection on a possible Earth, rather than as a true image of our world today. In doing so, it orchestrates several overlapping and interlocking layers of worldbuilding: fictional species, fictional media content, and even the fictional bureaucracy in which the Observatory is housed. It diverts the nominally factual media of data visualisation and scientific modelling toward projective worldbuilding. The Observatory thus illustrates the role architects and designers can play as worldbuilders across media, including image, data, narrative, and space.

Room Worlds and Control Rooms

Built to transform the very perception of the future as we know it, Dubai’s new Museum of the Future houses a series of immersive environments that position visitors in an empowering version of tomorrow. The Observatory is one such environment, a fictional centre for planetary ecology staged as a physical and media experience. It is presented as an amalgam of control room, panorama, and incubator for newly designed species, developed to confront the challenges of the climate crisis in a future fiction. It is the culmination of the floor-wide exhibit introducing “the HEAL Institute”, a fictional NGO tasked with gathering the planet’s genetic material, engineering species capable of meeting the challenges of extreme climate, and redeploying these to regreen the world.

The Observatory drew inspiration from the sundry architectures of planetary visualisation of the past century and a half. From building-scale panoramic “great globes” to interactive games of planetary resource use, architectural projects at the scale of the world envisioned designerly ways of seeing, understanding, and shaping Earth. Many of these projects posited not only a particular world picture but a behavioural system for planetary interactions akin to Blumenberg’s world models. In this sense, the Observatory falls into a lineage of architecture that orients design toward a global scale. In surveying the range of world-scale architectural projects, Hashim Sarkis and Roi Salgueiro Barrio point out the “possibility of differentiating between totality and totalization”.[6] The implication is that in the Anthropocene, the systems presented by such world models are not necessarily controlling or coercive, but might be mutually constitutive with Earth itself.

Figure 3 – The Oval room of Teylers Museum as it appeared in the early nineteenth century. Wybrand Hendriks, De Ovale Zaal van Teylers Museum, c. 1800-1820. Image in the public domain.

Beyond the mutuality of system and planet, the form of the Observatory considers the codetermination between a collection of objects and the architecture that displays them. A particularly vivid example of collection-architecture co-determinacy is the proto-modern cabinet, such as the Oval Room of the Teylers Museum in Haarlem, Netherlands. Historian Geert-Jan Janse describes this singular space as “a room to hold the world”, not merely housing the miscellaneous contents of a world but constructing an architecture fitted to that world picture.[7] Opened in 1784, the Oval Room concentrated its collection into a single space that adopts the organisation of the collection itself, furnishing bespoke cabinetry for irregular objects and reflecting a specific collection taxonomy in its arrangement. The curved space presented no corners, its quasi-elliptical shape evoking the spherical contours of a planet. In this sense it resembled a panorama, a dramatic vista over a field of particulars in orchestrated and interconnected conversation.

Our aim for the Observatory was to extend the architectural type of a Teylers collection panorama with the informatic and multi-scalar view of simultaneous dimensions of planetary ecology. In this way, the historical type of the room world is set in dialogue with the contemporary rise of data science and artificial intelligence. The Observatory accomplishes this by making visible both newly engineered species and the network of human and machinic actors that collect, analyse and act to resuscitate Earth. It is a control room for bioremediation, showing and evolving a web-of-life datascape and the symbiotic interactions of ecosystems, plants, animals, bacteria, robots, and humans.

The Observatory space consists of two complementary experiences: the Geoscope and the Nursery. The Geoscope is an information-rich global monitoring system that visualises the progress of bespoke species deployed to aid threatened biomes. It combines physical models of the speculative species themselves with dynamic projection mapping to show symbiotic interconnections across scales, offering a trans-scalar view of the planet from the global to the microscopic. The Geoscope could be understood as a dynamic data panorama, or even an informatic world picture. But instead of presenting an instantaneous view of the world from a single perspective at a uniform scale, it presents a temporally unfolding and multi-scalar assemblage of imagery and data, stitched together into a unified sensorium.

Figure 4 – The data visualisations of the Geoscope, tracking the success of species across ecosystems. Certain Measures, 2022.

The Geoscope is not only a collection gallery but also a control room, a cockpit for the planet. As a control room, the Observatory sits adjacent to what anthropologist Shannon Mattern has called “urban dashboards”, or visualisations of real-time urban operations data.[8] When expanded to the room scale, they evolve into what she terms “immersive dashboards”: vast control rooms for city functions that resemble NASA’s Mission Control for spaceflight.[9] Mattern argues that the raison d’être for such rooms is “translating perception into performance, epistemology into ontology”.[10] Urban control rooms thus constitute and condition the subjects that interact with them, creating particular conventions of legibility and action. For Mattern, the “dashboard and its user had to evolve in response to one another”.[11] In the critical relationship between dashboard and intelligibility, a particular data organisation fosters a corresponding kind of intelligence in its observer.

Historian Andrew Toland argues that Mattern’s urban dashboards might naturally be extended to the scale of the planet.[12] “We can begin to imagine an enlargement from the real-time data and feedback loops of urban dashboards considered by Mattern towards a vast integrated and machine-directed system of environmental-sensing and response”.[13] He catalogues several initiatives, such as Microsoft’s “AI for Earth”, that fall comfortably within this genre of design. While he notes the aspiration for an “AI whole Earth dashboard”, Toland frames artificial intelligence in functional terms as a straightforward extrapolation of statistical data analysis. Yet in ethical terms, the idea of AI sentience or reflection – that an AI might come to its own conclusions about the state of the planet – is largely absent. The possibility that the dashboard could become an ethical agent in its own right remains untested.

Beyond Mattern’s urban dashboards and Toland’s AI for Earth, the Geoscope makes deliberate reference to Buckminster Fuller’s series of geoscopes or “mini-Earth” projects. Beginning from his first room-scale globe, constructed at Cornell University in 1952, through many variants into the 1970s, Fuller proposed augmented planetary models “wherewith humanity can see and read all the spherical data of the Earth’s geography … within the theater of local Universe events”.[14] In their most developed form, Fuller’s geoscopes were data-rich and mediatic portraits of planetary civilisation unfolding over time: “The Geoscope’s electronic computers will store all relevant inventories of world data arranged chronologically, in the order and spacing of discovery, as they have occurred throughout all known history”.[15] Fuller saw the geoscopes as a means to accelerate and intensify the viewing not only of natural phenomena like weather systems and geologic conditions but also of human activity like military deployments or mobility patterns. “With the Geoscope humanity would be able to recognize formerly invisible patterns and thereby to forecast and plan in vastly greater magnitude than before”.[16]

Curiously, Fuller ignored the living organisms within the biosphere except in their direct and extractive connection with agriculture. Thus, in deliberate riposte, our Geoscope sees the human technosphere in intimate dialogue with the biosphere, not as an extractive system but as a symbiotic relationship in which humans have a vital role. The Geoscope’s AI, which acts as an intermediary between technosphere and biosphere, scans specific locations – the Ganges River Delta, Antarctic Inland, the Empty Quarter of the Emirates, Canada’s Nunavut territory and so forth – for progress against climate catastrophe. As a central digital globe turns, it reveals new points of crisis, but also signs of hopeful recovery. It projects a protean and continuously changing view into the network of monitoring stations across the planet. The coordinating AI dynamically connects with a menagerie of human and nonhuman agents across biomes and nations – including drones, satellites and hybrid techno-biological sensors – which constantly collect samples, register progress, and meticulously rebuild the planet. This menagerie of agents complements the biological menagerie of newly-engineered species gestating within the Observatory. The coordinating AI slowly becomes more aware of human culpability for climate change – and of its own fraught role in regreening. The Geoscope thus offers a glimpse into the expanding ethical consciousness of this AI.

Experientially, the Geoscope operates like closed-circuit television for the planet. It presents a cluster of video feeds that track the thriving species introduced by the HEAL institute on the one hand and the research of the scientists of the HEAL institute on the other. The myriad seeded species include, for example: a comb jelly super organism that signals danger by bioluminescent flashes; cryptobiotic wildflowers designed to hibernate in steppe and tundra regions; and fire-resistant trees with robust roots to resist infernal heat. At the same time, the Geoscope streams surveillance footage of scientists tirelessly working to enact the techniques of re-greening of the earth. These scientists engage with deployed species through forensic fieldwork and careful labwork. We even witness moments of painstaking analysis as they prepare samples for review of soil toxins, trace carbohydrates, and other critical biomarkers. In effect, this planetary CCTV invites visitors to join in the on-the-ground work of the HEAL institute.

Fig. 5. Examples of the species diorama presented in the Observatory. Certain Measures, 2022.

In the Nursery, the other half of the Observatory experience, visitors peer into incubators nurturing dozens of species that could revitalise a struggling planet. In collaboration with a geneticist, we designed over 80 species of plant, insect and animal, each with special characteristics designed to combat the environmental challenges of today and the future. Drawn from seven major ecosystems – desert, aquatic, arctic, forest, swamp, alpine and grassland – we imagined species such as nutrient jelly cacti, radiation-sequestering flowers, lipid-rich quinoa, and remediation coral designed to feed on microplastics and sequester heavy metals. To facilitate rapid repopulation of bird species, a portable multispecies egg incubator could be used to quickly reestablish biological diversity in previously inhospitable areas. At the microscopic scale, designer bacteria symbiotically support larger species and the broader biome. These bacteria include cancer-hunting and sunscreen-producing varieties, for instance. Enhanced with holographic data, profiles of each specimen reveal to visitors the details of the organism and its role in a remediated Earth.

Fig. 6. A biome incubator pod which combines several species. Certain Measures, 2022.

Like the Observatory itself, the model dioramas representing new species are in conscious dialogue with the dioramas and conventions of natural history museums: each cryptobiological species was meticulously researched, and is complete with a scientific name, specific climate-robust features, and estimated lifecycles. There is an encyclopedic impulse in their collection, an attempt to convey the variety and possibility of nature across its variegated climates. Some dioramas present assembled biomes, habitats in miniature that arrange numerous species in symbiotic constellation. In a sense, the dioramas are not only biological but agricultural: they display the implements and technology of cultivation and accelerated growth, and in this way also echo one of the earliest roles of museum dioramas, to educate on the process of machinic cultivation of nature.[17]

AI Diaries

The posthuman perspective of a sentient AI monitoring Earth in the Observatory raises strange questions about the subjectivity of the AI itself. Is this AI an overlord, servant, friend, or colleague? How would this agent come to terms with climate catastrophe and its role in the rebirth of the planet? How would its ethical consciousness unfold? What role would its human colleagues play in this awakening, and how might it perceive that role? What story would the AI tell about itself?

The logs of the AI's interactions in fact comprise an intimate journal of sorts, a glimpse into its ethical awakening. The AI communicates with the visitor and the network of remote agents through transmissions and messages akin to letters, and it also receives messages via its sensor network from myriad species – an interspecies communication between natural and artificial life. Taken collectively, these messages bear a surprising resemblance to the venerable literary form of epistolary fiction. An epistolary novel is a story that unfolds entirely through fictional letters, messages, or transmissions between its sundry characters, exposing their intimate thoughts and interpersonal connections. As a literary form, it was notably popular in the eighteenth century. The epistolary form has a particularly interesting connection to technology, science fiction and bioengineering, in that Mary Shelley's Frankenstein is an epistolary novel. The epistolary form can even extend to electronic and machine-readable messages, as in Carl Steadman's Two Solitudes, a 1995 novella told entirely through email exchanges.

In keeping with the panoramic nature of the Observatory itself, we combined the content of the epistolary AI novel with the format of a panoramic book, drawing on precedents like Ed Ruscha's Every Building on the Sunset Strip.[18] While Ruscha constructed a linear panorama of an urban streetscape, we propose a linear panorama of the sequential scan of the entire Earth, including every new bioengineered species introduced to it. The resulting text fuses AI diary and panorama into a journal of exchanges between this AI and its various human interlocutors. This yet-to-be-published book, tentatively titled Dispatches from a Verdant Tomorrow, tells the story of climate remediation from a nonhuman perspective, as one continuous scan of Earth's biosphere.

Fig. 7. A view of the Nursery within the Observatory. Certain Measures, 2022.

A Future Archive of Fictions

In his critique of the globe as an epistemic model, philosopher Peter Sloterdijk distinguishes between the epistemic ramifications of observing the globe from the outside or from the inside. Seeing the globe from the outside – as with the Apollo Earthrise – provides an "all-collecting awareness … the thinker feels and understands what it means to 'know' everything, to see everything visible, to recognize everything … the very epitome of objectivity".[19] In contrast, the interior view places "oneself at the absolute center", in "ecstatic-circumspective concentricity": presumably an experience of complete subjectivity.[20] Yet between inside and outside lies the world itself, a moment at which globe and observer are coincident, one embedded in and inhabiting the other. It is that moment of coincidence and embeddedness that the Observatory aims to make tangible.

Historian Benjamin Lazier notes a similar polarity between environment and globe that illustrates how mutually defining they have become:

“The globalization of the world picture is perhaps easier to discern when we consider a parallel slippage – from ‘environment’ to ‘globe’ as it is inscribed in the phrase ‘global environment.’ The term has become a platitude, even a ritual incantation. It is in truth a Frankenstein phrase that sutures together words referring to horizons of incompatible scale and experience. Environments surround us. We live within them. Globes stand before us. We observe and act upon them from without. Globes are things that we make. They are artifacts. Environments, at least in theory and in part, are not.”[21]

The Observatory sits at that threshold between globe and environment, oscillating between the two but also introducing a third possibility: an experience of situated habitation and networked action. Through intersecting practices of speculative design, biofutures, fiction and data visualisation, the Observatory represents a comprehensive simulation of a connected biotechnical ecology.

In their analysis of urban data visualisation installations, Nanna Verhoeff and Karin van Es describe the city as a “navigable archive” and, indeed, one might make the same claim about Earth itself through the instrument of the Observatory.[22] The Observatory is a device not only for measuring and dimensioning a planetary biological archive but also for cultivating new specimens and Earth itself as an organism. It is a staging area for an active engagement between myriad human and nonhuman actors with each other and Earth itself. It is the terminus of a planetary-scale nervous system but also a sentient agent of action. It is a medium of communication with the planet, a telephone to Earth, a device for engaging in dialogue with it and its inhabitants. The Observatory is a proving ground for a more humane humanity, a tool through which we might take stock of the future of Earth and of design itself.

References

[1] R. Poole, Earthrise: How Man First Saw the Earth (New Haven: Yale University Press, 2010).

[2] B. Lazier, “Earthrise; or, The Globalization of the World Picture,” American Historical Review, June 2011, 606.

[3] G. Sutton, The Experience Machine: Stan VanDerBeek’s Movie-Drome and Expanded Cinema (Cambridge: MIT Press, 2015).

[4] H. Blumenberg, "World Pictures and World Models," in History, Metaphors, Fables: A Hans Blumenberg Reader, J. P. Kroll, F. Fuchs, H. Bajohr, eds (Ithaca: Cornell University Press, 2020), 43.

[5] Ibid., 43.

[6] H. Sarkis, R. Salgueiro Barrio and G. Kozlowski, The World as an Architectural Project (Cambridge: MIT Press, 2019), 8.

[7] G.-J. Janse, A Room to Hold the World: The Oval Room at Teylers Museum (Amsterdam: Teylers Museum, 2011).

[8] S. Mattern, “Mission Control: A History of the Urban Dashboard”, Places Journal, March 2015, <https://doi.org/10.22269/150309>, accessed 09 June 2022.

[9] Ibid.

[10] Ibid.

[11] Ibid.

[12] A. Toland, "The Learning Machine and the Spaceship in the Garden: AI and the Design of Planetary 'Nature'", RA. Revista de Arquitectura 20 (2018), 216–227.

[13] Ibid., 225.

[14] R. Buckminster Fuller, Critical Path (New York: St. Martin's Press, 1981), 172.

[15] Ibid., 180.

[16] Ibid., 183.

[17] J. Insley, “Little Landscapes: Agriculture, Dioramas, and the Science Museum,” Icon, 12 (2006): 8.

[18] E. Ruscha, Every Building on the Sunset Strip (Los Angeles: E. Ruscha, 1966).

[19] P. Sloterdijk, Spheres Volume 2: Globes (Pasadena: Semiotext(e), 2014), 85.

[20] Ibid., 88.

[21] B. Lazier, “Earthrise; or, The Globalization of the World Picture,” American Historical Review, June 2011, 614-615.

[22] N. Verhoeff and K. van Es, "Situated Installations for Urban Data Visualization: Interfacing the Archive-City", in Visualizing the Street: New Practices of Documenting, Navigating and Imagining the City, P. Dibazar and J. Naeff, eds (Amsterdam: Amsterdam University Press, 2018).

Figure 9 – Climate Squatters Community (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2)
The Apparatus of Surveillance  
Algorithmic, Apparatus, Biopower, Climate Migrants, Necropolitical, Public Engagement in the Apparatus
Nora Aldughaither

norah.aldughaither.21@ucl.ac.uk
Add to Issue
Read Article: 3769 Words

Climate Migrants in the Algorithmic Age 

Technological developments have prompted a parallel discourse on the bond between ethics, exploitation and data. Advances in technology have enabled a contemporary form of resource extraction and appropriation, normalising the extraction of data from users, often without their knowledge. Through our increased dependence on technology and connected devices, we face the ubiquitous effects of an algorithmic mode of governance operating on predictive processes that limit our options and control our choices. Indeed, data provides progress and development while simultaneously controlling, governing and abandoning. The algorithmic influence creates new concentrations of power in the hands of the institutions and corporate entities that own and collect data.[1] 

“It is no longer enough to automate information flows about us; the goal now is to automate us.”[2] 

A planetary-scale disaster is looming, falling unevenly on the unprivileged of the world, displacing them due to its impacts on their territory. This catastrophic event will create large numbers of climate migrants who will simultaneously face the obstacles of our modern world’s algorithmic governance. Climate change is a planetary problem, but its consequences are felt differently around the world, creating a climate injustice, as some areas, especially in the global south, are more vulnerable than others (Figure 1). “We face the ugly reality of planetary scale ecological disaster, one that is falling unevenly on the world’s underprivileged and dispossessed populations.”[3] 

Today's concern is for those at the margins of society, such as refugees and climate migrants, who struggle to function under this new mechanism of algorithmic domination. Because they are perceived as incalculable, the system discriminates against their habitability, utilising methods of exclusion biased in its own favour and creating controlled spaces marked by segregation and surveillance. They are exposed to extraction and predation but later drained and excluded; people who have been exhausted are reduced to mere data, as their behaviours, desires and dreams become predictable, thus making them expendable.[4] These governance technologies produce new power instruments that facilitate modes of prediction and calculation, which treat life as an object calculable by computers.[5] 

The research will explore the necropolitical impacts of algorithmic governance on climate migrants. It will then investigate the notion of the apparatus and how digital technologies extend Michel Foucault's idea of the apparatus as a tool for capturing and controlling. Since technology has the quality of being planetary, this research will speculate on the role of a participatory digital system in the lives of climate migrants, following the Fun Palace principles, which operate on autonomous and non-extractive policies and in opposition to surveillance and control.  

Figure 1 – Dotdotdot, Planet Calls – Imaging Climate Change (2021), Museum of Art, Architecture and Technology, Lisbon. 

Necropolitical Effects on Climate Migrants 

Novel resource extraction and exploitation practices have emerged with technological acceleration, where data is considered a vital material to harness. Usman Haque asserts that the addiction to collecting more data to make the algorithm work better leaves behind a surplus of the population who are reduced to matter.[6] Data is often extracted from people and consumed by institutions to be utilised and commodified, "reducing all that exists to the category of objects and matter", according to Achille Mbembe's notion of Necropolitics.[7] The mode of governance is shifting from humans to technology that can dehumanise people, turn them into data-producing tools, and reduce those deemed surplus into superfluous bodies, abdicating any responsibility towards them.[8] This is a mode of authority that leaves behind a portion of the population deemed useless, including climate migrants, who cannot be exploited under a mode of governance dependent on user-generated data. Threatened by climate-induced catastrophes, these climate migrants flee as their part of the world becomes inhospitable, occupying an in-between borderland space, unable to navigate the contemporary world of algorithmic governance. 

Ezekiel Dixon-Román states that the algorithms examining our data shape and form our lives.[9] The raw data extracted is analysed by processes owned by companies and then relayed back to humans, making them passive receptors with minimal participation. This creates a system that breaks down what we perceive as necessary, reduces our perspectives, and transforms humanity into the category of matter and objects – what Mbembe defines as Brutalisme.[10] Mbembe draws this term from architecture to describe a process of transforming humanity and reducing it to matter and energy. As technology threatens to change people's perceptions and turn them into artefacts through processes of exploitation, appropriation and Brutalisme, we confront the necropolitical consequence of what the algorithm deems superfluous in the algorithmic age: humans reduced to a state in which they are expendable. It is through Brutalisme that Necropolitics is actualised. 

Haque argues that institutions have a growing tendency to abdicate responsibility for the sake of decisions generated by the algorithm,[11] but this poses a considerable concern when employed in necropolitical systems that decide who lives and who dies, as in the case of self-driving military drones. Rosi Braidotti echoes this worry, noting that the Dutch military academy is deeply concerned about the code of conduct for drone firing.[12] Humans are reduced to pixels on a screen, where missiles are fired to eliminate a pixel on a grid. When Necropolitics is adopted in the digital world, what happens is what Ramon Amaro describes in the process of algorithmic design: there will always be a contingency, meaning that something or someone will be left behind.[13] That occurs through a process of optimisation, or the skilful removal of waste, whether that waste is time, effort or human.[14] The algorithmic process will mostly fail to consider climate migrants who have been displaced because the calamities of anthropogenic climate change have made their territory uninhabitable.  

Biopower Tool 

This algorithmic governance is operated by digital devices, a form of apparatus of surveillance and control. Apparatus in this discourse references both Foucault's definition and Giorgio Agamben's interpretation – a translation of the French word dispositif, used by Foucault in the 1970s to describe "a series of discourses, institutions, architectural forms, regulatory decisions, … that work as a technology of power and subjectivation".[15] Agamben further describes the apparatus as "anything that has in some way the capacity to capture, orient, determine … the gestures, behaviours or discourses of living beings".[16] He does not limit it to instruments whose connection with power is evident but also includes computers and cellular telephones, amongst others. 

Digital devices function as an apparatus by capturing our data and controlling our behaviours, operating as an instrument of power in the hands of those who own this algorithmic mode of governance. In Foucauldian terms, they are a form of disciplinary tool and a biopolitical technique of "subjectivation" that emerged from the capitalist regime to impose a novel model of governmentality on the people. Thus, a new form of capitalism appears, filled with control apparatuses in the hands of the powerful few, as the technologies of this capitalistic culture have the power to become embedded in our bodies, capturing our behaviours and controlling our actions. "Foucault claims that a dispositif creates its own new rationality and addresses urgent needs."[17] These needs are apparent, as capitalist institutions aim to collect ever more data, monetising people's lives with the excuse of providing a better service. 

Public Engagement in the Apparatus 

Data collection and extraction generate massive profits for data collectors, sometimes at the users' expense; the power of algorithmic authority should instead be used to facilitate justice, autonomy and transparency. The focus here is on exploring a participatory system in response to extractive technologies and their growing influence on the lives of vulnerable individuals such as climate migrants. Adopting such practices would allow the co-design of future digital technologies that would otherwise stand in the way of mobility. Participation should mean extensive involvement and contribution – as in the "Fun Palace" concept by architect Cedric Price, where the users became the designers. A similar approach could be utilised in a participatory system in which climate migrants are more involved in the systems that dictate their future. 

Exploring a Virtual Fun Palace 

The Fun Palace is a social experiment which opposes those forms of social control that inevitably influence the usage of public spaces. What is required is to explore a participatory system that could ensure autonomy and flexibility, by analysing how the Fun Palace's principles might be applied virtually. Its fundamentals could permit autonomy, thus undermining current structures of power and control. Digital platforms could apply the same notions of accessibility, flexibility and autonomy for the user, and oppose control and surveillance. The technologies that underpin current forms of control could allow novel methods of cooperation if their use were transformed.[18] 

Price pioneered the integration of recent technologies to inform his architecture; however, in this case, the Fun Palace can be used to inform technology. Price’s concept aimed to use a bias-free technology that learns solely from its users, not for profit gain but for participation and transparency – creating a participatory architecture with the ability to respond to its users’ needs and desires: “His design for the Fun Palace would acknowledge the inevitability of change, chance and indeterminacy by incorporating uncertainties as integral to a continuously evolving process modelled after self-regulating organic processes and computer codes.”[19] 

Cybernetics and Indeterminacy 

Price enlisted Gordon Pask, an expert cybernetician, whose involvement in the Fun Palace allowed Price to achieve his goal of a new concept that integrated his interest in change and indeterminacy.[20] Pask was interested in underspecified and observer-constructed goals that oppose the goals of technologies of control. The Fun Palace program accommodated change, as it could anticipate unpredictable phenomena without relying on a determined program.[21] These methods of granting users freedom, participation and shared scientific knowledge were meant to replace authoritarian control with an autonomous one.  

Adaptability and flexibility in responding to users’ needs required cybernetics for participants to communicate with the building (Figure 2). Pask’s conversation theory was the essence of the program, moving a step closer to authentic autonomy in a genuinely collaborative system.[22] Underspecified goals oppose systems where the designer initially programs all parts and behaviours of a design, limiting the system’s functions to the designer’s prediction of deterministic goals. Predetermined systems keep the user under the control of the machine and its preconfigured system, since they can only respond to pre-programmed behaviour. These systems eliminate the slight control users have over their surroundings and necessitate that they instead put their trust in the assumptions of the system’s designers.[23] 

Currently, as Haque states, “Pask’s Conversation Theory seems particularly important because it suggests how, in the growing field of ubiquitous computing, humans, devices and their shared environments might coexist in a mutually constructive relationship”.[24] A model that ensures the collective goals of users are reached through their direct actions and behaviours – and that those goals are desired and approved by the users – is the kind of model that digital technologies should aim for. The program of the Fun Palace was autonomous in that there was no authoritative hierarchy that dictated the program and space usage.  

Transparency, Control and Participation 

Designed as a machine with an interactive and dynamic nature, the Fun Palace implemented novel user participation and control applications. Cybernetician Roy Ascott proposed the “Pillar of Information”, which was an accessible electronic kiosk placed at the entrance that could search for and reveal information. “This system was among the earliest proposals for public access to computers to store and retrieve information from a vast database.”[25] As implemented in the Fun Palace, “a cybernetic approach does not reject or invalidate the use of data; instead, it suggests that a different role for data needs to be perceived in the process of intervening in disadvantages and creating social change”.[26] 

Price's concern related to the effect architecture had on its users. He was convinced that it should be more than a shelter containing users' activities – it should also support them, with the users' emancipation and empowerment as its true objectives. Control is thus shifted from the architects to the users, making the users responsible for constructing the world around them. Digital technologies should likewise not let data extraction for profit, surveillance and control divert them from their objective of ensuring convenience and empowering people.  

Climate Migrants in a Participatory System  

A platform cooperative for climate migrants that aims to ensure the interest of all, and to increase transparency and democracy, would be a departure from the extractive and authoritative system. A participatory and open digital design would allow the freedom of climate migrants from the restraints of their preconceived, biased, incorrect digital profiles created by algorithms. This system would contribute to the rise of autonomy, privacy and freedom for climate migrants. It would be a cooperative, transparent and user-centred approach for seeking common objectives that minimises concerns about profiling, collection of personal data and surveillance. 

Climate Squatters 

The implementation of a virtual participatory platform for climate migrants was explored in the design project "Climate Squatters" by The Bartlett AD Research Cluster 1, 2021-22, Team 2. Climate migrants from the village of Happisburgh would utilise a participatory digital platform that enables them to travel intelligently as modern squatters, allowing them to be active agents in their relocation, habitation and migration process. The project forms around the idea of a non-extractivist and autonomous communal unity without fixed habitation, granting climate migrants autonomy, flexibility and empowerment in the continuous relocation process triggered by the existential threat of coastal erosion. Climate Squatters' platform aims to address the issues of decreased ownership and control by reconceptualising the users' roles, so that they act as active contributors in the process.  

Happisburgh is a village on the eastern coast of the United Kingdom. It lies in one of the most dangerous areas of coastal erosion in the UK, where it is estimated that Happisburgh will lose around one hundred metres of its coastal land over the next twenty years (Figure 5). The erosion rate has significantly increased due to rising sea levels and climate change. The current governmental coastal management plan is No Active Intervention, which means no investment will be made in defending against flooding or erosion. The plan signifies that, given current coastal processes, sea level rise and national policy, there is no sustainable option for coastal defences; it fails to respond to the people's needs and leaves them feeling disregarded.

Figure 5 – Happisburgh Coastal Erosion (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).

Using Climate Squatters’ platform would empower the climate migrants in the various aspects of the migration process. The platform allows autonomy by granting the users the option to participate in the process and vote on where they would like to relocate from a list of suitable land options. Placing a heavy value on the community, the platform starts by decoding the village’s typology, material and identity using machine learning. Happisburgh is “decommissioned” by disassembling what is salvageable from the houses into voxelised masses. The constant migration of the climate squatters requires a unique construction that optimises space and material and allows for easy assembly and disassembly. The recoding of the future habitat of climate migrants operates by utilising wave function collapse to generate their new typologies. The live platform will also sustain the community by analysing relevant incentives and taking advantage of them, giving the users a live view of their performance and future expectations to maintain or enhance their position. 
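
The recoding step mentioned above can be illustrated with a minimal sketch of wave function collapse in Python. The module names, adjacency rules and grid size are illustrative assumptions rather than the project's actual vocabulary: each cell starts with every module still possible, the most constrained cell is collapsed to a single choice, and that choice is propagated to neighbouring cells.

import random

TILES = ["living", "storage", "garden"]   # hypothetical salvage-derived modules
ALLOWED = {                               # which modules may sit next to which
    "living":  {"living", "garden"},
    "storage": {"storage", "garden"},
    "garden":  {"living", "storage", "garden"},
}
W, H = 8, 6

def propagate(grid, start):
    # Narrow each neighbour to the options compatible with the cell just changed.
    stack = [start]
    while stack:
        x, y = stack.pop()
        for nxy in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxy not in grid:
                continue
            allowed = set().union(*(ALLOWED[t] for t in grid[(x, y)]))
            narrowed = grid[nxy] & allowed
            if narrowed and narrowed != grid[nxy]:
                grid[nxy] = narrowed
                stack.append(nxy)

def collapse(grid):
    # Repeatedly fix the most constrained cell and propagate the consequences.
    while True:
        open_cells = [(p, o) for p, o in grid.items() if len(o) > 1]
        if not open_cells:
            return grid
        pos, opts = min(open_cells, key=lambda c: len(c[1]))
        grid[pos] = {random.choice(sorted(opts))}
        propagate(grid, pos)

grid = {(x, y): set(TILES) for x in range(W) for y in range(H)}
result = collapse(grid)
for y in range(H):
    print(" ".join(next(iter(result[(x, y)]))[0].upper() for x in range(W)))

With this particular rule set no contradiction can arise; a production-grade solver would also need backtracking for the cases where a cell's option set empties out.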

Figure 6 – Decoding with Heatmaps and Machine Learning (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).
Figure 7 – Beyond Voxels (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).
Figure 8 – Platform House Generation and Allocation (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).

The platform aims to instil trust in the user and grant them autonomy and flexibility by operating as a non-extractive tool, without predetermined goals, that will empower the user in their journey and ensure their secure habitation in a world of uncertainties. It also aims to learn from the users’ behaviours and to operate on a method of buildable knowledge, continuously evolving based on users’ objectives. By redistributing the roles between the users and the platform, the model ensures that the platform will function as an enabler and supporter of the user. Following Price’s model, the employment of uncertainty and indeterminacy would help climate migrants navigate a journey filled with unpredictable events, thus advancing the dialogue between users and the digital platform. Climate Squatters’ platform seeks to enhance autonomy, flexibility and freedom, and to create a community of climate squatters that represent a response to an ever-changing world due to the consequences of climate change. 

Figure 9 – Climate Squatters Community (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).

Digital technologies could challenge traditional models that place a dichotomy between designer and user. Instead, a method can be realised in which the user takes a primary role within the system in which they participate, in contrast to the prevailing approach of predefined and predetermined systems that restrict users. "It is about designing tools that people themselves may use to construct – in the broadest sense of the word – their environments and, as a result, build their own sense of agency."[27] Control is then transferred to the users, who become responsible for constructing the world around them. 

Utilising the Fun Palace principles in digital technologies will benefit climate migrants by delivering them a neutral and virtual space to navigate the world without the intrusion of biased algorithms. Non-extractive technologies will prove helpful for climate migrants as they aim to be mobile once climate change has rendered their current home unfit for habitation. Giving the users control of their data will create a transparent digital platform to counter the current extractive and control apparatus. 

A new platform cooperative for climate migrants should be considered to protect their future with transparency, empowerment and equality. Centred around bias elimination and avoiding the harvesting of personal data, this new system would prove more beneficial than capitalism’s current apparatus. This method could enable new modes of freedom, security and emancipation for climate migrants; a system that reduces data extraction, exploitation and bias, promoting a safe, flexible and autonomous approach. A participatory method could potentially alter the biased and surveillance-ridden systems that dominate the digital world. 

References 

[1] A. Mbembe, Theory in Crisis Seminar “Notes on Brutalism” (online), 2020 (accessed 22 November 2021). Available from: https://www.youtube.com/watch?v=tc34afvyL68.

[2] S. Zuboff, The Age of Surveillance Capitalism (London: Profile Books, 2019), 8. 

[3] L. Likavčan, Introduction to Comparative Planetology (Moscow: Strelka Press, 2019), 11. 

[4] J. Confavreux, “Long Read | Africa: Strength in reserve for Earth” (online), New Frame, 2020 (accessed 26 November 2021). Available from: https://www.newframe.com/long-read-africa-strength-in-reserve-for-earth.

[5] A. Mbembe, Theory in Crisis Seminar “Notes on Brutalism” (online), 2020 (accessed 22 November 2021). Available from: https://www.youtube.com/watch?v=tc34afvyL68.

[6] U. Haque, “Big Bang Data: Who Controls Our Data?” (online), Somerset House, 2016 (accessed 25 November 2021). Available from: https://www.mixcloud.com/SomersetHouse/big-bang-data-who-controls-our-data-usman-haque-debates-the-implications-of-the-data-explosion.

[7] S. Bangstad, T.T. Nilsen, A. Eliseeva, "Thoughts on the planetary: An interview with Achille Mbembe" (online), New Frame, 2019 (accessed 26 November 2021). Available from: https://www.newframe.com/thoughts-on-the-planetary-an-interview-with-achille-mbembe.

[8] A. Mbembe, Necropolitics (Durham: Duke University Press, 2019), 97. 

[9] E. Dixon-Román, “Algo-Ritmo: More-Than-Human Performative Acts and the Racializing Assemblages of Algorithmic Architectures”, Cultural Studies Critical Methodologies, 2016, 16 (5), 482-490. DOI: https://doi.org/10.1177/1532708616655769.

[10] A. Mbembe, Theory in Crisis Seminar “Notes on Brutalism” (online), 2020 (accessed 22 November 2021). Available from: https://www.youtube.com/watch?v=tc34afvyL68.

[11] U. Haque, “Big Bang Data: Who Controls Our Data?” (online), Somerset House, 2016 (accessed 25 November 2021). Available from: https://www.mixcloud.com/SomersetHouse/big-bang-data-who-controls-our-data-usman-haque-debates-the-implications-of-the-data-explosion.

[12] R. Braidotti, “Posthuman Knowledge” (online), Harvard GSD, 2019 (accessed 24 November 2021). Available from: https://www.youtube.com/watch?v=0CewnVzOg5w.

[13] R. Amaro “Data Then and Now” (online), University of Washington, 2021 (accessed 29 November 2021). Available from: https://www.youtube.com/watch?v=uEX8JI6Xntk

[14] Ibid. 

[15] P. Preciado, Pornotopia (Zone Books, 2014). 

[16] G. Agamben, “What Is an Apparatus?” and Other Essays (Stanford University Press, 2009). 

[17] S. Lee, “Architecture in the Age of Apparatus-Centric Culture” (online) TU Delft, 2014 (accessed 2 February 2022). Available from: https://repository.tudelft.nl/islandora/object/uuid:fa31ddf9-a227-48e8-a3eb-1f5ca7e39010/datastream/OBJ1/download.

[18] M. Lawrence, “Control under surveillance capitalism: from Bentham’s panopticon to Zuckerberg’s ‘Like’” (online), Political Economy Research Centre, 2018 (accessed 29 January 2022). Available from: https://www.perc.org.uk/project_posts/control-surveillance-capitalism-benthams-panopticon-zuckerbergs-like.

[19] S. Mathews, “The Fun Palace as Virtual Architecture” (online), Journal of Architectural Education, 2006, 59 (3), (accessed 8 February 2022), 39-48, 40. 

[20] Ibid., 40. 

[21] Ibid., 44. 

[22] U. Haque, “The Architectural Relevance of Gordon Pask”, Architectural Design, 2007, 77 (4), 54-61, 58. Available from: https://www.haque.co.uk/papers/architectural_relevance_of_gordon_pask.pdf.

[23] Ibid., 60. 

[24] Ibid., 55. 

[25] S. Mathews, “The Fun Palace as Virtual Architecture” (online), Journal of Architectural Education, 2006, 59 (3), (accessed 8 February 2022), 39-48, 45. 

[26] G. Bell, M. Gould, B. Martin, A. McLennan, E. O’Brien, “Do more data equal more truth? Toward a cybernetic approach to data,” Australian Journal of Social Issues, 2021, 56 (2), 213-222, 219. 

[27] U. Haque, “The Architectural Relevance of Gordon Pask”, Architectural Design, 2007, 77 (4), 54-61. Available from: https://www.haque.co.uk/papers/architectural_relevance_of_gordon_pask.pdf.

Figure 1 – Perspective image of an isolated agropalace implanted on a flooded topography. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022.
Biomatic Agropalaces: Overflowing Vermiform Artefacts
Artifices, Biomatic, Ecological Fiction, Post-Anthropocentric, Vermiform
Sofia Giayetto, Alejandro Eliseo Cibello, Ornella Martinelli, Pedro Ariel Rovasio Aguirre, Candela Valcarcel

sofigiayetto@gmail.com
Add to Issue
Read Article: 3614 Words

At present, we find ourselves at a critical juncture: the current rate of food production is impossible to maintain in the face of the climate threat, and new forms of social organisation have not yet been implemented to solve the problem. This project constitutes a possible response to the conditions we will soon inevitably face if we do not develop sustainable ways of life that promote coexistence between species. 

The construction of a new paradigm requires the elimination of the current division between the concepts of "natural" and "artificial",[1] and consequently of the differentiation of the human from the rest of the planet's inhabitants. This post-anthropocentric vision will build a new substratum to occupy, one that promotes the generation of an autarchic ecology based on the coexistence of living and non-living entities. 

The thesis extends through three scales. The morphology adopted in each scale is determined by three parameters simultaneously. First, climate control through water performance; second, the material search for spaces that allow coexistence; and lastly, the historical symbolism to which the basilica typology refers. 

On a territorial scale, the project consists of the generation of an artificial floodable territory occupied by vermiform palaces which are organised in an a-hierarchical manner as a closed system and take the form of an archipelago. 

On the palatial scale, water is manipulated to generate a humidity control system that enables the recreation of different biomes inside the palaces through the permeability of their envelope. 

Finally, on a smaller scale, the architecture becomes more organic and flexible, folding in on itself to constitute the functional units of the palaces, which serve agricultural production, housing needs and leisure; the function of each unit depends on its relationship with water and on its need either to let water pass or to retain it. 

The entire project takes form from, on the one hand, the climatic situations that each palace requires to house its specific biome, and, on the other hand, the spatial characteristics required by the protocols that are executed in it. To allow the development of a new kind of ecology, the architecture that houses the new protocols of coexistence will be: agropalatial, a-hierarchical, sequential, stereotomic, and overflowing. 

In the following sections, we develop in depth the architectural qualities mentioned above. 

Post-Anthropocentric Ecologies: Theoretical Framework

We are currently living in the era of the Anthropocene,[2] in which humans are considered a global geophysical force. Human action has transformed the geological composition of the Earth, producing a higher concentration of carbon dioxide and, therefore, global warming. This process began with the first Industrial Revolution, although it was only after 1945 that the Great Acceleration occurred, ensuring our planet’s course towards a less biologically diverse, much warmer and more volatile state. The large-scale physical transformations produced in the environment through extractive practices have blurred the boundaries between the “natural” and the “artificial”. 

In Ecology Without Nature,[3] Morton raises the need to create ecologies that dismiss the romantic idea of nature as something not yet sullied by human intervention – out of reach today – and go beyond a simple concern for the state of the planet, strengthening the existing relationships between humans and non-humans.

In this line of thought, we reject the concept of “nature” and consider its ecological characteristics to be reproducible through the climatic intelligence of greenhouses. These ecologies should be based on a principle of coexistence that not only allows but celebrates diversity and the full range of feelings and sensibilities that it evokes. 

According to Bernard Tschumi,[4] the relationship between the activities and the shape of the building can be one of reciprocity, indifference, or conflict. The type of relationship is what determines the architecture. In this thesis, morphology is at the service of water performance, which is why the activities that take place inside the agropalaces must redefine their protocols accordingly. 

Agropalatial Attribute

Palaces are large institutional buildings in which power resides. Their formal particularities have varied over time. However, some elements remain constant and can be defined as intrinsic to the concept of a palace, such as its large scale, the number of rooms, the variety of activities which it houses and the ostentation of luxury and wealth. 

In the historical study of palaces, we recognised the impossibility of defining them through a specific typology. This is because their architecture was inherited from temples, whose different shapes are linked to how worship and ceremonies are performed. It is, therefore, possible to deduce that if there are changes in the behaviour of believers, this will generate new architectural needs. 

In the same way that architecture as a discipline has the potential to control how we carry out activities based on the qualities of the space in which they take place, our behaviours also have the power to transform space since cultural protocols configure the abstract medium on which organisations are designed and standards of normality are set up.[5] The more generic and flexible these spaces are, the longer they will last and the more resilient they will be.  

The agropalace carries out a transmutation of power through which it frees itself from the human being as the centre and takes all the entities of the ecosystem as sovereign, understanding cohabitation as the central condition for the survival of the planet and human beings as a species. 

The greenhouse typology appears as an architectural solution capable of regulating the climatic conditions in those places where there was a need to cultivate but where the climate was not entirely suitable. Agropalaces can not only incorporate productive spaces but generate entire ecosystems, becoming an architecture for the non-human. 

We take as a reference the Crystal Palace, designed by Joseph Paxton for the Great Exhibition of 1851 in London. The internal differentiation of its structural module, its height and the shape of its roof generate architectural conditions that make it a humidity-controlling container, which allows us to use it as the basis of our agropalatial prototype. 

Our prototype based on the Crystal Palace is designed at first as a sequence of cross-sections. Their variables are the width and height of the section, the height and width of the central nave, the slope of the roof, the number of vaults, an infrastructural channel that transports water and, finally, the encounter with the floor. Each of these variables contributes to regulating the amount of water that each biome requires.
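
As a minimal sketch, each cross-section's parameter set could be recorded as a flat data structure like the following; the field names, units, sample values and the crude retention heuristic are illustrative assumptions, not the project's actual schema.

import math
from dataclasses import dataclass

@dataclass
class CrossSection:
    width: float          # overall section width (m)
    height: float         # overall section height (m)
    nave_width: float     # central nave width (m)
    nave_height: float    # central nave height (m)
    roof_slope: float     # roof pitch (degrees)
    vaults: int           # number of vaults
    channel_width: float  # infrastructural water channel (m)
    ground_offset: float  # how the envelope meets the floor (m)

    def retention_factor(self) -> float:
        # Crude illustrative proxy: wider channels and flatter roofs hold more water.
        return self.channel_width * self.vaults / max(math.tan(math.radians(self.roof_slope)), 0.1)

# e.g. a section for a wet biome: wide channel, many vaults, shallow roof
wet = CrossSection(24.0, 12.0, 8.0, 10.0, 10.0, 3, 2.5, 0.0)
print(round(wet.retention_factor(), 1))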

A-hierarchical Attribute 

The territorial organisation of the agropalaces must be a-hierarchical for coexistence to take place. Cooperation between agropalaces is required for the system to function. This cooperation is based on water exchange from one palace to the other. For this to occur, vermiform palaces must be in a topography prone to flooding, organised in the form of an archipelago. 

The prototype project is located in the Baix Llobregat Agrarian Park in Barcelona, which is crossed by the Llobregat river as it ends in a delta on the Mediterranean Sea. The Agrarian Park currently grows food to supply all the neighbouring cities. Our main interest in the site lies in its hydrographic network, which is fundamental to the construction of the archipelago, since the position of each agropalace depends on its distance to its closest water source.  

To create a humidity map that determines the location of the palaces on the territory, we use a generative adversarial network (GAN). A GAN is a type of machine learning model in which two neural networks are trained against each other: a generator learns to produce new images, while a discriminator learns to distinguish them from examples in the training set, so that the generator progressively captures the patterns of the data. Its performance improves as it is supplied with more data. 

The GAN is trained with a dataset of 6,000 images, each containing 4 channels of information in the form of coloured zones.[6] Each channel represents the humidity of a specific biome. The position of the coloured zones relates to the distance from the water sources that each biome requires. The GAN analyses every pixel of the images to learn the patterns of the channels' positions and to create new possible location maps with emerging hybridisations between biomes. 
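
A minimal sketch of this kind of adversarial training loop, written in PyTorch, is given below. The network sizes, the 64×64 resolution and the random stand-in batch are assumptions for illustration; the project's actual GAN and its 6,000-image dataset are not reproduced here.

import torch
import torch.nn as nn

LATENT, CHANNELS, SIZE = 64, 4, 64  # 4 channels: one humidity map per base biome

generator = nn.Sequential(  # latent noise -> 4-channel location map
    nn.ConvTranspose2d(LATENT, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, CHANNELS, 4, 4, 0), nn.Tanh(),
)
discriminator = nn.Sequential(  # location map -> real/fake logit
    nn.Conv2d(CHANNELS, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 16), nn.Flatten(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def train_step(real_maps):
    # One adversarial step on a batch of real 4-channel maps scaled to [-1, 1].
    b = real_maps.size(0)
    fake_maps = generator(torch.randn(b, LATENT, 1, 1))

    # Discriminator: score real maps high, generated maps low.
    d_loss = bce(discriminator(real_maps), torch.ones(b, 1)) + \
             bce(discriminator(fake_maps.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the (just updated) discriminator.
    g_loss = bce(discriminator(fake_maps), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in batch; in practice each sample is a real 4-channel location map.
print(train_step(torch.rand(8, CHANNELS, SIZE, SIZE) * 2 - 1))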

The first four biomes are ocean, rainforest, tundra, and desert. Our choice for these extreme ecologies is related to the impact that global warming will have on them and the hypothesis that their hybridisation will produce less hostile and more habitable areas.  

We conclude that the hybridisation performed by the AI cannot be replicated by human methods. As such, we consider the AI part of the group of authors, even though a later curation of its production is carried out, making the thesis post-anthropocentric from its conception. 

Figure 2 – Matrix of GAN outputs. Left: Four images per channel; from left to right and from top to bottom: Ocean, Rainforest, Tundra and Desert. Right: Four outputs of complete humidity maps with their nine emerging biomes. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Due to the hybridisation, a gradient of nine biomes and their zones within the territory can be recognised in the GAN outputs. These are, from wettest to driest: ocean, wetland, yunga, rainforest, forest, tundra, grassland, steppe, and desert. The wetter palaces are always located at a shorter distance from the water supply points, while the drier ones are located closer to the transit networks. The GAN not only expands the variety of biomes but also gives us unexpected organisations without violating the previously established rules.  

The chosen image is used as a floor plan and allows us to define the palatial limits, which are denoted by changes in colour.  

The territory, initially flat, must become a differentiated topography so that the difference in the heights of the palaces eases access to water for those that require greater humidity. 

Figure 3 – Construction of the differentiated field of palaces based on the AI results. From top to bottom: Definition of zones of each biome. Generation of axis inside each boundary. Location of cross-sections from the agropalatial prototype. Extrusion of cross-sections forming the outer envelope of each agropalace. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

The palaces are linear, but they contort to occupy their place without interrupting the adjoining palaces, following the central axis of the zone granted by the GAN.  

This a-hierarchical, longitudinal and twisted territorial organisation forms two types of circulation: one aquatic and one dry. The aquatic palaces tend to form closed circuits without specific arrival points – an idle, unstructured circulation designed for admiring the resulting landscape of canyons. The dry circulation runs through the desertic palaces along their axes and joins the existing motorways of the Llobregat, crossing the Oasis. 

Stereotomic Attribute 

The protocols of the post-Anthropocene must exist in a stereotomic architecture, a vast and massive territory, almost undifferentiated from the ground. 

As mentioned above, our agropalatial prototype is designed as a sequence of cross-sections. Each section constitutes an envelope whose formal characteristics are based on those of the Crystal Palace, modified according to its need to hold water. 

The determination of the interior spaces in each section depends on the fluxes of humidity necessary for generating the biome. The functional spaces are the result of the remaining space between the steam columns, the number of points where condensed water overflows towards the vaults, and the size of the central circulation channel.  

The variation in organisation according to the needs of each biome creates different numbers of functional spaces, of different sizes and shapes, allowing the protocols to take place inside them.  

The interstices where the fluxes of humidity move are organised in such a way that the forces that travel through the surfaces of the functional spaces between them reach the ground on the sides of the palace, forming a system of structural frames.  

Sequential Attribute  

The functional spaces in each cross-section are classified into three categories corresponding to the main protocols that take place inside of the agropalaces: production, housing and leisure. 

The classification depends on the size, shape, distance to light and water of each functional space, predicting which one would be more convenient to house each protocol. Every cross-section contains at least one functional space of each kind. 

These two-dimensional spaces are extruded, generating the “permanent” spaces, in which the activities are carried out. These form connections with the “permanent” spaces of the same category of the subsequent cross-section, forming “passage” spaces.  

Thus, three unique, long, complex spaces – one for each protocol – run longitudinally through the palaces, in which activities are carried out in an interconnected and dynamic way. The conservation protocol – the biome itself – is the only non-sequential activity, since it is carried out in the interstice between the exterior envelope of the agropalace and the interior spaces. 

Figure 4 – Left: Longitudinal Section of an Agropalace that holds a Tundra biome. Right: Variations of the cross-sections – in pink: humidity fluxes. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Protocols

The need for production has made cities and agricultural areas hyper-specialised devices, making their differences practically irreconcilable. However, we understand that this system is obsolete, which is why it is necessary to emphasise their deep connection and how indispensable they are to each other.  

For this reason, agropalaces work through the articulation of different scales and programs, considering the three key pillars on which we must rely to build a new post-anthropocentric way of life – ecological conservation, agricultural production and human occupation – the latter prioritising leisure. 

Protocol of Production 

Of the currently available methods, we take hydroponic agriculture, together with aeroponic agriculture, as the main means of production, since both replace the terrestrial substrate with mineral-rich water. 

The architectural organisation that shapes the agricultural protocol in the project is based on a central atrium that allows the water of the biome to condense and be redirected to the floodable platforms that surround it. In each biome, the density of the stalls, their depth, and the size of the central atrium vary in a linear gradient, ranging from algae and rice plantations to soybeans and fruit. The agricultural protocol in the agropalaces manages water passively, by surface condensation and gravity, generating a spiral distribution added to a central circulation that generates landscape while seeking to cultivate efficiently.

Figure 5 – Diagrams and sections of functional spaces and their protocols in each biome. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Protocol of Housing 

In defining the needs of a house, Banham reduces it to an atmospheric situation, with no regard for its form.[7] This dispossession of formal conditions allows us to modify the current housing protocol: we can design a house whose shape results from passive climatic manipulation and from the need to generate a variety of spatial organisations that do not restrict the type of social nucleus. 

The spatial organisation of the house in the project is built through circulatory axes and rooms. The position of the circulatory axes and the number and size of the rooms vary depending on the biome, this time not based on humidity, but on the type of life that each ecological environment encourages. The height and width of the spaces also vary, generating the collision of rooms and thus allowing the formation of larger spaces or meta-rooms. The protocol of habitation in the agropalaces then allows a wide range of variation in which people are free to choose the form in which they wish to live, temporarily or permanently, individually or in groups. 

Protocol of Leisure

Leisure is one of the essential activities of the post-Anthropocene because it frees human beings from their proletarian condition, characteristic of current capitalism, and connects them with the enjoyment of themselves and their surroundings. The leisure protocol in the thesis consists of a series of slabs with variable depths that constitute pools at different levels, interconnected by slides, which are to varying degrees twisted or straight, steep or shallow, and covered or uncovered. 

The leisure protocol is based on the behaviour of water, which varies in each biome. The number, depth and position of the pools vary, decreasing the more desertic the biome that houses them. In this way, water parks and dry staggered spaces are generated, in which all kinds of games and sports develop. In the agropalaces, contrary to being relegated to specific times and places, leisure becomes a form of existence in itself.  

Overflowing Attribute 

Finally, to achieve coexistence, the architecture developed must be permeable. All the layers that contribute to the complexity of the project exchange fluids – mainly water – with the environment. 

Water penetrates each of them; they use it to generate the ambient humidity desired for their biome, and the excess then overflows from the roof. The system works sequentially, from the wettest to the driest biomes. Once one palace overflows its residual water, the succeeding palace can use it to its advantage until it eventually overflows in turn.  
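
A minimal sketch of this inter-palatial sequence in Python: each palace, ordered from wettest to driest, retains a fraction of the incoming water and passes the residual downstream. The retention fractions and the supply figure are invented for illustration.

BIOME_RETENTION = {   # fraction of incoming water each biome retains (illustrative)
    "ocean": 0.30, "wetland": 0.25, "yunga": 0.20, "rainforest": 0.18,
    "forest": 0.15, "tundra": 0.12, "grassland": 0.10, "steppe": 0.08,
    "desert": 0.05,
}

def cascade(supply: float):
    # Route the water supply through the chain of palaces, wettest first.
    for biome, fraction in BIOME_RETENTION.items():
        retained = supply * fraction
        supply -= retained  # the rest overflows to the next, drier palace
        print(f"{biome:10s} retains {retained:6.1f}, overflows {supply:7.1f}")

cascade(1000.0)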

Inside every palace, a sequence of overflows is generated on an intra-palatial scale. Humidity enters the agropalace through its internal channel, where it evaporates and rises until it condenses on the surfaces of the functional spaces, penetrating them to be used in different activities. The residual water evaporates again until it overflows. The process consists of a cyclical system with constant recirculation. 

The functional spaces’ envelopes have perforations in different sizes and positions to allow moisture to dissipate or condense as convenient. The overflowing quality of the system creates communication between the different scales of the architectural system, thus generating inter- and intra-palatial dependency. 

Figure 6 – Detail section of water performance in the agricultural protocol. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Post-Anthropocentric Architecture: Conclusion

The agropalace understands coexistence as a necessary condition for the survival of the planet and human beings as a species. This new typology presents agriculture as the principal tool of empowerment and suggests a paradigm shift in which each society can define its policies for food production, distribution and consumption; meanwhile, it produces ecosystemic habitats with specific microclimatic qualities that allow the free development of all kinds of entities. 

Biomatic Artefacts proposes an architecture whose forms do not interrupt the geological substrate but compose it, forming part of the planetary ecology while simultaneously constituting an autonomous ecosystem of smaller-scale ecosystems within each palace. 

The protocols of today disappear to make room for the formation of a single para-protocol, since, contrary to being carried out in a single, invariable way, it exists because it has the quality of always being different, vast in spatial, temporal, and atmospheric variations. And in its wake, it generates a landscape of canyons and palaces that, in the interplay of reflections and translucency of water and glass, allows us to glimpse the ecological chaos of coexistence within. 

We consider that the project lays the foundations for a continuation of ideas on agropalatial architecture and post-anthropocentric architecture, from which all kinds of new formal and material realities will come about. 

Figure 7 – Perspective image of a group of agropalaces placed in the flooded topography, forming an archipelago. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Acknowledgement

The following paper was developed within the institutional framework of the School of Architecture and Urban Studies of Torcuato Di Tella University as a project thesis, with Lluis Ortega as full-time professor and Ciro Najle as thesis director.

References

[1] T. Morton, Hyperobjects: Philosophy and Ecology after the End of the World (Minnesota, USA: University of Minnesota Press, 2013). 

[2] W. Steffen, P. Crutzen, J. McNeill, "The Anthropocene: Are Humans Now Overwhelming the Great Forces of Nature?", AMBIO: A Journal of the Human Environment (2007), 614–621. 

[3] T. Morton, Ecology Without Nature: Rethinking Environmental Aesthetics (Cambridge, USA: Harvard University Press, 2007). 

[4] A. Reeser Lawrence, A. Schafer, "2 Architects, 10 Questions on Program: Rem Koolhaas + Bernard Tschumi", Praxis 8 (2010). 

[5] C. Najle, The Generic Sublime (Barcelona, Spain: Actar, 2016). 

[6] Set of base images with which the GAN trains by identifying patterns and thus learning their behaviours. In our case, the dataset is based on a set of possible biome location maps based on proximity to water sources and highways. 

[7] R. Banham, F. Dallegret, “A Home Is Not a House”, Art in America 2 (1965), 70–79.

Figure 10: Emotional Dynamics (Xuanbei He, Zixi Li, Shan Lu), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).
Towards a Pervasive Affectual Urbanism
Aesthetics, Affect Theory, Automated Cognition, Collective Authorship, Ecosophy
Ilaria Di Carlo, Annarita Papeschi

ilaria.dicarlo@ucl.ac.uk

Interspecies Encounters and Performative Assemblages of Contamination

Our inner mental ecology has long been recognised as fundamental to any meaningful and complete notion of ecology.[1] As the neurosciences have further demonstrated, we first empathise emotionally and physiologically with what surrounds us in a precognitive phase, and only at a later time do we consciously understand the source of our aesthetic experience and, cognitively, its reason and meaning.[2]

In order to investigate the concept of digital and material contaminations as a new way to conceptualise democratic design processes as modes of appropriation and negotiation of space, we have chosen to venture into the epistemological ecotone between aesthetics and cognition, examined through the concept of affect. It is within affects, in fact, that creativity emerges through perception and a cognitive approach to change and social action, “bridging aesthetics and political domain” through a series of encounters between different ecologies and their becoming.[3]

Affect theory speculates that our “life potential comes from the way we can connect with others”, from our connectedness and its intensity, to the point that the very ability to connect with others could be out of our direct control.[4] It is a question of affective attunement, an emergent experience that becomes proto-political,[5] and, like any experience that works through instantaneous assessments of affect, it also becomes strongly connected with notions of aesthetics and cognition.[6] The paper examines how both aesthetics and cognition could instantiate a change of paradigm within affectual and post-humanist approaches to the design of our cities and territories.

Figure 1 – “Ecognosis” (Kehan Cheng, Divya Patel, Hui Tan), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).

THE DIMENSIONS OF POST-HUMANIST AESTHETICS

Aesthetics can be defined according to its field of reference in slightly different ways: in neuroscience, aesthetics is the neural manifestation of a process articulated into sensations, meaning and emotions;[7] in evolutionary biology, it is an adaptive system to environmental stimuli;[8] in an ecological discourse, it is the capacity to respond to the patterns which connect;[9] in philosophy, and specifically in the context of Object-Oriented Ontology, it is the root of all philosophy.[10] Above all, regardless of the framework of reference, aesthetics fundamentally represents a form of knowledge, and as such it is a very powerful and uncanny conceptual device.

The choice to connect the topic of ecology with aesthetics is not only related to the idea that aesthetics is primarily a form of knowledge and because “any ecologic discourse must be aesthetic as well as ethical in order to be meaningful”,[11] but also because aesthetics has the power to attract affects and to convey difficult or ambiguous concepts, like those feelings of ambivalence that often come along with the ecological debate. As Morton states, the aesthetic experience “provides a model for the kind of coexistence ecological ethics and politics wants to achieve between humans and nonhumans […] as if you could somehow feel that un-feelability, in the aesthetic experience”.[12] As a form of semiotic and experiential information exchange, the aesthetic experience is our primary source of genuine human understanding.

Neuroscientist Damasio demonstrates through a compelling series of scientific studies how emotions are essential to rational thinking and social behaviour.[13] In addition, the embodied simulation theory teaches us that in a precognitive phase we first empathise emotionally and physiologically with what surrounds us and only at a later stage understand consciously the source of our aesthetic experience and, cognitively, its reason and meaning.

“Our capacity to understand the others and what the others materially accomplish does not depend exclusively on theoretical-linguistic competences, but it strongly depends on our socio-relational nature, of which corporeity constitutes the deepest and not further reducible structure. … In this sense, the aesthetic experience is a process on multiple levels which exceeds a purely visual analysis and leans on the visceral-motor and somatomotor resonation of whoever experiences it.”[14]

In other words, the theory speculates that the same neural structures involved in our bodily experiences, our sensing, contribute to the conceptualisation of what we observe in the world around us.

Aesthetics, however, is neither a competence nor an ability nor a property exclusive to human nature; it depends only on the different sensing apparatus of each agency – on what the proto-ecologist von Uexküll defined as the Umwelt, a specific model of the world corresponding to a given creature’s sensorium.[15] Being aware of this aesthetic “perceptual reciprocity”,[16] of this condition of mutual affects towards the environment, opens up new perspectives of solidarity in which multiple agencies – each living through multiple temporalities, each with its own “way of worlding”[17] – participate in the remaking of the planet through their patterns of growth and reproduction, their polyarchic assemblages, their territories of action and their landscapes of affects. In fact, we need to acknowledge that the environment is constituted by an ecology of different forms of intelligence, in which humans are just one form of biochemical intensity.[18]

This expanded notion of agency is further enriched by Bennett’s vital materialism, which by ascribing to non-living systems their own trajectories and potentials, defines a multidimensional gradient that includes not only human and biological intelligences, but the natural and the artificial, raw matter and machinic intelligence, revealing opportunities of intersection, contamination, and collaboration.[19] Her thought is about the need to recognise the vital agency of matter “as the alien quality of our own flesh”,[20] and a part of that “Parliament of Things” or “Vascularised Collective” mentioned by Latour in his Actor Network Theory.[21]

This radical understanding of agency as a confederation of human and nonhuman elements, biological and artificial entities, leads to some critical questions regarding equality, accountability and moral responsibility. As a form of rhizomatic Animism,[22] it aims to reclaim and honour the mesh of connections and “assemblages that generate metamorphic transformation in the capacity to affect and be affected – and also to feel, think, and imagine”. And it is this capacity to affect and be affected that once again emerges as the effectual and necessary catalyst for creation and change, since affects are implicated in all modes of experience as a dimension of becoming: located in a non-conscious “zone of indistinction” between action and thought, they fully participate in cognitive processes.[23]

This is a pervasive process that affects all scales of being, singular and choral, from the mesoscale of large planetary processes down to the nano-mechanisms of molecular self-organisation, entailing a new worldly disposition towards the nature of being collective. And it is precisely because of the trans-scalar and concurrent effects that this extended notion of agency produces, while processing new interpretations and understandings of the world, that, when considering its impact on ideas of the negotiation and democratisation of space, we should interrogate not only the larger mechanisms of collective sense- and decision-making, but the very processes of cognition, communication and information exchange at their basis.

Figures 2–4 – “Civic Sensorium” (Songlun He, Dhruval Shah, Qirui Wang), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).

PERFORMING THE MANY VOICES

In recent publications, Hayles describes the idea of a cognitive non-conscious: the possibility for complex systems to perform functions that “if performed by conscious entities would be unquestionably called cognitive”.[24] Drawing on artificial and biological examples, she explores a series of complex, adaptive and intention-driven organisations that, performing within the domain of evolutionary dynamics, exhibit cognitive capacities operating at a level inaccessible to introspection. Within this context, she explains, human interpretation might enter algorithmic analysis at different stages, in a sort of dialogue that de facto structures the potential outcomes of a hybrid cognitive process. Part of the interpretation might be outsourced to the cognitive non-conscious, intimately linking the meaning of the information produced to the specific mechanisms and context of the interpretation, and opening multiple new opportunities for interpreting ambiguous information.[25]

Indeed, the argument about the potential and the perils of automation for decision-making is as relevant as it is controversial today. Parisi is significantly more critical of current practices of human-machine collaboration, warning of the dangers of granular machine-generated content amplifying existing bias or, worse, being redirected towards purposes not known in advance. “Even if algorithms perform non-conscious intelligence, it does not mean that they act mindlessly”, she argues.[26] Building on Hayles’ argument, she further elaborates that while it is not possible to argue that cognition performed by non-conscious entities is coherent and able to link past and present in causal connection, it is possible for non-conscious cognition to expose “temporal lapses that are not immediately accessible to conscious human cognition”. This is a process that sees algorithms not just adapting passively to the data provided but establishing new patterns of meaning, forming co-evolutionary cognitive infrastructures that, based on the idea of indeterminacy as a model for automated and hybrid cognition, avoid the primary level of feedback based on prescriptive outcomes and incorporate parallelism of learning and processing.[27]

These arguments acquire particular relevance when considered in combination with the theory of information expressed by Simondon. Formulated as an antagonist to Shannon’s cybernetic theory of communication, it argues that information is never found but is always expressed through a process of individuation of the system, as the result of the tensions between the realities that compose the system itself – as the very notion that clarifies the modes through which these realities might become a system in the first instance. Drawing on Simondon’s notion of individuation as the process of social becoming that leads to the formation of the collective subject – the transindividual – this process is inherently metastable, as it emerges from the tension between the sensorial abilities of the system and its tropism.[28]

As such, Simondon’s notion of transindividuality constitutes the basis for a radical reimagination of the process of becoming collective and building collective knowledge,[29] and through its intersection with the speculative opportunities inherent in ideas of tropistic material computation, it also offers the potential for an emergent rearticulation of collective sense and decision making, ultimately offering a protocol towards the exploration of the material, technological and aesthetic dimensions of new post-human and pervasive forms of authorship.

Attempting to account for the multidimensional consequences of altering the creative process through the construction of collective authorship as an inherently transindividual practice, the points made above imply a series of strategies oriented toward the definition of emergent meaning, potentially able to capture the weaker voices and signals. These include a focus on the diverse sensual and affectual experience of the participants, an orientation towards procedural indeterminacy, and the exploration of material intelligence.

Furthermore, if we consider these strategies in their intersection with our initial idea of the environment as constituted by an ecology of different forms of intelligence – where the creation of aesthetic assemblages of collaborative agencies is intended as the entangled construction of space, time and value through the symbiosis of different forms of intelligence, defined by open-endedness and inclusiveness – these ideas describe a new urban paradigm. Here the notion and aesthetic language of single human authorship with intellectual ownership is substituted by the concept of a collective of human and non-human ecologies, one that might recover aesthetics’ real, fundamental meaning as an ecological category.

It is with the acceptance of these mixtures, interchanges and crossings of energies that we can finally observe the old notion of quality, as an essential and pure identity related to cathartic categories, giving way to a more diffused and impure version of itself: a definition of quality related not so much to pureness, homogeneity, uniformity and refinement as to a more complex meaning of sophistication by collaboration, contamination and the exploitation of multiple resonances and superimpositions.[30]

As Lowenhaupt Tsing advocates, learning to look at multi-species worlds could lead to different types of production-based economies: “Purity is not an option if we want to pursue a meaningful, informed ecological discourse. We must acknowledge that contaminations are a form of liveable or bearable collaborations. ‘Survival’ requires liveable collaborations. Collaboration means working across differences which leads to contamination.”[31]

These domains and agencies, searched for across other species, other ecological intensities and other modes of cognition, and reconfigured through computational technology, respond to a different kind of beauty: a filthy one, a revolutionary one, an ecologic one. One that, as Morton preaches, “must be fringed with some kind of slight disgust … A world of seduction and repulsion rather than authority”.[32]

According to Guattari, such ecosophic aesthetic paradigms – these collective assemblages or abstract machines, working transversally on different levels of existence and collaboration – would organise a reinvention of social and ecological practices, offering opportunities for dialogue among different forms of ecological intensity.[33] They would also instantiate processes that give back to humanity a sense of responsibility, not just towards the planet and its living beings, but also towards that immaterial component which constitutes consciousness and knowledge. Such a change of perspective in terms of critical agency would inevitably bring along a change in what Jacques Rancière calls the distribution of the sensible – where sensible is understood as “perceptible or appreciable by the senses or by the mind”, in a definition that describes new forms of inclusion and exclusion of the human and non-human collectivity in the process of appropriation of reality.[34] And since access to a different distribution of the sensible is “the political instrument par excellence against monopoly”,[35] we should treasure it for its capacity to allow us, borrowing Thomas Saraceno’s words, “to tune in to the non-human voices that join ours in boundless connectivity canvases, … proposing the rhizomatic web of life, which highlights hybridisms between one species and another and between species and worlds”.[36] This is a process that describes new trajectories for new forms of institutions, where we shall consider not just individual democracy, but a democracy extended to other species, talking to us through the language of the machines.

Figures 5–7 – “Ecognosis” (Kehan Cheng, Divya Patel, Hui Tan), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).

TOWARDS CO-CREATIVE AFFECTUAL PRACTICES

Along these trajectories, when approaching world- and space-making strategies, design processes are translated into an “entangled” construction of space, time, value and resources, critically defined by the very processes of their formation. In such a perspective, artificial intelligence has the potential to become the enabler, the instantiator, of a new, wider democratic process potentially able to disrupt existing power structures, giving a voice to what currently has none: all the non-conscious agencies separate from humankind or its direct will.

This is a new form of authorship which shifts the question to the final user, so that the inquiry is not so much what the user wants from the environment as what the user can do for the environment – an idea that inverts the role of the final user from consumer to service provider. Such a form of authorship takes place in a symbiosis of computational and non-computational forms of thinking, as a hybrid of diverse modes of cognition, resulting in a new type of synthetic ecology: the one that the designer enables.

In such a context, digital design platforms work as co-evolutionary cognitive infrastructures dealing with an amalgamation of different types of resource thinking: the thinking coming from the machines, the thinking coming from human participants, and that converging from other ecological intensities. This is a type of transindividual subjectivity that, formed as an ecology of diverse forms of cognition, is choral, decentralised and inclusive, and has the capacity to transmit tacit or informed knowledge, exposing new models of democratic collective decision- and sense-making. In this process, all the participating forms of cognition have the potential to learn from each other and to compose unexpected dialogues and collective knowledge – what we call “interfaces [i/f], physical/virtual devices, a platform, enabling communications among entities of different kinds each one with its own protocol of communication, knowledge, and values”.[37] This is an approach to collective creation that, drawing on alternative ideas of communication and power between the participating agencies, maps the emergence and evolution of patterns of informed feedback, outlining connections with ideas of learning and performative collaboration between human, synthetic and biological agencies. In exploring these new forms of authorship, designers face the challenge of orchestrating a process able to build fruitful associations between machine computation, genuine human understanding and non-conscious cognitive agencies – a challenge that should be taken as an opportunity to construct open processes of self-reflection and learning.

The resulting Transindividualities – digital participatory contributions to ecological and post-humanist scholarship – create the potential for the affirmation of novel mediated narratives,[38] which, by challenging the responsibility of authorship, bring along a new definition of the Human and the need to reframe the design of our cities and territories towards a Pervasive Affectual Urbanism, one that points toward the urge for a new ethos and a new aesthetics.

The challenge will perhaps best be approached by objecting to the idea that the designer is exclusively and ultimately responsible for the design process, and by sustaining the hypothesis that the symbiosis between all the different types of ecologies inhabiting a space could welcome all sorts of different agents through a creative process that embraces indeterminacy. It will be about the belief that open-endedness, contamination, interaction, machine learning and genuine human understanding are not so much about consensus as about layering and celebrating differences, to best use all of them as resources toward the participatory project of space-making. It will be about praising quality as sophistication – by acceptance, negotiation, exploitation and rhizomatic contamination of multiple resonances and superimpositions – where the value of the project lies in information that is not merely exchanged but used to create anew.

Figures 8–10 – “Emotional Dynamics” (Xuanbei He, Zixi Li, Shan Lu), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).

References

[1] F. Guattari, The Three Ecologies (London: The Athlone Press, 1987).

[2] A. Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (London: Putnam Pub Group, 1994).

V. Gallese, “Embodied Simulation: from Neurons to phenomenal experience”, in Phenomenology and the Cognitive Sciences 4 (Berlin: Springer, 2005), 23–48.

[3] B. Massumi, Politics of Affect (Cambridge: Polity Press, 2015).

[4] Ibid.

[5] E. Manning, interviewed in B. Massumi, Politics of Affect (Cambridge: Polity Press, 2015), 135.

[6] B. Massumi, Politics of Affect (Cambridge: Polity Press, 2015).

[7] A. Chatterjee, The Aesthetic Brain: How We Evolved to Desire Beauty and Enjoy Art (Oxford: Oxford University Press, 2015).

[8] G. H. Orians, “An Ecological and Evolutionary Approach to Landscape Aesthetics”, in E. C. Penning-Rowsell, D. Lowenthal (Eds.), Landscape Meanings and Values (London: Allen and Unwin), 3–25.

[9] G. Bateson, Steps to an Ecology of Mind (London: Wildwood House, 1979).

[10] G. Harman, “Aesthetics as a First Philosophy: Levinas and the non-human”, Naked Punch (2012), http://www.nakedpunch.com/articles/147, accessed 3 Feb. 2020.

[11] F. Guattari, The Three Ecologies (London: The Athlone Press, 1987).

[12] T. Morton, All Art is Ecological (Milton Keynes: Penguin Books, Green Ideas, 2021).

[13] A. Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (London: Putnam Pub Group, 1994).

[14] V. Gallese, “Embodied Simulation: from Neurons to phenomenal experience”, in Phenomenology and the Cognitive Sciences 4 (Berlin: Springer, 2005), 23–48.

[15] J. von Uexküll, A Foray into the Worlds of Animals and Humans (Minneapolis: University of Minnesota Press, 2010).

[16] D. Abram, The Spell of the Sensuous: Perception and Language in a More-Than-Human World (New York: Vintage Books, 1997).

[17] B. Latour, Down to Earth: Politics in the New Climatic Regime (Cambridge: Polity Press, 2018).

[18] I. Di Carlo, “The Aesthetics of Sustainability. Systemic thinking and self-organization in the evolution of cities”, 2016, PhD thesis, University of Trento, IAAC, Barcelona, Spain.

[19] J. Bennett, Vibrant Matter: A Political Ecology of Things (Durham, N.C. and London: Duke University Press, 2010).

[20] Ibid.

[21] B. Latour, We have never been modern (Cambridge: Harvard University Press, 1993).

[22] I. Stengers, “Reclaiming Animism”, e-flux, 2012, https://www.eflux.com/journal/36/61245/reclaiming-animism/, accessed 10 Oct. 2021.

[23] B. Massumi, Ontopower: War, Power, and the State of Perception (Durham N.C.: Duke University Press, 2015).

[24] K. N. Hayles, “Cognition Everywhere: The Rise of the Cognitive Non-conscious and the Costs of Consciousness”, New Literary History 45, 2 (2014).

[25] Ibid.

[26] L. Parisi, “Reprogramming Decisionism”, e-flux, 2017, https://www.e-flux.com/journal/85/155472/reprogramming-decisionism.

[27] Ibid.

[28] G. Simondon, L’individuazione psichica e collettiva, ed. and transl. P. Virno, (Rome: DeriveApprodi, 2001).

[29] A. Papeschi, “Transindividual Urbanism: Novel territories of digital participatory practice”, Proceedings from Space and Digital reality: Ideas, representations/applications and fabrications, 2019, 80-90.

[30] I. Di Carlo, “The Aesthetics of Sustainability. Systemic thinking and self-organization in the evolution of cities”, 2016, PhD thesis, University of Trento, IAAC, Barcelona, Spain.

[31] A. Lowenhaupt Tsing, The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins (Princeton, N.J.: Princeton University Press, 2017).

[32] T. Morton, All Art is Ecological (Milton Keynes: Penguin Books, Green Ideas, 2021).

[33] F. Guattari, Chaosmosis. An ethico-aesthetic paradigm (Sydney: Power Publications, 1995).

[34] J. Rancière, The Politics of Aesthetics (New York: Continuum, 2014).

[35] Ibid.

[36] T. Saraceno, “Aria”, Catalogue of the exhibition at Palazzo Strozzi Firenze (Venezia: Edizioni Marsilio, 2020).

[37] I. Di Carlo, “The Aesthetics of Sustainability. Systemic thinking and self-organization in the evolution of cities”, 2016, PhD thesis, University of Trento, IAAC, Barcelona, Spain.

[38] A. Papeschi, “Transindividual Urbanism: Novel territories of digital participatory practice”, Proceedings from Space and Digital reality: Ideas, representations/applications and fabrications, 2019, 80-90.

Prospectives Writing Style Guide
24/05/2022
author guidelines, punctuation, referencing, spelling, style guide, writing style
Provides Ng

provides.ng.19@ucl.ac.uk

The purpose of this guide is to help authors ensure consistency with Prospectives issues. It includes the most contentious areas of spelling, punctuation and formatting. For more general guidance on tone and style, please consult the UCL Author Guidelines and Content Style Guide. Where this guide differs from the UCL Author Guidelines or Content Style Guide, please use this document. If helpful, you can also consult Issue 1 of Prospectives: https://journal.b-pro.org/issue/issue1

Retrofit Project by Frederik Vandyck, Design Sciences Hub
Towards the computation of architectural liberty  
29/04/2022
architectural liberty, automation, computation, design theory, fragmentation
Sven Verbruggen, Elien Vissers-Similon

sven.verbruggen@uantwerpen.be

A design process consists of a conventionalised practice – a process of (personal) habits that have proven successful – combined with a quest for creative and innovative actions. As tasks within the field of architecture and urban design become more complex, professionals tend to specialise in one of many subsets, such as designing, modelling, engineering, managing, construction, etc., and use digital tools developed for these specialised tasks only. Therefore, paradoxically, automation and new algorithms in architecture and urbanism are primarily oriented towards simplifying tasks within subsets rather than engaging with the complex challenges the field is facing. This fragmented landscape of digital technologies, together with the lack of proper data, hinders professionals’ and developers’ ability to investigate the full digital potential of architecture and urban design. [1] Today, while designers explore the aid that digital technologies can provide, it is mostly the conventionalised part of practice that is being automated, to achieve a more efficient workflow. This position statement argues for a different approach: to overcome fragmentation and discuss the preconditions for truly coping with complexity in design – not a visual complexity, nor a complexity of form, but a complexity of intentions, performance and engagement, constituted in a large set of parameters. We will substantiate our statement with experience from practice, reflecting on the Retrofit Project: our goal to develop a smart tool that supports the design of energy-neutral districts. [2]

So, can designers break free from the established fragmentation and compute more than technical rationale, regulations and socio-economic constraints? Can they also incorporate intentions of aesthetics, representation, culture and critical intelligence into an architectural algorithm? To do so, the focus of digital tools should shift from efficiency to good architecture. And to compute good architecture, there is a need to codify a designer’s evaluation system: a prescriptive method to navigate a design process by giving value to every design decision. This evaluation system ought to incorporate architectural liberty – and therein lies the biggest challenge: differentiating between where to apply conventionalised design decisions and where (and how) to be creative or inventive. Within a 5000-year-old profession, the permitted liberty for these creative acts has been defined elastically: while some treatises allow a designing architect only a minimum of liberty, others lean towards a maximum of liberty to guarantee good architecture. [3]
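To make the notion of a codified evaluation system concrete, the minimal sketch below scores every design decision against a set of weighted criteria. It is an illustration only: the criteria, weights and scores are hypothetical, and the “liberty” criteria (aesthetics, culture) are precisely the ones whose quantification remains an open problem.

```python
from dataclasses import dataclass

# Hypothetical criteria and weights; not an actual, validated evaluation system.
WEIGHTS = {"energy": 0.3, "cost": 0.2, "aesthetics": 0.25, "culture": 0.25}

@dataclass
class Decision:
    label: str
    scores: dict  # criterion -> score in [0, 1], asserted by the designer

def value(decision: Decision) -> float:
    """Weighted value of a single design decision."""
    return sum(w * decision.scores.get(c, 0.0) for c, w in WEIGHTS.items())

# A toy design path: navigate the process by valuing each decision in turn.
path = [
    Decision("courtyard type", {"energy": 0.6, "cost": 0.7, "aesthetics": 0.8, "culture": 0.9}),
    Decision("green roof", {"energy": 0.9, "cost": 0.4, "aesthetics": 0.6, "culture": 0.5}),
]
print([round(value(d), 2) for d in path])
```

The hard part, as argued above, is not this bookkeeping but deciding which criteria may be conventionalised and which must remain open to creative deviation.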

A minor group of early adopters, such as Greg Lynn, Zaha Hadid Architects and UN Studio, tried to tackle the field’s complexity with the emerging digital technologies of the late 1990s and early 2000s, conveniently inferring their new style or signature architecture from these computational techniques. This inference, however, causes an instant divide between existing design currents and these avant-garde styles, whose claim to complexity – the justification for their computational techniques – lies mostly within the subset of form-giving and does not cover the complexity of the field. This stylistic path is visible in, for example, Zaha Hadid Architects’ 2006 masterplan for Kartal-Pendik in Istanbul. The design thrives on binary decisions in the 3D-modelling tool Maya, where it plays out a maximum of two parameters at once: the building block with inner court, and the tower. The resulting plastic urban mesh looks novel and stylistically intriguing, yet produces no real urbanity and contains no intelligence at the level of the building type. The methodology generates no knowledge of how well the proposed urban quarter (or its constituent buildings) will perform in terms of, for example, costs, energy production and consumption, infrastructure, city utilities, diversity and health. The fluid mass still needs all the conventional design operations to effectively turn it into a mixture of types, urban functions and local identity. Arguably, the early adopters’ stylistic path avoided dealing with real complexity and remained close to simple automation; in doing so, while they promoted a digital turn, they may also have dug the foundations for today’s fragmentation in the field.
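The kind of two-parameter play described above can be caricatured in a few lines of code: a smooth field drives only building height and inner-court ratio across a mesh, and nothing else. The numbers and the driver field are invented for illustration; the point is how little urban intelligence such a model encodes.

```python
import math

def cell(u: float, v: float) -> dict:
    """Blend a mesh cell between courtyard block (t=0) and tower (t=1)."""
    t = 0.25 * (math.sin(3 * u) + math.cos(2 * v)) + 0.5  # smooth driver field in [0, 1]
    return {
        "height_m": 12 + t * 88,       # low perimeter block -> tall tower
        "court_ratio": (1 - t) * 0.5,  # inner court shrinks as the tower emerges
    }

mesh = [[cell(i / 9, j / 9) for j in range(10)] for i in range(10)]
print(mesh[0][0], mesh[9][9])
# Costs, energy, utilities, diversity, health: none of these appear as parameters.
```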

Ironically, to some extent Schumacher’s treatise – certainly the parts that promote parametricism as a style – reads as a cover-up of the shortcomings of parametric software; for example, its inability to produce local diversity and typological characteristics beyond formal plasticity. [4] Schumacher further rejects Vitruvius, to prevent structural rationale from taking primacy, and disavows composition, harmony and proportion as outdated variable communication structures, proposing “fluid space” as the new norm. [5] This only makes sense given that the alternative – a higher intelligence across the whole field of architecture and urban planning, such as codified data and machine-learning algorithms – did not yet exist for the early adopters. Contemporary applications such as Delve or Hypar do make use of such intelligent algorithms, yet prioritise technical and economic parameters (e.g., daylight, density, costs) in order to market efficiency. [6]

Any endeavour to overcome the established fragmentation and simplified automation will ultimately find itself struggling with the question of what good architecture is. After all, even with large computational power at hand, the question remains: how can design decisions be evaluated beyond the merely personal or functional, in a time when no unified design theory exists? In fact, the fragmented specialisation of today’s professionals has popularised the proclamation of efficiency. As a result, an efficiency driver (whether geared to controlling costs, management or resources) is often disguised as moral behaviour, as if its interest were good architecture first, with the profit and needs of beneficiaries only coming second. If the added value of good architecture cannot be defined, the efficiency driver will continue to gain the upper hand, eroding the architectural profession into an engineering and construction service providing calculations, permits and execution drawings.

It was inspiring to encounter Alessandro Bava’s Computational Tendencies on this matter:  

The definition of what constitutes “good” architecture is, in fact, always at the center of architecture discourse, despite never finding a definite answer. Discourses around digital architecture have too often resolved the question of the “good” in architecture by escaping into the realm of taste or artistic judgment. [7] 

Bava renders Serlio’s architectural treatise as an original evaluating system that attributes universal value, and revisits Rossi’s exalted rationalism to propose a merger of architecture’s scientific qualities with its artistic ones. He aims to re-establish architecture’s habitat-forming abilities and to prevent architecture from becoming an amalgam of reduced and fragmented services. However, Serlio’s treatise did not provide a fully codified and closed formal system, as it still includes the liberty of the architect. [8] Going through Serlio’s On Domestic Architecture, the emphasis is on ideal building types, mostly without context; no consideration is given to how these types ought to be modified when they must be fitted into less ideal configurations, such as non-orthogonal grids. The books also remain silent on the exceptions: the corner-piece type, or the fitting parts that mediate between buildings and squares at a higher level. This is not a cheap critique of Serlio’s work; it is an awareness one needs when revisiting that work as a “proto-BIM system, one whose core values are not market availability or construction efficiency, but harmonic proportions”. [9] Arguably, it is the liberty, the modifications and the exceptions that need to be codified to reach beyond simplified automation, across fragmentation, and towards an architectural algorithm that assists designers.

This is easier said than done, otherwise the market would be flooded with design technologies by now. As with most design problems, the only way to solve them is to tackle them in practice. In 2021, the Design Sciences Hub, affiliated with the University of Antwerp, set up the Retrofit Project. The aim is to develop an application to test the feasibility of district developments. The solution will show an urban plan with an automatically generated function mix and an optimised energetic and ecological footprint for any given site and context. The project team collaborates with machine-learning experts and environmental engineers for the necessary interdisciplinary execution. Retrofit is currently in the proof-of-concept phase, which focuses on energy neutrality and will tackle urban health and carbon neutrality in the long run.

The problem of modifications and exceptions seems the easiest to examine, as it primarily translates into a challenge of computational power and of coping with a multitude of parameters. However, these algorithms should be smart enough to select a specific range within the necessary modifications and exceptions to comply with the design task at hand. In this case, the algorithm should select the correct modifications and exceptions needed to integrate certain types into any given site within the Retrofit application. In other words, there is a need for an intelligent algorithm that can be fed a large number of types as input data to generate entirely new or appropriate building types. The catch resides in the word “intelligent”: algorithms are not created intelligent, they are trained to reach a certain level of intelligence based on (1) codifiable theory and (2) relevant training sets of data. Inquiring into a variety of evaluation systems for architectural design that have emerged over the last 40 years, Verbruggen revealed the impossibility of creating a closed theoretical framework and of uniquely relating such a framework to a conventionalised evaluation system in practice. [10] As such, both the codifiable theory – a unified evaluation system that integrates scientific and artistic qualities into one set of rules – and the training sets hardly exist in architecture and urban design. To complicate matters even more, today’s non-unification is itself often embraced as the precondition for good architecture. [11-15]

And so, the liberty question emerges here once again: how can different types, their modifications and exceptions, including their respective relationships with different contexts, be codified? It is easy to talk about codification, but much harder to implement it within a project. When different types are inserted into a database, how are the attributes defined? This task proved to be very laborious and raised many new questions in the Retrofit Project. Attributes will include shape and size, yet might also include levels of privacy, preferred material usage, degree of openness, average energetic performance, historic and social acceptance in specific areas, compatibility with different functions, etc. Which values define when and where a specific type is appropriate, and how are they weighed? Do architects alone fill up the database, and if so, which architect is qualified, and why? And when an AI application examines existing typologies within our built environment, which of these examples should be considered good, and why? Can big data or IoT sensors help in data gathering? To truly take everything into account, how much data do we really need (e.g., a structure’s age and condition, social importance, usage, materials, history)? Furthermore, when the Retrofit application runs on an artificially intelligent algorithm trained to think beyond the capabilities of a single architect, will the results diverge (too) much from what society is used to?
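A minimal sketch of what such a codified type database might look like is given below. The schema is hypothetical: the attributes, their ranges and the crude admissibility test illustrate the questions raised above, and are not the Retrofit Project’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class BuildingType:
    name: str
    footprint_m2: float
    storeys: int
    privacy: float            # 0 = fully public .. 1 = fully private
    openness: float           # facade openness ratio
    kwh_per_m2_year: float    # average energetic performance
    functions: set = field(default_factory=set)

def admissible(t: BuildingType, site_max_storeys: int, required: set) -> bool:
    """Crude test: does a codified type suit a given site? The real criteria
    (historic acceptance, compatibility, weighting) remain open questions."""
    return t.storeys <= site_max_storeys and required <= t.functions

corner = BuildingType("corner piece", 420.0, 5, 0.4, 0.35, 45.0, {"housing", "retail"})
print(admissible(corner, site_max_storeys=6, required={"housing"}))  # True
```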

The many practical questions raised by the Retrofit Project show that defining the architect’s liberty is both the problem and the source of the potential for digital technologies to tackle the true complexity of the field. Liberty is undeniably linked to the design process; therefore, encoding a design process needs to (1) capture the architect’s evaluation system and (2) allow for targeted and smart data gathering. The evaluation system can then be coded into an algorithm, with the help of machine-learning experts, and trained using the gathered data. Both the evaluation system and the necessary data rely heavily on the architect’s liberty. Because dealing with these liberties is a difficult task – perhaps the most difficult task in the age of digital architecture – many contemporary businesses and start-ups that claim to revolutionise the design process with innovative technologies might not revolutionise anything, because they opt for the easy route and avoid the liberty aspect. An architectural algorithm that does take the liberty aspect into account may provide designers with an artificial assistant that helps tackle all the complexities of the field while tapping into the full potential of today’s available computational power.

This could be the ultimate task we set ourselves at the DSH. Studying a large dataset of design processes, steps and creative acts might reveal codifiable patterns that could be integrated into a unified and conventionalised evaluation system. This study would target large and diverse groups of designers and users in general, including their knowledge exchange with other involved professionals. Could such an integral evaluation system, combined with data gathering, finally offer the prospect of developing a truly architectural algorithm? Eventually, this too will encounter issues that require further study, such as deciding whom to involve and how to navigate wisely between the highs and lows of the wisdom of crowds: [16] can we still trust the emerging patterns detected by machine-learning algorithms to constitute proper architectural liberty and, thus, good architecture? We will proceed vigilantly, but we must explore this path to avoid further fragmentation, non-crucial automation, and the propagation of false complexity.

References

[1] N. Leach, Architecture in the Age of Artificial Intelligence: An Introduction for Architects (London; New York: Bloomsbury Visual Arts, 2021).

[2] The Design Sciences Hub [DSH] is a valorisation team of the Antwerp Valorisation Office. The DSH works closely with IDLab Antwerp for Machine Learning components and with the UAntwerp research group Energy and Materials in Infrastructure and Buildings [EMIB] to study energy neutrality within the Retrofit Project. Although the project will be led and executed by the University of Antwerp, the private industry is involved as well. Four real estate partners – Bopro, Immogra, Quares and Vooruitzicht – are financing and steering this project. So is the Beacon, maximizing the insights from digital technology companies. Also see: https://www.uantwerpen.be/en/projects/project-design-sciences-hub/projects/retrofit/

[3] H.W. Kruft, A History of Architectural Theory: from Vitruvius to the present (London; New York: Zwemmer Princeton Architectural Press, 1994).

[4] P. Schumacher, The Autopoiesis of Architecture: A New Framework for Architecture. Vol. 1 (Chichester: John Wiley & Sons Ltd, 2011). P. Schumacher, The Autopoiesis of Architecture: A New Agenda for Architecture. Vol. 2 (Chichester: John Wiley & Sons Ltd, 2012).

[5] Ibid.

[6] Delve is a product of Sidewalk Labs, founded as Google’s urban innovation lab, becoming an Alphabet company in 2016. Hypar is a building generator application started by former Autodesk and Happold engineer Ian Keough. Also see www.hypar.io, www.sidewalklabs.com/delve.

[7] A. Bava, “Computational Tendencies”, in N. Axel, T. Geisler, N. Hirsch, A. L. Rezende (Eds.), Exhibition Catalogue of the 26th Biennial of Design Ljubljana, Slovenia (e-flux Architecture and BIO26| Common Knowledge, 2020).

[8] H.W. Kruft, A History of Architectural Theory: from Vitruvius to the present (London; New York: Zwemmer Princeton Architectural Press, 1994).

[9] A. Bava, “Computational Tendencies”, in N. Axel, T. Geisler, N. Hirsch, A. L. Rezende (Eds.), Exhibition Catalogue of the 26th Biennial of Design Ljubljana, Slovenia (e-flux Architecture and BIO26| Common Knowledge, 2020).

[10] S. Verbruggen, The Critical Residue: Creativity and Order in Architectural Design Theories 1972-2012 (2017).

[11] M. Gausa, S. Cros, Operative Optimism (Barcelona: Actar, 2005).

[12] W. S. Saunders (Ed.), The New Architectural Pragmatism: A Harvard Design Magazine Reader (Minneapolis: University of Minnesota Press, 2007).

[13] R. Somol, S. Whiting, “Notes around the Doppler Effect and Other Moods of Modernism” (2002), in K. Sykes (Ed.), Constructing a New Agenda: Architectural Theory 1993-2009 (New York: Princeton Architectural Press, 2010), 188-203.

[14] K. Sykes (Ed.), Constructing a New Agenda: Architectural Theory 1993-2009 (New York: Princeton Architectural Press, 2010).

[15] S. Whiting, “The Projective, Judgment and Legibility”, lecture at the Projective Landscape conference, organised by TU Delft and the Stylos Foundation, Delft, March 2006.

[16] P. Mavrodiev, F. Schweitzer, “Enhanced or distorted wisdom of crowds? An agent-based model of opinion formation under social influence”, Swarm Intelligence 15, 1-2 (2021), 31-46, doi:10.1007/s11721-021-00189-3; J. Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter Than the Few (London: Abacus, 2005).

Fig. 2 – Norman Foster’s sketch illustrates the generative process: each floor is rotated by 5° relative to the one below around the central core, which carries the pillars bearing the vertical loads, the services, the stairs and the lifts. From the core, six “spokes” host the floorspace at each level, each detached from the next by a void triangular area about 20° wide. The vertically open areas create light wells running the height of the tower, up to the thirty-second floor, and wind in coils that channel ventilation and natural light into the building.
The Architect and the Digital: Are We Entering an Era of Computational Empiricism? 
architectural design theory and practice, case study/studies, design education, design methods, digital design, parametric design
Giovanni Corbellini, Luca Caneparo

giovanni.corbellini@polito.it

The close integration of design with computational methods is not just transforming the relationships between architecture and engineering; it also contributes to reshaping modes of knowledge development. This paper critically probes some issues related to this paradigm shift and its consequences on architectural practice and self-awareness, looking at the potential of typical teaching approaches facing the digital revolution. The authors, who teach an architectural design studio together, coming from different backgrounds and research fields, probe the topic according to their respective vantage points. 

Over the last few decades, a mode of design agency has developed that uses digital tools for the interactive generation of solutions, dynamically linking analytic and/or synthetic techniques.

The analytic techniques make use of simulation: the capability to forecast certain aspects of building performance. While in conventional practice simulation usually plays a consulting role in the later stages of the design process, in the new forms of agency it works as a generative device from the earliest phases.

The synthetic techniques address, on the other hand, more organic, para-biological concepts – for instance “emergence, self-organization and form-finding” – looking for “benefits derived from redundancy and differentiation and the capability to sustain multiple simultaneous functions”. [1]
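The analytic, simulation-driven side of this agency reduces, at its simplest, to a generate-evaluate loop in which the simulator acts as the generative device. The sketch below is a deliberately minimal illustration with an invented stand-in performance model; a real workflow would call a daylight or energy engine at the marked point.

```python
import random

def generate() -> dict:
    """Propose a candidate form from a few driving parameters."""
    return {"depth_m": random.uniform(8, 20), "glazing": random.uniform(0.2, 0.8)}

def simulate(c: dict) -> float:
    """Invented stand-in: reward daylight (shallow plans, more glazing)
    against heat loss (more glazing). A real agency would invoke an
    energy/daylight simulation engine here."""
    daylight = c["glazing"] / c["depth_m"]
    heat_loss = 0.5 * c["glazing"]
    return daylight - 0.02 * heat_loss

# Simulation steers generation from the earliest phase: keep the best of many.
best = max((generate() for _ in range(1000)), key=simulate)
print(best)
```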

Structures and their conception stand out as a part of architectural design where the digital impact shows its clearest consequences. Candela, Eiffel, Nervi and Torroja considered, for instance, that calculation has to go in parallel with an intuitive understanding of form. “The calculation of stresses”, writes Torroja, “can only serve to check and to correct the sizes of the structural members as conceived and proposed by the intuition of the designer”. [2] “In this fundamental phase of design”, Nervi adds, “the complex formulas and calculation methods of higher mathematics do not serve. What are essential, however, are rough evaluations based on simplified formulas, or the ability to break down a complex system into several elementary ones”. [3] At the time, the computational aspects were overwhelmingly cumbersome; the Frontón Recoletos required from Torroja one hundred and fifty-eight pages of calculations with approximate methods. Classical analytical procedures provided limited tools for simulation: “It was mandatory for the engineer to supplement his analyses with a great deal of judgment and intuition accumulated over years of experience. Empiricism played a great role in engineering design; while some general theories of mechanical behaviour were available, methods for applying them were still under development, and it was necessary to fall back upon approximation schemes and data taken from numerous tests and experiments”. [4]

After the epoch of Nervi and Torroja, research and practice have been deeply influenced by the combined action of computation toward a unifying approach to the different theories of mechanics, thanks to exponential performance improvements in hardware, as well as achievements in symbolic and matrix languages and in the discretisation methods (e.g., boundary and finite element methods) implemented in software. At present, widely available computational methods and tools can produce numerical simulations of complex forms, with the expectation of providing a certain degree of knowledge and understanding of mechanics, energetics, fluids and acoustics. The compelling possibilities of boundary and finite element methods, plus finite difference and volume methods, have produced a shift from the science-of-construction pioneers’ awareness that not everything can be built [5] to the “unprecedented morphology freedom” of the present. [6] Therefore, “We are limited in what we can build by what we are able to communicate. Many of the problems we now face”, as Hugh Whitehead of Foster and Partners points out, “are problems of language rather than technology. The experience of Swiss Re established successful procedures for communicating design through a geometry method statement”. [7]
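For readers unfamiliar with the discretisation methods named above, the snippet below solves the simplest possible model problem – a one-dimensional Poisson equation, -u''(x) = 1 with u(0) = u(1) = 0 – by central finite differences. It is a textbook illustration of the family of methods, not a building simulation.

```python
import numpy as np

n = 99                                    # interior nodes
h = 1.0 / (n + 1)                         # grid spacing
# Tridiagonal matrix of the central-difference approximation of -u''.
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)                            # load: -u'' = 1
u = np.linalg.solve(A, f)                 # discrete solution
x = np.linspace(h, 1 - h, n)
print(np.abs(u - x * (1 - x) / 2).max())  # exact solution is x(1-x)/2
```

On this model problem the scheme happens to be exact up to round-off; on the complex geometries discussed here, discretisation and modelling errors are exactly what the designer is asked to trust.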

 “Parametric modelling”, Foster and Partners stated, “had a fundamental role in the design of the tower. The parametric 3D computer modelling process works like a conventional numerical spreadsheet. By storing the relationships between the various features of the design and treating these relationships like mathematical equations, it allows any element of the model to be changed and automatically regenerates the model in much the same way that a spreadsheet automatically recalculates any numerical changes. As such, the parametric model becomes a ‘living’ model – one that is constantly responsive to change – offering a degree of design flexibility not previously available. The same technology also allows curved surfaces to be ‘rationalized’ into flat panels, demystifying the structure and building components of highly complex geometric forms so they can be built economically and efficiently”. [8] 
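The “living model” logic that the quotation describes can be suggested with a toy parametric script, using the geometric relationships reported in the Fig. 2 caption (a 5° twist per floor, six spokes, roughly 20° voids). This is an illustrative dependency sketch, not Foster + Partners’ actual modelling system: change one driving parameter and every floor plate regenerates, spreadsheet-style.

```python
def tower(floors: int, twist_deg: float = 5.0, spokes: int = 6, void_deg: float = 20.0):
    """Regenerate all floor plates from a handful of driving parameters."""
    sector = 360.0 / spokes
    plates = []
    for k in range(floors):
        base = k * twist_deg  # each floor rotated relative to the one below
        plates.append([((base + s * sector) % 360.0,
                        (base + s * sector + sector - void_deg) % 360.0)
                       for s in range(spokes)])
    return plates

model = tower(40)                 # generate the model...
model = tower(40, twist_deg=7.5)  # ...change one parameter, regenerate everything
print(model[1][0])                # first spoke of the second floor (start, end angles)
```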

Of course, communication is here understood within a very specific part of the design process, mainly connected with fabrication issues and their optimisation, but it is a concept that involves many layered levels of meaning. [9] Curiously, this shift from the physical to the immaterial recalls the step made by Leon Battista Alberti, who conceived design as a purely intellectual construct and was obsessed with its transmission from idea to built form without information decay. [10] Digital innovation promises to better connect the engineering process (focused on the object) with the wider reality (the architectural perspective), enabling design teams to deal with increasingly complex sets of variables. Freedom comes, however, with the disruption of the design toolbox, usually defined more by constraints than by capabilities, so that the resulting wild fluctuations of effects seem increasingly disconnected from any cause. Design choices are therefore looking for multifaceted narrative support – and the “Gherkin”, with its combination of neo-functional-sustainable storytelling and metaphorical shape, turns out to be emblematic from this point of view too. [11]

Furthermore, extensive numerical simulations raise the question of the extent to which they prove reliable, both because of their intrinsic functionality and because of the “black box” effect of the algorithmic devices. The latter, especially in the latest applications of artificial intelligence such as neural networks, produce results through processes that remain obscure even to their designers, let alone to less aware users. Besides, the coupling of simulation with generative modelling through interactivity may not help the designer understand that, in several cases, (small) changes in the (coded) hypotheses can produce radically different solutions. Thus, the time spent simulating alternatives can be more profitably spent working on different design hypotheses and on architectural, technological and structural premises, perhaps with simpler computational models.
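The sensitivity warned about above is easy to reproduce in the abstract. The toy below runs the same nonlinear recurrence – a stand-in for any coded generative hypothesis, not an architectural model – from two almost identical starting values; within a few dozen iterations the outcomes are uncorrelated.

```python
def iterate(x: float, steps: int = 50) -> float:
    """Logistic map in its chaotic regime: a minimal nonlinear 'hypothesis'."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a, b = iterate(0.300000000), iterate(0.300000001)  # hypotheses differ by 1e-9
print(a, b, abs(a - b))  # radically different 'solutions' from a tiny change
```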

Are we entering an era of computational empiricism, as some authors maintain? [12] 

Languages of innovation 

Generative modelling, morphogenesis, parametric tooling, computational and performative design… all these apparatuses have brought methodological innovation and a closer integration among different disciplines, bridging the gaps between fields. Modelling the project, the main common aim of this effort, has from the beginning leaned on logic and mathematics as a shared lingua franca. [13] Since the 1960s, applied mathematics has extended its applications through the formalisation process of information technology, which has developed the tools and the models beneficial for the purposes of science and technology. Information and communication technology puts into effect “the standardisation and automation of mathematical methods (and, as such, a reversal of the relationship of domination between pure mathematics and applied mathematics and, more generally, between theory and engineering)”. [14]

The redefinition of roles between theories and techniques, when applied to design, began in mathematics and physics with a metamorphosis of language, [15] with a shift towards symbolic languages that have gone beyond the mechanics of structures and the thermodynamics of buildings, subjecting them to automatic calculus and finalising them in computation. [16] “Today, it is a widely held view that the advent of electronic computation has put an end to this semiempirical era of engineering technology: sophisticated mathematical models can now be constructed of some of the most complex physical phenomena and, given a sufficiently large computer budget, numerical results can be produced which are believed to give some indication of the response of the system under investigation”. [17]

The straightforward capability to model and simulate projects, supported by the evidence of results, has given confidence in the emerging computational tools, highlighting the dualism between the desire to make these devices usable for a wide range of practitioners, in a variety of cases and contexts, and the exigency of grounding the bases for deeper understanding within a reflective practice. Moreover, the very nature of digital tools urges designers to face an increasing risk of becoming “alienated workers” who, in Marxian terms, neither actually own their means of production – software companies lease their products and protect them against unauthorised modifications – nor, above all, own them conceptually, since their complex machinery requires specifically dedicated expertise. Therefore, within the many questions this paradigm shift raises about the redefinition of theories and practices and their mutual relationship, a main concern regards educational content and approaches, in terms of their ability to provide useful knowledge to future practitioners and to aid their impact on society. In the architectural design field – which traditionally crossbreeds arts, applied sciences and humanities in order to fulfil a broad role of mediation between needs and desires – this means dealing with an already contradictory pedagogic landscape in which ideologically opposite approaches (namely method-oriented and case-oriented pedagogies) overlap.

The specific case of architectural design teaching does not escape this tension between methodological ambitions, nurtured by modern thinking and its quest for rationalisation, and the interplay between generations, languages and attitudes involved in learning through examples – even with its paradoxical side effects. One would in fact expect a “positive” (according to Christopher Alexander), rule-based training to yield more open-ended outcomes than the “negative”, academic, disciplinary learning by copying. [18] But, on the one hand, the methodological approach implies an idea of linear control – towards optimisation and performance, as well as in social and political terms – which reveals its origin in Enlightenment positivism. The Durandian apparatuses so widespread after World War II, with their proto-algorithmic design grammars, accordingly ended up reproducing strict language genealogies. A similar trend seems to be emerging nowadays in the convergence toward the same effective solutions in arts, sports, and whatever else, as a by-product of digital efficiency – which even the technical camp itself is questioning. On the other hand, tinkering with the interpretation and application of examples makes possible the transmission of the many unspoken and unspeakable aspects connected to any learning endeavour. Getting closer to “good” examples – testing their potential according to specific situations – allows their inner quality to be grasped, reignited in different conditions, and finally transcended. Since forgetting requires something to be forgotten, Alexander is somehow right in framing this teaching attitude as “negative”: ironically, imitation provides the learning experience through which personal voices can emerge and thrive. 

Challenges ahead 

Turpin Bannister considered that in “an age of science”, architects “abandoned the scientific approach as detrimental to the pure art of design. On even the simplest question they acquiesced to their engineer and so-called experts”. [19] The pervasive penetration of computation in design would probably have met Bannister’s approval. The consequences and methodological implications are so far-reaching that they raise questions: how must education deal with the increased role of interactive computation in architectural design? And, more generally, with techno-science, its languages and methodologies? 

Architectural design still relies on a “political” attitude, and mediation between the “two cultures” [20] is a fundamental asset of its disciplinary approach. Even though the unity of knowledge has disappeared with the advent of modern science, as Alberto Pérez-Gómez stated, [21] we ideally aspire to become like renaissance polymaths, mastering state-of-the-art skills in the most disparate fields. But in the long time that separates us from Brunelleschi and Alberti, the amount of knowledge required by the different aspects of the practice, even those which are specifically architectural, has grown exponentially, and trying to get a minimum of mastery over it would demand a lifelong commitment and extraordinary personal qualities. Digital prostheses promise to close the gap between the desire for control over the many facets of the design process and the real possibility of achieving it. Some consequences of the augmented agency provided by new information and communication technologies are already evident in the overlapping occurring in the expanded field of the arts, with protagonists from different backgrounds – visual arts or cinema for instance – working as architects or curators and vice versa. [22] The power of the digital to virtually replace those “experts”, to whom, according to Turpin Bannister, architects outsource their own choices, seems to act therefore as an evolutionary agent against overspecialisation, confirming the advantage Bucky Fuller attributed to the architect as the last generalist. [23] 

However, without understanding and manipulating what happens within the black box of the algorithm, we still face the risk of being “designed” by the tools we put our trust in, going on to accept a subordinate position. Speaking machine, as John Maeda has pointed out, [24] is becoming necessary in order to contribute innovatively to any design endeavour. The well-known Asian American researcher, designer, artist and executive comes from a coding background, later supplemented with the study and practice of design and arts (along with business administration). His educational path and personal achievements indicate that such an integration of expertise is possible and desirable, even though his logical-mathematical grounding is likely the reason he mostly works with the immaterial, exploring media design and the so-called experience economy. Architectural schools are therefore facing the issue of if, when, and how to introduce coding skills into their already super-crammed syllabuses – from which, very often, visual arts, philosophy, law, storytelling and other much needed approaches and competencies are absent. One can argue that coding would provide young professionals with expertise they could immediately use in the job market, enabling them to better interact with contemporary work environments. On the other hand, a deeper perspective shows how the “resistance” of architectural specificity produced exceptional results in revolutionary times: academic education acted for the Modern masters as both a set of past, inconsistent practices to overcome and a background that enhanced the quality of their new language. 

Digitalisation looks like a further step in the process of the specialisation of knowledge, which has unfolded hand-in-hand with the development of the sciences, of techniques, and of their languages. Since the dawn of the modern age, architects have often tried to bring together a unified body of knowledge and methodology; first around descriptive geometry, and then around geometry as a specific discipline which “gives form” to mathematics, statics and mechanics. “Geometry is the means, created by ourselves, whereby we perceive the external world and express the world within us. Geometry is the foundation”, Le Corbusier writes on the very first page of his Urbanisme, trying to keep pace with modernisation and to establish a new urban planning approach according to its supposed “exactitude”. [25] But while the hard sciences and their technical applications rely on the regularity of results under stable experimental conditions, architects are still supposed to give different answers to the same question – or, more precisely, to always reframe architectural problems, questioning them in different ways. 

Considering the volatility of the present situation, opening up and diversifying the educational offer seems a more viable bet than attempting a difficult synthesis. Only by being exposed to the conflict between the selective, deterministic optimisation promised by code-wise design and the dissipative, proliferating, unpredictable interpretation of cases can architects find their own, personal way of resolving it. 

Fig. 1 Norman Foster’s sketch for the headquarters of the Swiss Reinsurance Company, 30 St Mary Axe, in the historic core and financial district of the City of London. Foster + Partners designed a fifty-storey tower, 590 ft (180 m) tall, with a magnificent organic form that adds a distinctive identity to the skyline of the city.
Fig. 2 Norman Foster’s sketch illustrates the generative process: each floor is rotated by 5° relative to the one below, around the central core that houses the pillars bearing the vertical loads, the services, the stairs, and the lifts. From the core, six ‘spokes’ host the floorspace at each level. Each floorspace is detached from the next by a void triangular area about 20° wide. These vertically open areas create light wells running up the height of the tower, to the thirty-second floor, and wind in spirals to channel ventilation and natural light through the building.
Fig. 3 Norman Foster’s sketch of the fully-glazed domed restaurant atop the tower.
Fig. 4 The tapering profile of the tower reduces the diameter at street level to 160 ft (49 m) and reaches its largest diameter of 184 ft (56 m) at the 21st level, with the spatial climax at the glazed domed roof. The diagrid structure parametrises the A-shaped frames and relieves the central core of the lateral loading. The A-shaped frames develop over two floors, and their proportions decrease from the 21st level towards the pitched dome and the lobby level respectively.

Fig. 5 Norman Foster’s sketch makes clear how the A-shaped frames take on the diagrid geometry, with two diagonal columns of 20 in (508 mm) diameter tubular steel, reflected in the diamond-shaped pattern of the window panes. 

References

[1] S. Roudavski, “Towards Morphogenesis in Architecture”, International Journal of Architectural Computing, 3, 7 (2009) https://www.academia.edu/208933/Towards_Morphogenesis_in_Architecture (accessed 24 March 2021).  

[2] E. T. Miret, J. J. Polivka and M. Polivka, Philosophy of Structures, (Berkeley: University of California Press, 1958), 331.  

[3] P. L. Nervi, Aesthetics and Technology in Building (Cambridge, Mass.; London; Harvard University Press: Oxford University Press, 1966), 199. 

[4] T. Oden, K.-J. Bathe, “A commentary on Computational Mechanics”, Applied Mechanics Reviews, 31, 8 (1978), 1055-1056. 

[5] “We can now wonder whether any type of imaginary surface is constructible. The answer is in the negative. So: how to choose and how to judge an imagined form?” E. T. Miret, J. J. Polivka and M. Polivka, Philosophy of Structures (Berkeley: University of California Press, 1958), 78. 

[6] M. Majowiecki, “The Free Form Design (FFD) in Steel Structural Architecture–Aesthetic Values and Reliability”, Steel Construction: Design and Research, 1, 1 (2008), 1. 

[7] A. Menges, “Instrumental geometry”, Architectural Design, 76, 2 (2006), 46. 

[8] Foster and Partners, “Modeling the Swiss Re Tower”, ArchitectureWeek, 238 (2005), http://www.architectureweek.com/2005/0504/tools_1-1.html (accessed 10 April 2022) 

[9] “[Marjan] Colletti aptly quotes Deleuze stating: ‘The machine is always social before it is technical.’ The direct interaction between the designer and the equipment provides a feedback system of communication. He argues that the computer should ‘be regarded neither as abstract nor as machine’, but rather as an intraface.” C. Ahrens, “Digital Poetics, An Open Theory of Design-Research in Architecture”, The Journal of Architecture, 21, 2, (2016), 315; Deleuze’s passage is in G. Deleuze, C. Parnet, Dialogues (New York: Continuum International Publishing, 1987), 126-12; Colletti’s in M. Colletti, Digital Poetics, An Open Theory of Design-Research in Architecture (Farnham: Ashgate, 2013), 96. 

[10] “We shall therefore first lay down, that the whole Art of Building consists in the Design, and in the Structure. The whole Force and Rule of the Design, consists in a right and exact adapting and joining together the Lines and Angles which compose and form the Face of the Building. It is the Property and Business of the Design to appoint to the Edifice and all its Parts their proper Places, determinate Number, just Proportion and beautiful Order; so that the whole Form of the Structure be proportionable. Nor has this Design any thing that makes it in its Nature inseparable from Matter; for we see that the same Design is in a Multitude of Buildings, which have all the same Form, and are exactly alike as to the Situation of their Parts and the Disposition of their Lines and Angles; and we can in our Thought and Imagination contrive perfect Forms of Buildings entirely separate from Matter, by settling and regulating in a certain Order, the Disposition and Conjunction of the Lines and Angles.” L. B. Alberti, The Ten Books of Architecture (London: Edward Owen, 1755 [1450]), 25. 

[11] A. Zaera-Polo, “30 St. Mary Axe: Form Isn’t Facile”, Log, 4 (2005). 

[12] See – along with Oden, Bathe, and Majowiecki – Paul Humphreys, “Computational Empiricism”, Topics in the Foundation of Statistics, ed. by B. C. van Fraassen (Dordrecht: Springer, 1997) and P. Humphreys, Extending Ourselves: Computational Science, Empiricism, and Scientific Method. (New York: Oxford University Press, 2004). 

[13] C. Alexander, Notes on the Synthesis of Form (Cambridge, Mass.; London: Harvard University Press, 1964). 

[14] J. Petitot, “Only Objectivity”, Casabella, 518, (1985), 36. 

[15] E. Benvenuto, An Introduction to the History of Structural Mechanics (New York, N.Y.: Springer-Verlag, 1991). 

[16] M. Majowiecki, “The Free Form Design (FFD) in Steel Structural Architecture–Aesthetic Values and Reliability”, Steel Construction: Design and Research, 1, 1 (2008), 1. 

[17] T. Oden, K.-J. Bathe, “A commentary on Computational Mechanics”, Applied Mechanics Reviews, 31, 8 (1978), 1056. 

[18] “There are essentially two ways in which such education can operate, and they may be distinguished without difficulty. At one extreme we have a kind of teaching that relies on the novice’s very gradual exposure to the craft in question, on his ability to imitate by practice, on his response to sanctions, penalties, and reinforcing smiles and frowns. … The second kind of teaching tries, in some degree, to make the rules explicit. Here the novice learns much more rapidly, on the basis of general ‘principles’. The education becomes a formal one; it relies on instruction and on teachers who train their pupils, not just by pointing out mistakes, but by inculcating positive explicit rules.” C. Alexander, Notes on the Synthesis of Form (Cambridge, Mass.; London: Harvard University Press, 1964), 35. 

[19] T. C. Bannister, “The Research Heritage of the Architectural Profession”, Journal of Architectural Education, 1, 10 (1947). 

[20] C. P. Snow, The Two Cultures and the Scientific Revolution (Cambridge University Press, 1962). 

[21] A. Pérez-Gómez, Architecture and the Crisis of Modern Science (Cambridge, Mass.: The MIT Press, 1983). 

[22] “Artists after the Internet take on a role more closely aligned to that of the interpreter, transcriber, narrator, curator, architect.” A. Vierkant, The Image Object Post-Internet, http://jstchillin.org/artie/vierkant.html (accessed 21 September 2015). The artist Olafur Eliasson, for instance, started up his own architectural office (https://studiootherspaces.net/, accessed 30 March 2021), and the film director Wes Anderson authored the interior design of the Bar Luce, inside the Fondazione Prada in Milan. 

[23] “Fuller … noted that species become extinct through overspecialization and that architects constitute the ‘last species of comprehensivists.’ The multidimensional synthesis at the heart of the field is the most invaluable asset, not just for thinking about the future of buildings but for thinking about the universe. Paradoxically, it is precisely when going beyond buildings that the figure of the architect becomes essential.” Mark Wigley, Buckminster Fuller Inc.: Architecture in the Age of Radio (Zürich: Lars Müller, 2015), 71. 

[24] J. Maeda, How to Speak Machine: Laws of Design for a Digital Age (London: Penguin Business, 2019). 

[25] Le Corbusier, The City of Tomorrow and its Planning (London: John Rodker, 1929 [1925]), 1. 

Geocities’ neighbourhoods collage, 2022. Image credit: Alessandro Celli and Ibrahim Kombarji.
Fostering Kinship: GeoCities’ Algorithmic Neighbourhoods
Algorithmic Neighbourhoods, civic participation, global village, Kinship, proximity, virtual city
Alessandro Celli, Ibrahim Kombarji

celli.alce@gmail.com
Add to Issue
Read Article: 3062 Words

The remains of a virtual city – possibly the first of its kind – can be found on servers all over the world. [1] GeoCities was launched as a series of districts, alleyways, and neighbourhoods where its inhabitants could build their own webpages. For the first time, the internet was given a structure to which its audience could relate on a human scale. Today, around 650 gigabytes of GeoCities’ data remain accessible thanks to archiving efforts that ensured the recovery of some of the 38 million individual websites that existed at the time of GeoCities’ final closure in 2009. [2] [3] [4] [5] 

GeoCities was first launched in 1994 by David Bohnett and Dick Altman as a web hosting service, allowing its users to store and manage their website files. [6] Its initial name, Beverly Hills Internet, already hinted at the creators’ intention to develop a neighbourhood of websites, which would later mature into a geography of cities. The service offered a free plan with a generous two megabytes of storage to all users, known as the homesteaders, who were asked to choose a neighbourhood to reside in. [7] All of the city’s inhabitants occupied a defined space, in defined surroundings, their homepages arranged within neighbourhoods. Each cluster of pages was spatially close to those which shared similar content, while each neighbourhood was defined by the broader topic into which its pages fit. As such, the company created and thematically organised its web directories into six neighbourhoods: Colosseum, Hollywood, RodeoDrive, SunsetStrip, WallStreet and West Hollywood. New neighbourhoods, as well as their suburbs, were later added as the site grew; each became part of its members’ unique web addresses, together with a sequentially assigned URL “civic address” (e.g., “www.geocities.com/RodeoDrive/54”). Chat rooms and bulletin boards were added soon after, fostering the rapid growth of the city. [8] Each neighbourhood had its own forum, live chat, and even a list of all the homesteaders who celebrated their birthday each day.  

By December 1995, when it changed its name to GeoCities, Beverly Hills Internet had over 20,000 homesteaders and over 6 million page-views per month. [9] Within this expansive organisation of web page clusters, a seamless sense of proximity between those who shared similar ideas naturally fostered human behaviours such as kinship and affection.  

Neighbourhoods are intrinsic parts of our urban fabric and a self-evident manifestation of how the cities we live in are structured. [10] Yet, we still struggle to grasp a proper definition of their totality, given the complex layers within them. In 1926, progressive educator David Snedden defined the term neighbourhood as “those people who live within easy ‘hallooing’ distance”, illustrating it as a space where one can easily catch the attention of another. [11] 

This essay will explore the notion of an algorithmic neighbourhood, one that reflects – and derives from – parts of a physically built, “hallooing” urban neighbourhood. The internet lexicon of today descends seamlessly from a long lineage of architectural and spatial terminologies, such as firewall, coding architecture, homepage, platform, address, path, room, and location, among many others. In the translation from a physical reality that is shaped within our Latourian “critical zone”, some of these terminologies have shifted in their meaning when applied to new forms of digital space. [12] A parallel “digital critical zone” is generated, within which these algorithmic neighbourhoods sit.  

Figure 1 – Archived webpage “Tia”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Cafe/3232/newpics.html
Figure 2 – Archived webpage “The Gardening Girl”, Picket Fence neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/PicketFence/1054/

Neighbourhood as a site of kinship and proximity  

The artisanal web built through GeoCities allowed “user-generated content” before it had adorned itself with pompous names or revolutionary pretensions. [13] It proved that even before the invention of Web 2.0 – which later aimed to implement social-media profiles – the web was, above all, a story of human beings who interacted with one another and discussed the subjects close to them through the means at hand.  

Urban studies professor Benjamin Looker defines the United States as a nation of neighbourhoods. [14] This essay builds on that reading of the continental urban fabric by exploring the communities of algorithmic kinship that exist within GeoCities’ virtual borders. Similar to physically built neighbourhoods, GeoCities’ urban structure fostered kinship and affection among its inhabitants. PicketFence, for example, was built to allow residents to share tips and advice on ‘Home Improvement Techniques’. The more experienced ‘Home Improvement’ users became the neighbourhood’s go-to people for navigating daily issues, reinforcing a shared communal knowledge. [15] 

West Hollywood, which was subdivided into “Gay, Lesbian, Bisexual, and Transgender topics”, is another example of such algorithmic kinship. This neighbourhood was a predecessor of today’s social-media spaces where users can gather and exchange (sometimes hidden or undisclosed) realities across communities. West Hollywood’s users could leave messages, sign a guestbook, and share contact information with one another. The neighbourhood gave people an opportunity to share similar experiences and daily struggles, form alliances with other communities, and tackle queer rights collectively. Moreover, West Hollywood fostered arenas of “block-level solidarity”, where “bonds and loyalties – whether as enacted on real-life pavements or as represented in stories, images, and speeches”, allowed connections between the intimate lives of users, their GeoCities pages, and the “city block”. [16] 

Proximity and reciprocal kinship were thus foundational features of GeoCities’ design: individuals, together with their personal pages, were at the centre of the Internet. In contrast, today’s platforms and digital services are structured in such a nested way that proximity is sometimes inconceivable, and individuals are reduced to anonymous consumers of information. Today, the information communications technology (ICT) industry is at the centre of the Internet. [17] Social media platforms still provide virtual spaces that allow communities to gather and share content with one another, fostering a certain degree of human interaction. However, the very structure within which they operate is fundamentally different from the one used in early platforms such as GeoCities. Whereas before the digital matter – text, images, links – was spatially placed onto the transparent structure of the webpage, and one could clearly see the location of a JPEG file within the lines of HTML code, now everything runs through opaque interfaces. [18] These perfect facades are quasi-impenetrable for users, and hide the “black boxes” where algorithms operate as instruments of measurement and perception. [19] As a counterpart to algorithmic neighbourhoods, Caroline Busta defines social-media platforms as a grand bazaar, “with lanes of kiosks, grouped roughly by trade, displaying representative works to passers-by. At the back of the mini-shop is a trap door with stairs leading to a sub-basement where deals can be done”. [20] This multi-layered opaque architecture of the bazaar illustrates the complex structure that currently governs social-media platforms. In contrast, the algorithmic neighbourhoods of GeoCities attempted to encourage a transparent vision of the modes of portraiture in the digital realm, and defined tools for users to relate directly to it. 

Figure 3 – Archived webpage “Gay Ukraine International, Kiev, UA”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Club/1213/
Figure 4 – Archived webpage “Welcome to the deep Heart of TEXAS and Our Home”, Picket Fence neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/PicketFence/1011/

Neighbourhood as a site constantly ‘under construction’   

A digital archaeologist scavenging through GeoCities’ remains would come across a vast number of “under construction” signs strewn across the neighbourhood’s alleys, outlining its “work-in-progress” state. Surrounded by virtual scaffolding, the pages under construction were built, line after line of code, by the homesteaders, slowly undergoing organic changes and upgrades. Each individual page was constructed by its creator, from its foundations to its decorative elements, in the HTML format – the HyperText Markup Language. The coding language not only allowed users to build their pages from scratch, but also to introduce multimedia resources such as JPGs and GIFs. A page under construction implies that there was a process of creation, which aimed at an eventual final form. Similar to a construction site, the individual web page could be openly observed throughout its making, as it could be visited by GeoCities inhabitants at any moment in time. It was a facade yet to come; a page that was shaped by the algorithmic manipulation of its users as they added another ‘about me’ section, a ‘guestbook’ to be signed, or a photo gallery of low-res pictures – to fit within the two-megabyte limit – portraying their personal lives. 

By contrast, the architecture of newer webpages and content aggregators is conceived with an opaque algorithmic structure. Their virtual space is not one of proximity and distance based on intelligible parameters, but one of hierarchical appearance and disappearance based on unintelligible instruments of perception. [21] For instance, Google’s page-ranking algorithm mutates and evolves over time, leaving no traces behind except the ones it uses to train itself. When presented with Google search results, users face a series of temporary choices that are the result of a very intricate mechanism of automatic selection and classification. Vladan Joler defines algorithms as “instruments of measurements and perception”; algorithmic architecture can thus be described as an operation of the more-than-human. Data collection and consumer profiling are the parameters upon which the current Internet is being built, rather than its being a conscious construction process carried out by its users. 

While the architectural backdrop of a platform is constantly being redefined based on who is interacting with it, its facade – the interface – is pure and familiar. This interface, which we constantly visit, nevertheless obscures what lies beneath it. Even if it is a clear manifestation of rules – it tells you what you can or cannot do – it does not reveal the mechanisms through which it gathers and conveys information, nor how the user’s actions are exploited for profit. The algorithmic design of GeoCities, based on neighbourhood alliances, did not yet allow for this opacity, avoiding power structures, black boxes, and opaque interfaces. It also avoided the black hole of rhizomatic surveillance that now permeates the virtual realm. [22] [23]  

Algorithmic neighbourhoods can also help to expose the physical infrastructure hosting them. Similarly to the opaqueness of interfaces, our built neighbourhoods are shaped by an underground infrastructure of fleshy cables and routers. Data centres, globally connected by a web of cables, host our digital selves as they wander through the unmeasurable geographies of the Internet. They are out of reach, transcending any geographical boundary, and mirror the ubiquitous nature of algorithmic spaces. Cables and data centres are, in fact, the physical side of the Internet, its thickness on our planet. They are the physical neighbourhood mirroring the algorithmic one, hosting the latter through servers, cables, connections, and energy. The physical neighbourhood that sustains the digital infrastructure is not, however, a direct reflection of the algorithmic one. It is instead expansive, ubiquitous, fragmented, and absent, as it is designed to operate under strict safety protocols and privacy regulations.  

Figure 5 – Archived webpage “Q Pals”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Cafe/3113/
Figure 6 – Archived webpage “Monica Munro”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Club/2788/

Neighbourhood as a site of civic participation and resistance  

In June 1998, in order to boost brand awareness and advertising impressions, GeoCities introduced a watermark on its users’ web pages. [24] The watermark, much like an on-screen graphic on some TV channels, was a transparent floating GIF image that used JavaScript to stay displayed at the bottom right of the browser window. Many users felt that the watermark interfered with their website design and threatened to move their pages elsewhere. A year later, in 1999, Yahoo bought the platform and consequently implemented its “Terms of Service agreement”, prompting a unanimous reaction from the homesteaders. [25] The “Haunting of GeoCities” was the users’ response to the threat over content rights and access control. Each neighbourhood became a ghost town, where homepages were stripped of their content and colours, replaced with excerpts of the offending Terms of Service. As authors Reynolds and Hallinan point out, “users sensed that Yahoo’s unfettered access to this content threatened their creative control and diluted their power to make decisions about how and where to display their content. … some enterprising homesteaders sought to foil Yahoo’s legal and digital access to their intellectual property by removing it from the service altogether”. [26] The collective operation, moreover, represented a strategic mobilisation of GeoCities’ design, defined by co-founder David Bohnett as “a bottoms-up, user-generated content mode”. [27] [28] The homesteaders’ remarkable political response allowed them to preserve a degree of control over their content, interfering with the dominating “Terms of Service agreement” which regulates, even more so today, every action we take within a platform. 

The “Haunting” protest represented a point of resistance against the tendency of tech giants to channel social traffic through a corporate digital platform ecosystem – a ubiquitous model on today’s internet. [29] The organised response by the homesteaders was only possible by virtue of the very architecture of GeoCities. Neighbourhoods allowed a bottom-up response that could counter the overarching corporate control put in place by Yahoo. It was a gathering empowered by proximity and affection, one that could exploit the temporary nature of the homepages’ construction as a medium for political change. In 2009, in response to the termination of GeoCities by Yahoo, new mechanisms of neighbourly rebuttal emerged. The German hosting provider JimdoWeb, for instance, attempted to host the nomadic homesteaders by launching the Lifeboat for GeoCities webpage. Simultaneously, internet archivists began to meticulously archive each homepage of GeoCities, a counter-act to preserve memory and gather the residues of the city. 

The archived remains of the virtual city stand as an alternative approach to the complexity and opaqueness of the algorithmic layering of contemporary web-hosting services, as much as they reveal the ‘trans-scalar’ infrastructure of the Internet. [30] These neighbourly entanglements help us make sense of the current digital “global village”, offering an entry point to analyse how it is being shaped by the effects of globalisation, market economies, and imprudent media. [31] [32] Moreover, they display how the global village is being governed by algorithmic interdependencies, which in turn affect the architectural formations in both virtual and physical realities. [33]  

Figure 7 – Archived webpage “Gay Denton”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Cafe/1979/Pages/gaydenton.html
Figure 8 – Geocities’ neighbourhoods collage, 2022. Image credit: Alessandro Celli and Ibrahim Kombarji.

References

[1] Archive Team. Archiveteam.org. https://wiki.archiveteam.org/index.php?title=Main_Page (accessed April 16, 2022).

[2] R. Vijgen. “The Deleted City”, http://www.deletedcity.net/, (2017)

[3] Restorativland, “The Geocities Gallery”, https://geocities.restorativland.org/, (accessed March 1, 2022).

[4] “OoCities”, https://www.oocities.org/#gsc.tab=0, (accessed March 1, 2022).

[5] O. Lialina & D. Espenschied, “One Terabyte of Kilobyte Age”, Rhizome.org. https://anthology.rhizome.org/one-terabyte-of-kilobyte-age, (accessed March 1, 2022).

[6] A.J. Kim, Community Building on the Web: Secret Strategies for Successful Online Communities (United Kingdom: Pearson Education, 2006).

[7] B. Sawyer, D. Greely, Creating GeoCities Websites (Cincinnati, Ohio: Muska & Lipman Pub, 1999).

[8] Ibid.

[9] C. Bassett, The arc and the machine: Narrative and new media, (Manchester: Manchester University Press, 2013).

[10] J. Jacobs, “The City: Some Myths about Diversity”, The death and life of great American cities, (New York: Random House, 1961).

[11] R. Sampson, “The Place of Context: A Theory and Strategy for Criminology’s Hard Problems”, Criminology 51 (The American Society of Criminology, 2013).

[12] B. Latour, Critical Zones: The Science and Politics of Landing on Earth, (Cambridge, MA: MIT Press, 2020).

[13] B. Sawyer, D. Greely, Creating GeoCities Websites (Cincinnati, Ohio: Muska & Lipman Pub, 1999).

[14] B. Looker, A Nation of Neighborhoods: Imagining Cities, Communities, and Democracy in Postwar America, (Chicago: The University of Chicago Press, 2015).

[15] Ibid.

[16] Ibid.

[17] C. Busta, “Losing Yourself in the Dark”. Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/essays/losing-yourself-in-the-dark/, (accessed April 16, 2022).

[18] S. U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018).

[19] V. Joler, “New Extractivism”, Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/artwork/new-extractivism/, (accessed April 16, 2022).

[20]  C. Busta, “Losing Yourself in the Dark”. Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/essays/losing-yourself-in-the-dark/, (accessed April 16, 2022).

[21]  V. Joler, “New Extractivism”, Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/artwork/new-extractivism/, (accessed April 16, 2022).

[22] D. Savat, “(Dis)Connected: Deleuze’s Superject and the Internet”, International Handbook of Internet Research, 423–36 (Dordrecht: Springer, 2009).

[23] K.D. Haggerty, R. Ericson, “The Surveillant Assemblage”. British Journal of Sociology, 51, 4, 605-622, (United Kingdom: Wiley-Blackwell for the London School of Economics, 2000).

[24] J. Hu, “GeoCitizens fume over watermark”, CNet.com, https://www.cnet.com/tech/services-and-software/geocitizens-fume-over-watermark/ (accessed March 1, 2022).

[25] R. Ku, Cyberspace Law: Cases and Materials, (New York: Wolters Kluwer, 2016).

[26] C. Reynolds, B. Hallinan, “The haunting of GeoCities and the politics of access control on the early Web”, New Media & Society, (United States: SAGE Publishing, 2021).

[27] Ibid.

[28] B McCullough, “Interview with David Bohnett, founder of GeoCities”. Internet History Podcast, http://www.internethistorypodcast.com/2015/05/david-bohnett-founder-of-geocities/, (accessed April 16, 2022).

[29] J. Van Dijck, T. Poell, M. De Waal, The Platform Society: Public Values in a Connective World, (Oxford: Oxford University Press, 2018).

[30] A. Jaque, Superpowers of Scale, (New York: Columbia University Press, 2020).

[31] M. McLuhan, The Gutenberg galaxy: the making of typographic man (Toronto: University of Toronto Press, 1962).

[32] T. Friedman, The World Is Flat: A Brief History of the Twenty-First Century (New York: Farrar, Straus and Giroux, 2005).

Figure 8 – Extraction process – on the left the digital model, and on the right the sequence of instructions resulting from the extraction process.
Algorithmic Representation Space
Algorithmic Abstractness, Algorithmic Design, Algorithmic Representation Space, Design Paradigms, Model Concreteness, Representation Method, Representation Space
Renata Alves Castelo Branco, Inês Caetano, António Leitão

renata.castelo.branco@tecnico.ulisboa.pt
Add to Issue
Read Article: 5587 Words

Introduction 

Architecture has always explored the latest technological advances, which have changed the way architects represent and conceive design solutions. Over the past decades, these changes were due, first, to the integration of new digital design tools, such as Computer-Aided Design (CAD) and Building Information Modelling (BIM), which automated paper-based design processes [1], and then to the adoption of computational design approaches, such as Algorithmic Design (AD), which caused a more pronounced paradigm shift within architectural practice. 

AD is a design approach based on algorithms that has been gaining prominence in both architectural practice and theory [2,3] due to its greater design freedom and ability to automate repetitive design tasks, while facilitating design changes and the search for improved solutions. Its multiple advantages have therefore motivated a new generation of architects to increasingly adopt the programming environments behind their typical modelling tools, going “beyond the mouse, transcending the factory-set limitations of current 3D software” [3; p. 203]. Unfortunately, its algorithmic nature makes this approach highly abstract, deviating from the visual nature of human thinking, which is more attracted to graphical and concrete representations than to alphanumerical ones.  

To bring AD closer to the means of representation architects typically use, and thereby make the most of its added value for practice, we need to lower the existing comprehension barriers that hinder its widespread adoption in the field. To that end, this research proposes a new approach to the representation of AD descriptions – the Algorithmic Representation Space (ARS) – that encompasses, in addition to the algorithm, its concrete outputs and the mechanisms that contribute to its understanding. 

Algorithmic Representation Method and Design Paradigms

Despite the cutting-edge aura surrounding it, AD is a natural consequence of architects’ desire to automate modelling tasks. In this approach, the architect develops algorithms whose execution creates the digital design model [4] instead of manually modelling it using a digital design tool. Compared to traditional digital modelling processes, AD is advantageous in terms of precision, flexibility, automation, and ease of change, allowing architects to explore wider design spaces easily and quickly. Two AD paradigms currently predominate, the main difference between them lying in the way algorithms are represented: architects develop their algorithms either textually, according to the rules of a programming language, or visually, by selecting and connecting graphical entities in the form of graphs [5]. In either case, the abstract nature of the medium hinders its comprehension. 
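
To make the textual paradigm concrete, the following minimal sketch – written in Python, one of several languages used for textual AD – reduces geometry to the computation of placements; the function column_ring is a hypothetical stand-in for calls into a modelling tool’s API:

from math import cos, sin, pi

def column_ring(n, radius, height):
    # One textual AD description covers a whole design space:
    # every (n, radius, height) combination is a different design instance.
    return [(radius * cos(2 * pi * i / n),
             radius * sin(2 * pi * i / n),
             height)
            for i in range(n)]

# Running the algorithm instantiates one member of the design space.
print(column_ring(n=8, radius=5.0, height=3.0))

Changing a single argument regenerates the whole instance – the flexibility discussed above – yet nothing in the code itself shows what the ring of columns looks like, which is precisely the comprehension problem at stake. 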

Algorithms are everywhere and are a fundamental part of current technology. In fact, digital design tools have long supported AD, integrating programming environments of their own to let users automate design tasks and deal with more complex, unconventional design problems. Unfortunately, despite its advantages and its potential to overcome traditional design possibilities, AD was slow to gain ground in the field, remaining, after almost sixty years, a niche approach. One of the main reasons is that it requires architects to learn programming, an abstract task that is far from trivial. This is aggravated by the fact that, for decades, most tools have had their own programming language, in most cases limited and hard to use, along with a programming environment providing little support for the development and comprehension of algorithmic descriptions. Examples include ArchiCAD’s GDL (1983); AutoCAD’s AutoLisp (1986) and Visual Lisp (2000); 3D Studio Max’s MAXscript (1997); and Rhinoceros 3D’s RhinoScript (2007) and Rhino.Python (2011). 

To make AD more appealing to architects and bring it closer to the visual nature of architectural design processes, visual-based AD environments have since been released. In these environments, text-based algorithmic descriptions are replaced by iconic elements that can be connected to each other in dataflow graphs [6]. Generative Components (2003) is a pioneering example that inspired more recent ones such as Grasshopper (2007) and Dynamo (2011). These tools offer a database of pre-defined operations (components) that users can access by simply dragging an icon onto the canvas and providing it with input parameters. For standard tasks covered by existing components, this speeds up the modelling task considerably. Furthermore, since programs are represented by graph structures – with nodes describing the functions, and the wires connecting them describing the data transferred between them – it is easy to see which parts of the algorithm depend on which others, and thus where changes propagate (the sketch below illustrates this dataflow principle). However, this only holds for small algorithms, which are a rare find in visual AD descriptions [7]. Therefore, despite solving part of the existing problems – which explains the growing popularity of this paradigm in the community – others have emerged, such as the paradigm’s inability to deal with more complex and larger-scale AD solutions [5,8,9]. 
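
The following toy evaluator, again in Python, illustrates the dataflow principle only – it is not how Grasshopper or Dynamo are actually implemented. Nodes wrap functions, the references between nodes play the role of wires, and evaluating a downstream node pulls values through all of its upstream dependencies:

class Node:
    # A toy dataflow node: a function plus wires to its input nodes.
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def evaluate(self):
        # Pull values through the upstream wires, then apply this node's function.
        return self.fn(*[node.evaluate() for node in self.inputs])

# slider -> double -> series: roughly how components compose on a canvas.
slider = Node(lambda: 3)
double = Node(lambda n: n * 2, slider)
series = Node(lambda n: list(range(n)), double)
print(series.evaluate())  # [0, 1, 2, 3, 4, 5]

Changing the slider’s value and re-evaluating the final node regenerates everything that depends on it, which is the behaviour that makes change propagation visible in graph form. 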

In sum, AD remains challenging for most architects and a far cry from the representation methods they typically use. Human comprehension relies on concrete instances to create mental models of complex concepts [10]. Contrastingly, AD, either visual or textual, operates at a highly abstract level. This grants it its flexibility but also hinders its comprehension. 

Algorithmic Abstractness Vs Model Concreteness 

Abstraction can be regarded as the process of removing detail from a representation and keeping only the relevant features [11]. Some authors believe abstraction improves productivity: it not only keeps the focus on the “big idea” or the problem to solve [12] but also triggers creative thinking through its vagueness, ambiguity, and lack of clarity [13].  

Abstraction in architecture can be traced back at least as far as classical antiquity. Architectural treatises, such as Vitruvius’ “Ten Books on Architecture” [14], are prime examples of abstract representations because they intend to convey not specific design instances, but rather design norms that are applicable to many design scenarios. However, the human brain is naturally more attracted to graphical explanations than textual ones [15–17], a tendency that is further accentuated in a field with a highly visual culture such as architecture. For that reason, even the referred treatises were eventually illustrated after the birth of the printing press [18]. 

The algorithmic nature of AD motivates designers to represent their ideas in an abstract manner, focusing on the concept and its formal definition. This sort of representation provides great flexibility to the design process, as a single expression of an idea can encompass a wide range of instances that match that idea, i.e., a design space. Contrariwise, most representation methods, including CAD and BIM, compel designers to rapidly narrow down their intentions towards one concrete instance, on account of the labour required to maintain separate representations for each viable alternative. 

In sum, abstraction gives AD flexibility and the ability to solve complex problems, but it also makes it harder to understand. Abstraction is especially relevant when dealing with mathematical concepts, such as recursion or parametric shapes; nature-inspired processes, such as randomness; and performance-based design principles, such as design optimisation. It is also critical when developing and fabricating unconventional design solutions, whose geometric complexity requires a design method with a higher level of flexibility and accuracy. Sadly, these are also the hardest concepts to grasp without concrete instances and visual aid. 
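
Recursion illustrates the difficulty well. The sketch below – a deliberately simple Python example, not taken from any of the cited tools – collects the line segments of a branching structure; predicting its output from the code alone is hard, whereas a single generated instance makes the shape immediately graspable:

from math import cos, sin, radians

def branch(x, y, angle, length, depth, segments):
    # Recursively collect the line segments of a simple branching structure.
    if depth == 0:
        return
    x2 = x + length * cos(radians(angle))
    y2 = y + length * sin(radians(angle))
    segments.append(((x, y), (x2, y2)))
    branch(x2, y2, angle - 30, length * 0.7, depth - 1, segments)
    branch(x2, y2, angle + 30, length * 0.7, depth - 1, segments)

segments = []
branch(0, 0, 90, 10, depth=4, segments=segments)
print(len(segments))  # 15 segments - few readers can picture them unaided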

Nevertheless, the described comprehension barrier, apparently imposed by the abstract-concrete dichotomy, is more obvious when the AD descriptions are independent entities with little to no connection to the outcomes they produce. Figure 1 represents the current conception of AD: there is a parametric algorithm, representing a design space, which can generate a series of design models when specific parameters are provided. We propose to overthrow this notion by including the outcomes of the algorithm in the design process itself, changing the traditional flow of design creation to accommodate more design workflows and comprehension approaches.   

Figure 1 – AD workflow – an algorithm, representing a design space, generates a digital model for each design instance. 

Algorithmic Representation Space 

AD descriptions have an abstract nature, which is part of the reason they prove so beneficial to the architectural design process. However, when it comes to comprehending an AD – i.e., creating a mental model of the design space it represents – this feature becomes a burden. Human cognition seems to rely heavily on the accumulation of concrete examples to form a more abstract picture [10]. For this reason, we advocate that, for a better comprehension of an AD, the algorithms themselves do not suffice.  

This research proposes a new way to represent algorithmic descriptions that aids the development and understanding of AD projects. Under the name of Algorithmic Representation Space (ARS), this concept encompasses not only the algorithm but also its outcomes and the mechanisms that allow for the understanding of the design space it represents. AD descriptions stand to benefit significantly from the concreteness of the outputs they generate, i.e., the digital models. If we consider the models as part of the AD representation, we reduce its level of abstraction and increase its understandability, approximating it to the visual nature of human understanding. Nevertheless, we must also smooth its integration into more traditional design workflows, helping architects who still develop their models manually in digital design tools or are forced to use pre-existing models. Accordingly, the proposed ARS also enables the use of already existing digital models as starting points to arrive at an algorithmic description. 

There are two core elements in the ARS (Figure 2), the algorithm and the model. The algorithm represents a design space in a parametric, abstract way, which makes the multiple design alternatives it represents difficult to perceive. Contrastingly, each model represents an instance of a design space in a static but concrete way. Combining the former’s flexibility with the latter’s perceptibility is therefore critical for the success of algorithmic representation. For conceptual reasons, the presented illustration of the ARS places the two elements on the same level. Nevertheless, one must keep in mind that the algorithm can generate potentially infinite digital models, and the concept holds for all of them.  

We consider two entry points into the ARS: programming and modelling. Each allows architects to traverse the ARS; in the former case, from algorithm to model, by running the instructions in the algorithm to generate a model; and in the latter, from model to algorithm, by extracting an algorithmic description capable of generating the design instance and then refactoring that description to make it parametric as well. In either case, it is important that the ARS contemplates the visualisation of these algorithm-model relationships. Therefore, we propose including techniques such as traceability in any ARS. In the following sections, we use a case study, the Reggio Emilia Train Station by Santiago Calatrava, to illustrate the ARS and each of the proposed principles. 

Figure 2 – Building blocks of the ARS. 

Programming 

The typical AD process entails the creation of a parametric description that abstractly defines a design space according to the boundaries set by the architect (Figure 3). The parametricity of this description, or the size of the design space it represents, varies greatly with the design intent and the way it is implemented (e.g., degrees of freedom, rules, and constraints). By instantiating the parameters in the algorithm, the architect specifies instances of the design space, whose visualisation can be achieved by generating them in a digital design tool, such as a CAD, BIM, or game engine (Figure 3 – running the algorithm). Figure 4 presents several variations of the Reggio Emilia station achieved by running the corresponding AD description with varying input parameters, namely with a different number of beams, different beam sizes, and different amplitudes and phases of the sinusoidal movement. 
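
As a rough indication of what such a description might look like, the Python sketch below parametrises a row of beams with a sinusoidal lateral offset. It is a toy reduction invented here for illustration – not the AD description of the Reggio Emilia station used in this research:

from math import sin, pi

def station_beams(n_beams, spacing, amplitude, phase):
    # Each beam is placed along the platform with a sinusoidal lateral offset.
    return [(i * spacing,
             amplitude * sin(2 * pi * i / n_beams + phase))
            for i in range(n_beams)]

# Two instances of the design space, cf. the variations in Figure 4.
variant_a = station_beams(n_beams=25, spacing=1.2, amplitude=2.0, phase=0)
variant_b = station_beams(n_beams=40, spacing=0.8, amplitude=3.5, phase=pi / 2)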

Given the flexibility of this approach, the process of developing AD descriptions tends to be a very dynamic one, with the architect repeatedly generating instances of the design to assess the impact of the changes made at each stage. Consciously or not, architects already work in a bidirectional, iterative way when using AD. However, this workflow can also greatly benefit from a more obvious showcasing of the existing relations between algorithm and model. Traceability mechanisms allow precisely for the visual disclosure of these relations (i.e., which instruction/component generated which geometry), and several AD tools already support them. 

Figure 3 – Entering the ARS by programming. 
Figure 4 – Parametric variations of the Reggio Emilia station, with different numbers and sizes of beams, and different amplitudes and signs of the sinusoidal movement. 

Creating Models 

AD is not meant to replace other design approaches but, instead, to interoperate with them. This interoperability is important to take advantage of the investment made in well-established representation methods such as CAD and BIM, especially for projects where digital models already exist or are still being produced. Therefore, the second entry point into the ARS is the conversion of an existing digital model of a design into an AD program. This might be necessary, for instance, when we wish to optimise the design for new uses and/or to comply with new standards [19]. This process entails crossing the ARS in the opposite direction to that described in the previous section (Figure 5). 

To convert a digital model into an AD description, there are two main steps: extraction and refactoring. Extraction entails the automatic generation of instructions that can reproduce an exact copy of the model being extracted. The resulting AD description, however, is non-parametric and difficult to comprehend. This is where refactoring comes in [20,21]: a technique that improves the AD description, increasing its readability and parametricity. While the first task can be almost entirely automated, and is currently partially supported by some AD tools, the second depends heavily on the architect’s design intent and will thus always be a joint effort between human and machine. In either case, it is important that the ARS adapts to the multiplicity of digital design tools and representation systems that architects often use during their design process. They can use, for instance, 3D modelling tools, such as CADs or game engines, to geometrically explore their designs more freely, or BIM tools to enrich the designs with construction information and to produce technical documentation.  

Figure 5 – Entering the ARS through modelling. 

Navigating the ARS 

As mentioned in the previous section, there are two main elements in the ARS: algorithms abstractly describing design spaces and digital models representing concrete instances of those design spaces. Either one can be accessed from either end of the spectrum, i.e., by programming and running the algorithm to generate digital models, or by manually modelling designs and then converting them into an algorithm. To allow for this bidirectionality between the two sides, the ARS relies on three main mechanisms: (a) traceability, (b) extraction, and (c) refactoring. The first allows the system to expose the existing relationships between algorithm and model in a visual and interactive way for a better comprehension of the design intent. The latter two allow us to traverse the ARS from model to algorithm, a less common crossing but an essential one, nevertheless. The following sections describe these three mechanisms in detail. 

Traceability 

For a proper comprehension of ADs, architects must construct a mental model of the design space, understanding the impact each part of the algorithm has on each instance of the design space. To that end, a correlation must be ever-present between the two core elements of the ARS – algorithm and model – matching the abstract representation with its concrete realisation. Traceability establishes relationships between the instructions that compose the algorithm and the corresponding geometries in the digital model. This is particularly relevant when dealing with complex designs, as it allows architects to understand which parts of the algorithm are responsible for generating which parts of the model.  
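
The mechanism can be sketched in a few lines of Python, assuming a hypothetical make_shape call in place of a real modelling backend: every shape-creating call records the source line that issued it, producing the instruction-to-geometry map that traceability interfaces visualise:

import inspect
from collections import defaultdict

trace = defaultdict(list)  # source line in the algorithm -> shapes it created
counter = 0

def make_shape(kind):
    # Hypothetical modelling call that also logs which line created the shape.
    global counter
    shape_id = f"{kind}-{counter}"
    counter += 1
    trace[inspect.currentframe().f_back.f_lineno].append(shape_id)
    return shape_id

def facade(n):
    return [make_shape("panel") for _ in range(n)]

facade(4)
print(dict(trace))  # one entry: the make_shape call site -> four panel ids

Selecting a geometry in the model then amounts to looking its identifier up in the map and highlighting the matching source line, and vice versa. 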

With traceability, users can select parts of the algorithm or parts of the model and see the corresponding parts highlighted in the other end. Grasshopper for Rhinoceros 3D and Dynamo for Revit, two visual AD tools, present unidirectional traceability mechanisms from the algorithm to the model. Figure 6 shows this feature at play in Grasshopper: users select any component on the canvas and the corresponding geometry is highlighted in the visualised model. 

Figure 6 – Traceability in visual AD tools – the case of Grasshopper. 

Regarding bidirectional traceability, there are already visual AD tools that support it, such as Dassault Systèmes’ xGenerative Design tool (xGen) for Catia and Bentley’s Generative Components, as well as textual AD tools, such as Rosetta [22], Luna Moth [23], and Khepri [24]. Figure 7 shows the example of Khepri, where the user selects either instructions in the algorithm or objects in the model and the corresponding part is highlighted in the model or algorithm, respectively. Programming In the Model (PIM) [25], a hybrid programming tool, offers traceability between the three existing interactive windows: one showing the model, another the visual AD description, and a third showing the equivalent textual AD description. 

Unfortunately, traceability is a computationally intensive feature that hinders the tools’ performance with complex AD programs – especially model-to-algorithm traceability, which explains why some commercial visual-based AD tools avoid it. Those that provide it inevitably experience a decrease in performance as the model grows. All the text-based and hybrid options referred to are academic works, built and maintained as proofs of concept rather than commercial tools, which explains their acceptance of the imposed trade-offs. A possible solution to this problem is to let architects decide when to use the feature, switching it on only when the support provided compensates for the computational overhead [26]. In fact, traceability-on-demand is Khepri’s current approach to the problem. 

Figure 7 – Traceability in textual AD tools – the case of Khepri. 

Extraction 

Extraction is the automatic conversion of a digital model into an algorithm that can faithfully replicate it. Previous studies [27,28] focused on the generation of 3D models from architectural plans or on the conversion of CAD models to BIM models, using heuristics and the manipulation of geometric relations. Sadly, the result is not an AD description but rather another model, albeit a more complex and/or informed one. One promising line of research is the use of probabilistic and neural-network-based machine learning techniques (e.g., convolutional or recurrent neural networks) that address the translation of images into textual descriptions [29], but further research is needed before they can generate algorithmic descriptions. 

The main problems with extracting a parametric algorithm lie, first, in the assumptions the system would need to make while reading a finished model: for instance, distinguishing whether two adjacent volumes are connected by chance or intentionally and, if the latter, deciding whether such a connection should constitute a parametric restriction of the model. Secondly, it is nearly impossible to devise a system that can consider the myriad of possible geometric entities and semantics available in architectural modelling tools. 

Some modelling tools that favour the VP paradigm avoid this problem by placing the responsibility on the designer from the very start, restricting the modelling workflow and forcing the designer to provide the missing information. In xGen and Generative Components, the 3D model and the visual algorithm are in sync, meaning changes made in either one are reflected in the other. PIM presents a similar approach, extending the conversion to the textual paradigm as well, although it was only tested with simple 2D examples.  

In practice, these tools offer real-time conversion from the model to the algorithm. However, either solution requires the model to be parametric from the start. Every modelling operation available in these tools has a pre-set correspondence to a visual component, and designers must build their models following the structured parametric approach imposed by each tool, almost as if they were in fact constructing an algorithm, only through a modelling interface. As such, the system gathers the information it needs to build parametric relations from the very beginning. This explains why neither xGen, nor Generative Components, nor PIM can take an existing model created in other modelling software, or following other modelling rules, and extract an algorithmic description from it. 

This problem has also been addressed in the TP field and promising results have been achieved in the conversion of bi-dimensional shapes into algorithms [24,30]. However, further work is required to recognise 3D shapes, namely 3D shapes of varying semantics, since architects can use a myriad of digital design tools to produce their models, such as CADs, BIMs, or game engines. Figure 8 presents an ideal scenario, where the ARS is able to extract an algorithm that can generate an identical model to that being extracted. 
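
To make the idea concrete, the following is a naive extraction sketch in Python; the model record format and the emitted beam/xyz instructions are assumptions chosen to match the beam example of Figures 8 and 9, not the data format of any real tool:

    # Naive extraction: emit one low-level instruction per model object.
    def extract(model):
        instructions = []
        for obj in model:
            if obj["type"] == "beam":
                x0, y0, z0 = obj["start"]
                x1, y1, z1 = obj["end"]
                instructions.append(
                    f"beam(xyz({x0}, {y0}, {z0}), xyz({x1}, {y1}, {z1}))")
        return instructions

    # Example: a two-beam model yields two isolated, non-parametric instructions.
    model = [{"type": "beam", "start": (0, 0, 0), "end": (0, 0, 10)},
             {"type": "beam", "start": (1, 0, 0), "end": (1, 0, 11)}]
    print("\n".join(extract(model)))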

In either case, even if we soon achieve the extraction of the most common 3D elements, the resulting algorithm will only accurately represent the extracted model, and it will consist of a low-level program that is very hard for humans to understand. To make the algorithm both understandable and parametric, it needs to be further transformed according to the design intent envisioned by the architect. Increasing the algorithm’s comprehensibility and the design space it represents is the goal of refactoring. 

Figure 8 – Extraction process – on the left the digital model, and on the right the sequence of instructions resulting from the extraction process. 

Refactoring 

Refactoring (or restructuring) is commonly defined as the process of improving the structure of an existing program without changing its semantics or external behaviour [20]. There are already several semi-automatic refactoring tools [21] that help to improve the readability and maintenance of algorithmic descriptions and increase their efficiency and abstraction level. Refactoring is an essential follow-up to an extraction process, since the latter returns a non-parametric algorithm that is difficult to decipher. 

Figure 9 shows an example of a refactoring process that could take place with the algorithm extracted in Figure 8. The extracted algorithm contains numerous instructions, each responsible for generating a beam between two spatial locations defined by XYZ coordinates. It is not difficult to infer the linear variations presented in the first and fourth highlighted columns, which correspond to the points’ X values. To infer the sinusoidal variation in the remaining values, however, more complex curve-fitting methods would have to be implemented [31]. 
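
The curve-fitting machinery cited above [31] is a Julia optimisation package; as a rough illustration of the same step in Python, one could test a sinusoidal hypothesis against one column of extracted values with SciPy’s least-squares fitting. The data below is synthetic, standing in for the extracted Z coordinates:

    import numpy as np
    from scipy.optimize import curve_fit

    def sinusoid(i, amplitude, frequency, phase, offset):
        return amplitude * np.sin(frequency * i + phase) + offset

    indices = np.arange(20)                        # instruction index in the sequence
    z_values = 3.0 * np.sin(0.5 * indices) + 10.0  # synthetic stand-in data
    params, _ = curve_fit(sinusoid, indices, z_values,
                          p0=[1.0, 1.0, 0.0, float(np.mean(z_values))])
    # params approximates (amplitude, frequency, phase, offset); a refactoring
    # tool could offer these as replacements for the extracted literals.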

In any case, refactoring tools seldom work alone, meaning that a lot of user input is required. This is because there is rarely a single correct way of structuring algorithms, and the user must choose which methods to implement in each case. Refactoring tools, beyond providing suggestions, guarantee that the replacements are made seamlessly and do not change the algorithm’s behaviour. When trying to increase parametric potential, even more input is required, since it is the architect who must decide the degrees of freedom shaping the design space. 

In our example (Figure 9), the refactored algorithm shown below has better structure and readability, but its parametricity is still in an infant state. As a next stage, we could replace the numerical values proposed by the refactoring tool with variable parameters, allowing more variations of the sinusoidal movement. 
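
A sketch of that next stage might look as follows, with the fitted constants promoted to explicit parameters of a beam-generating function; beam and xyz are stubs standing in for a real AD tool’s modelling primitives:

    import math

    def xyz(x, y, z):
        return (x, y, z)  # stub: a real tool would build a point object

    def beam(p0, p1):
        print("beam", p0, p1)  # stub: a real tool would create geometry

    # Parametric version of the extracted sequence: the inferred linear and
    # sinusoidal variations are now controlled by explicit parameters.
    def beam_wave(n, spacing, amplitude, frequency, height):
        for i in range(n):
            x = i * spacing                                   # linear variation
            z = height + amplitude * math.sin(frequency * i)  # sinusoidal variation
            beam(xyz(x, 0, 0), xyz(x, 0, z))

    beam_wave(20, 1.0, 3.0, 0.5, 10.0)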

Discussion and Conclusion 

Architecture is an ancient profession, and the means used to produce architectural entities have constantly changed, not only integrating the latest technological developments but also responding to new design trends and representation needs. Architects have long adopted new techniques to improve the way they represent designs. However, while for centuries this caused only gradual changes in architectural design practice, the accelerated technological development witnessed since the 1960s has made these modifications far more evident. The emergence of personal computers, followed by the massification of Computer-Aided Design (CAD) and Building Information Modelling (BIM) tools, allowed architects to automate their previously paper-based design processes [1], shaping the way they approached design issues [32]. However, these tools did little to change the way designs were represented, only making their production more efficient. This scenario soon evolved with the emergence of more powerful computational design paradigms, such as Algorithmic Design (AD). Despite being more abstract and thus less intuitive, this design representation method is more flexible and empowers architects’ creative processes. 

Given its advantages for architectural design practice, AD should be a complement to the current means of representation. However, to make AD more appealing to a wider audience and allow architects to make the most of it, we must lower the existing barriers by bringing AD closer to the visual and concrete nature of architectural thinking. To that end, we proposed the Algorithmic Representation Space (ARS), a representation approach that aims to replace the current one-directional conception of AD (going from algorithms to digital models) with a bidirectional one that additionally allows architects to arrive at algorithms starting from digital models. Furthermore, the ARS encompasses as means of representation not only the algorithmic description but also the digital model that results from it, as well as the mechanisms that aid the comprehension of the design space it represents.  

Figure 9 – Refactoring process – the sequence of extracted instructions (on top) is converted into a more comprehensible and parametric algorithm (on the bottom). 

The proposed system is based on two fundamental elements – the algorithm and the digital model – and architects have two ways of arriving at them – programming and modelling. Considering the first case, programming, the ARS supports the development of algorithms and the subsequent visualisation of the design instances they represent by running the algorithm with different parameters. In the second case, modelling, the ARS supports the conversion of digital models into algorithms that reproduce them. The first scenario allows AD representations to benefit from the visual nature of digital design tools, reducing the innate abstraction of algorithms and obtaining concrete instances of the design space that are more perceptible to the human mind. The second case enables the conversion of a concrete representation of a design instance into an abstract representation of a design space, i.e., a parametric description that can generate possible variations of the original design, benefiting from algorithmic flexibility and expressiveness in future design tasks.  

To allow for this bidirectionality, the ARS relies on three main mechanisms: (a) traceability, (b) extraction, and (c) refactoring. Traceability addresses the non-visual nature of the first process – programming – by displaying the relationships between the algorithm and the digital model. Extraction and refactoring address the complexity of the second process – going from model to algorithm – the former entailing the extraction of the algorithmic instructions that, when executed, generate the original design solution, and the latter solving the lack of parametricity and perceptibility of the extracted algorithms by helping architects restructure them. The result is a new representation paradigm with enough (1) expressiveness to successfully represent architectural design problems of varying complexities; (2) flexibility to parametrically manipulate the resulting representations; and (3) concreteness to easily and quickly comprehend the design space embraced.  

The proposed ARS intends to motivate a more widespread adoption of AD representation methods. However, it is currently only a theoretical outline. To reach its goal, the proposed system must gain a practical character. As future work, we will focus on applying and evaluating the ARS in large-scale design scenarios, while retrieving user feedback from the experience. 

Acknowledgments 

This work was supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) (references UIDB/50021/2020, PTDC/ART-DAQ/31061/2017) and PhD grants under contract of FCT (grant numbers SFRH/BD/128628/2017, DFA/BD/4682/2020). 

References 

[1] S. Abubakar and M. Mohammed Halilu, “Digital Revolution and Architecture: Going Beyond Computer-Aided Architecture (CAD)”. In Proceedings of the Association of Architectural Educators in Nigeria (AARCHES) Conference (2012), 1–19.  

[2] R. Oxman, “Thinking difference: Theories and models of parametric design thinking”. Design Studies (2017), 1–36. DOI:http://doi.org/10.1016/j.destud.2017.06.001 

[3] K. Terzidis, “Algorithmic Design: A Paradigm Shift in Architecture?” In Proceedings of the 22nd Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Copenhagen, Denmark (2004), 201–207. 

[4] I. Caetano, L. Santos, and A. Leitão, “Computational design in architecture: Defining parametric, generative, and algorithmic design.” Frontiers of Architectural Research 9, 2 (2020), 287–300. DOI:https://doi.org/10.1016/j.foar.2019.12.008 

[5] P. Janssen, “Visual Dataflow Modelling: Some thoughts on complexity”. In Proceedings of the 32nd Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Newcastle upon Tyne, UK (2014), 305–314. 

[6] E. Lee and D. Messerschmitt, “Synchronous data flow”. Proceedings of the IEEE 75, 9 (1987), 1235–1245. DOI:https://doi.org/10.1109/PROC.1987.13876 

[7] D. Davis, “Modelled on Software Engineering: Flexible Parametric Models in the Practice of Architecture”. PhD Dissertation, RMIT University (2013). 

[8] A. Leitão and L. Santos, “Programming Languages for Generative Design: Visual or Textual?” In Proceedings of the 29th Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Ljubljana, Slovenia (2011), 139–162. 

[9] M. Zboinska, “Hybrid CAD/E Platform Supporting Exploratory Architectural Design”. CAD Computer Aided Design 59, (2015), 64–84. DOI:https://doi.org/10.1016/j.cad.2014.08.029 

[10] D. Rauch, P. Rein, S. Ramson, J. Lincke, and R. Hirschfeld, “Babylonian-style Programming: Design and Implementation of an Integration of Live Examples into General-purpose Source Code”. The Art, Science, and Engineering of Programming, 3, 3 (2019), 9:1-9:39. DOI:https://doi.org/10.22152/programming-journal.org/2019/3/9 

[11] H. Abelson, G.J. Sussman, and J. Sussman, Structure and Interpretation of Computer Programs, 2nd ed. (1st ed. 1985) (Cambridge, Massachusetts, and London, England: MIT Press, 1996). 

[12] B. Cantrell and A. Mekies (Eds.), Codify: Parametric and Computational Design in Landscape Architecture. (Routledge, 2018). 

[13] A. Al-Attili and M. Androulaki, “Architectural abstraction and representation”. In Proceedings of the 4th International Conference of the Arab Society for Computer Aided Architectural Design, Manama (Kingdom of Bahrain) (2009), 305–321. 

[14] M. Vitruvius, The Ten Books on Architecture. (Cambridge & London, UK: Harvard University Press & Oxford University Press, 1914). 

[15] K. Zhang, Visual languages and applications. (Springer Science + Business Media, 2007). 

[16] N. Shu, “Visual Programming Languages: A Perspective and a Dimensional Analysis”. In Visual Languages. Management and Information Systems, S.K. Chang, T. Ichikawa and P.A. Ligomenides (eds.). (Boston, MA: Springer, 1986). DOI: https://doi.org/10.1007/978-1-4613-1805-7_2 

[17] E. Do and M. Gross, “Thinking with Diagrams in Architectural Design”. Artificial Intelligence Review. 15, 1 (2001), 135–149. DOI:https://doi.org/10.1023/A:1006661524497 

[18] M. Carpo, The Alphabet and the Algorithm. (Cambridge, Massachusetts: MIT Press, 2011). 

[19] I. Caetano, G. Ilunga, C. Belém, R. Aguiar, S. Feist, F. Bastos, and A. Leitão, “Case Studies on the Integration of Algorithmic Design Processes in Traditional Design Workflows”. In Proceedings of the 23rd International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Hong Kong (2018), 129–138. 

[20] M. Fowler, Refactoring: Improving the Design of Existing Code. (Reading, Massachusetts: Addison-Wesley Longman, 1999) 

[21] T. Mens and T. Tourwe, “A survey of software refactoring”. IEEE Transactions on Software Engineering. 30, 2 (2004), 126–139. DOI:https://doi.org/10.1109/TSE.2004.1265817 

[22] A. Leitão, J. Lopes, and L. Santos, “Illustrated Programming”. In Proceedings of the 34th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Los Angeles, California, USA (2014), 291–300.  

[23] P. Alfaiate, I. Caetano, and A. Leitão, “Luna Moth Supporting Creativity in the Cloud”. In Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Cambridge, MA (2017), 72–81. 

[24] M. Sammer, A. Leitão, and I. Caetano, “From Visual Input to Visual Output in Textual Programming”. In Proceedings of the 24th International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Wellington, New Zealand (2019), 645–654. 

[25] M. Maleki and R. Woodbury, “Programming in the Model: A new scripting interface for parametric CAD systems”. In Proceedings of the Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Cambridge, Canada (2013), 191–198. 

[26] R. Castelo-Branco, A. Leitão, and C. Brás, “Program Comprehension for Live Algorithmic Design in Virtual Reality”. In Companion Proceedings of the 4th International Conference on the Art, Science, and Engineering of Programming (<Programming’20> Companion), ACM, New York, NY, USA, Porto, Portugal, (2020), 69–76. DOI:https://doi.org/10.1145/3397537.3398475 

[27] L. Gimenez, J. Hippolyte, S. Robert, F. Suard, and K. Zreik, “Review: Reconstruction of 3D building information models from 2D scanned plans”. Journal of Building Engineering 2, (2015), 24–35. DOI:https://doi.org/10.1016/j.jobe.2015.04.002 

[28] P. Janssen, K. Chen, and A. Mohanty, “Automated Generation of BIM Models”. In Proceedings of the 34th Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Oulu, Finland, (2016) 583–590. 

[29] J. Donahue, L. Hendricks, M. Rohrbach, S. Venugopalan, S. Guadarrama, K. Saenko, and T. Darrell, “Long-Term Recurrent Convolutional Networks for Visual Recognition and Description”. IEEE Transactions on Pattern Analysis and Machine Intelligence. 39, 4 (2017), 677–691. DOI:https://doi.org/10.1109/TPAMI.2016.2599174 

[30] A. Leitão and S. Garcia, “Reverse Algorithmic Design”. In Proceedings of the Design Computing and Cognition (DCC’20) Conference, Atlanta, Georgia, USA (2021), 317–328. DOI: https://doi.org/10.1007/978-3-030-90625-2_18 

[31] P. Mogensen and A. Riseth, “Optim: A mathematical optimization package for Julia”. Journal of Open Source Software. 3, 24 (2018), 615. DOI:https://doi.org/10.21105/joss.00615 

[32] T. Kotnik, “Digital Architectural Design as Exploration of Computable Functions”. International Journal of Architectural Computing 8, 1 (2010), 1–16. DOI:https://doi.org/10.1260/1478-0771.8.1.1 

Architectural Authorship in “the Last Mile”
Architectural Authorship, automation, digitalisation, Fun Palace, Leon Battista Alberti, mass-customisation, the Last Mile
Yixuan Chen

y.chen.20@alumni.ucl.ac.uk

Introduction 

A loyal companion to the breakthroughs of artificial intelligence is the fear of losing jobs to a robotic takeover of the labour market. Mary L. Gray and Siddharth Suri’s research on ghost work unveiled another possible future, in which a “last mile” requiring human intervention will always exist on the journey towards automation. [1] The so-called “paradox of the last mile” has exerted its influence on the human labour market throughout the industrial age, repeatedly re-organising itself as it absorbs marginalised groups into its territory. These groups range from child labourers in factories, to the “human computer” women of NASA, to on-demand workers from Amazon Mechanical Turk (MTurk). [2] Yet their strenuous efforts are often rendered invisible behind the ostensibly neutral algorithmic form of the automation process, creating “ghost work”. [3] 

Based on this concept of “the last mile”, this study excavates how its paradox has influenced architectural authorship, especially during architecture’s encounters with digital revolutions. I will first contextualise “architectural authorship” and “the last mile” in previous studies. Then I will discuss the (dis)entanglements between “automation” and “digitalisation”. Following Antoine Picon and Nicholas Negroponte, I distinguish between the pre-information age, the information age and the post-information age before locating my arguments within these three periods. Accordingly, I will study how Leon Battista Alberti, the Fun Palace, and mass-customised houses fail in the last mile of architectural digitalisation, and how these failures affect architectural authorship. From these case studies, I challenge the dominant narrative of architectural authorship, either as divinity or as total dissolution. In the end, I contend that it is imperative to conceive architectural authorship as relational, and I call for the involvement of multi-faceted agents in this post-information age. 

Academic Context 

Architectural Authorship in the Digital Age 

The emergence of architects’ authorial status can be dated back to Alberti’s De re aedificatoria, which states that “the author’s original intentions” should be sustained throughout construction. [4] Yet at the same time, those architects should keep a distance from the construction process. [5] It not only marks the shift from the artisanal authorship of craftsmen to the intellectual authorship of architects but also begets the divide between the authorship of architectural designs and architectural end products. [6] However, this tradition can be problematic in the digital age, when multi-layered authorship becomes feasible with the advent of mass-collaboration software and digital customisation technologies. [7] 

Based on this, Antoine Picon has argued that, despite the attempts of collaborative platforms such as BIM to include various actors, architects have entered a Darwinian world of competition with engineers, constructors and existing monopolies to maintain their prerogative authorship over the profession. [8] These challenges have brought about a shift of attention in the profession, from authorship as architects to ownership as entrepreneurs. [9] Yuan and Wang, on the other hand, call for a reconciliation of architectural authorship between regional traditions and technologies from a pragmatic perspective. [10] However, these accounts have not thrown off the fetters of positioning architects at the centre of the analysis. In the following article, I will introduce “the last mile”, a theory from the field of automation, to provide another perspective on the issues of architectural authorship. 

“The Last Mile” as Method 

The meaning of “the last mile” has changed several times throughout history. Metaphorically, it was used to indicate the distance between the status quo and the goal in various fields, such as film, legal negotiations, and presidential campaigns. [11] It was first introduced in the technology industry as “the last mile” of telecommunication, one of the earliest traceable records of which dates from the late 1980s. [12] Afterwards, “the last mile” of logistics began to be widely used in the early 2000s, following the dot-com boom of the late 90s that fuelled discussions of B2C eCommerce. [13] However, in this article, I will use “the last mile” of automation, a concept from the recent “AI revolution” that began around 2010, to reconsider architectural authorship. [14] In this context, “the last mile” of automation refers to “the gap between what a person can do and what a computer can do”, as Gray and Suri define it in their book. [15] 

I employ this theory to discuss architectural authorship for two purposes.  

1. Understanding the paradox of automation can be of assistance in understanding how architectural authorship changes along with technological advancements. Pasquinelli and Joler suggest that “automation is a myth”, because machines have never entirely operated by themselves without human assistance, and might never do so. [16] Hence arises the paradox that “the desire to eliminate human labour always generates new tasks for humans”, a shortcoming “stretched across the industrial era”. [17] Though confined within the architectural profession, architectural authorship is subject to change in parallel with these alterations of labour tasks. 

2. I contend that changes in the denotations of “the last mile” signal turning points in both digital and architectural history. As Figure 1 suggests, in digital history, the implication of the last mile has changed from the transmission of data to the analysis of data, and then to automation based on data. The former change was in step with the arrival of the small-data environment in the 1990s, and the latter corresponds with the leap towards the big-data environment around 2010. [18] In a similar fashion, after personal computers became widely available in the 90s, the digital spline in architecture found formal expression, and from around 2010 onwards, spirits of interactivity and mass-collaboration began to take root in the design profession. [19] Therefore, revisiting the digital history of architecture from the angle of “the last mile” can not only provide alternative readings of architectural authorship in the past but can also be indicative of how the future might be influenced. 

Figure 1 Changes of Meanings for “the Last Mile” in Digital History, and Digital Turns in Architectural History. 

Between Automation and Digitalisation 

Before elucidating how architectural authorship was changed by the arrival of the automated/digital age, it is imperative to distinguish two concepts mentioned in the previous section – automation and digitalisation. To begin with, although automation first came into use in the automotive industry in 1936 to describe “the automatic handling of parts”, what this phrase alludes to has long been rooted in history. [20] As Ekbia and Nardi define it, automation essentially relates to labour-saving mechanisms that reduce the human burden by transferring it to machines in labour-requiring tasks, both manual and cognitive. [21] Despite its long presence in human history, it was not until the emergence of digital computers after WWII that its meaning became widely applicable. [22] The notion of computerised automation was put forward by the computer scientist Michael Dertouzos in 1979, highlighting its potential for tailoring products on demand. [23] With respect to cognitive tasks, artificial intelligence that mimics human thinking is employed to tackle functions concerning “data processing, decision making, and organizational management”. [24] 

Digitalisation, on the other hand, is a more recent concept, engendered by the information society of the late 19th century, according to Antoine Picon. [25] This period was later referred to as the Second Industrial Revolution, when mass-production was made possible by a series of innovations, including electrical power, automobiles, and the internal combustion engine. It triggered what Beniger called the “control revolution” – the volume of data exploded to the degree that it begot revolutions in information technology. [26] Crucial to this revolution was the invention of digital computing, which brought about a paradigm shift in the information society. [27] It changed “the DNA of information” in the sense that, as Nicholas Negroponte suggests, “all media has become digital”, converting information from atoms to bits. [28] In this sense, Negroponte distinguishes between the information age, which is based on economies of scale, and the post-information age, founded on personalisation. [29] 

It can be observed that automation and digitalisation are intertwined in multiple ways. Firstly, had there been no advancement in automation during the Second Industrial Revolution, there would have been no need to develop information technology, as data would have remained at a manageable level. Secondly, the advent of digital computers has further intermingled these two concepts, to the extent that, in numerous cases, for something to be automated it first needs to be digitalised, and vice versa. In the architectural field alone, examples of this can be found in cybernetics in architecture and planning, digital fabrication, smart materials, and so on. Hence, although these two terms are fundamentally different – most obviously, automation is affiliated with the process of input and output, while digitalisation relates to information media – the following analysis makes no attempt to differentiate between the two. Instead, I discuss “the last mile” in the context of the reciprocity between these two concepts. After all, architecture itself sits at the convergence point between material objects and media technologies. [30] 

Leon Battista Alberti: Before the Information Age 

Digitalisation efforts made by architects, however, appeared to come earlier than such attempts made in industrial settings of the late 19th century. This spirit can be traced back to Alberti’s insistence on identicality during information transmission, by compressing two-dimensional and three-dimensional information into digits – which is exemplified by Descriptio Urbis Romae and De statua. [31] In terms of architecture, as mentioned previously, he positions built architecture as an exact copy of architects’ intention. [32] This stance might be influenced by his views on painting. First, he maintains that all arts, including architecture, are subordinate to paintings, where “the architraves, the capitals, the bases, the columns, the pediments, and all other similar ornaments” came from. [33] Second, in his accounts, “the point is a sign” that can be seen by eyes, the line is joined by points, and the surface by lines. [34] As a result, the link between signs and architecture is established through paintings since architecture is derived from paintings and paintings from points/signs.  

Furthermore, architecture can also be built according to the given signs. In Alberti’s words, “the whole art of buildings consists in the design (lineamenti), and in the structure”, and by lineamenti he means the ability of architects to find “proper places, determinate numbers, just proportion and beautiful order” for their constructions. [35] It can be assumed that, if buildings are to be identical to their design, then, to begin with, there must be “determinate numbers” to convey architects’ visions by digital means – such as De statua (Fig. 2). Also, in translating the design into buildings, these numbers and proportions should remain free of any distortion as they are placed in actual places – places studied and measured by digital means, just like Descriptio Urbis Romae (Fig. 2). 

Although the Albertian design process reflects the spirit of the mechanical age, insisting on the identicality of production, it can be argued that his pursuit of precise copying was also influenced by his pre-modern digital inventions being used to manage data. [36] Therefore, what signs/points mean to architecture for Alberti can be compared to what bits mean to information for Negroponte, as the latter is composed of the former and can be retrieved from the former. Ideally, this translation process can be achieved by means of digitalisation. 

Figure 2 Descriptio Urbis Romae (Left) and De statua (Right) [37] 

Yet it is obvious that the last mile for Alberti was vastly longer than that for Negroponte. As Giorgio Vasari noted in the case of the Servite Church of the Annunziata, while Alberti’s drawings and models were employed for the construction of the rotunda, the result turned out to be unsatisfactory, with the arches of the nine chapels falling backwards from the tribune due to construction difficulties. [38] Also, in the loggia of the Via della Vigna Nuova, his initial plan to build semi-circular vaults was aborted because of the inability to realise this shape on-site. [39] These two cases suggest that the allographic design process – employing precise measurements and construction – which heralded modern digital modelling software and 3D-printing technologies, was deeply problematic in Alberti’s time. 

This problem was recognised by Alberti himself in his De re aedificatoria, when he wrote that to be “a wise man”, one cannot stop in the middle or at the end of one’s work and say, “I wish that were otherwise”. [40] In Alberti’s opinion, this problem can be offset by making “real models of wood and other substances”, as well as by following his instruction to “examine and compute the particulars and sum of your future expense, the size, height, thickness, number”, and so on. [41] While models can be completed without being exactly precise, architectural drawings should achieve the exactness measured “by the real compartments founded upon reason”. [42] According to these descriptions, the design process conceived by Alberti can be summarised as Figure 3. 

Figure 3 Albertian Design Process 

If, as previously discussed, architecture and its context can be viewed as an assembly of points and signs, the Albertian design process can be compared to how these data are collected, analysed and judged until the process reaches the “good to print” point – the point when architects exit and construction begins. Nonetheless, what Vasari has unveiled is that the collection, analysis and execution of data can fail due to technological constraints, and this failure impedes architects from making a sensible judgement. Here, the so-called “technological constraints” are what I consider to be “the last mile” that can be found across the Albertian design process. As Vasari added, many of these technological limitations at that time were surmounted with the assistance of Salvestro Fancelli, who realised Alberti’s models and drawings, and a Florentine named Luca, who was responsible for the construction process. [43] Regardless of these efforts, Alberti remarked that only people involved in intellectual activities – especially mathematics and paintings – are architects; the opposite of craftsmen. [44] Subsequently, the challenges of confronting “the last mile” are removed from architects’ responsibilities through this ostensibly neutral design process, narrowing the scope of who is eligible to be called an architect. The marginalisation of artisanal activities, either those of model makers, draughtsmen or craftsmen, is consistent with attributing the laborious last mile of data collection, analysis and execution – measuring, model making, constructing – exclusively to their domain. 

While the division of labour is necessary for architecture, as John Ruskin argued, it would be “degraded and dishonourable” if manual work were less valued than intellectual work. [45] For this reason, Ruskin praised Gothic architecture with respect to the freedom granted to craftsmen to execute their own talents. [46] Such freedom, however, can be expected if the last mile is narrowed to the extent that, through digitalisation/automation, people can be at the same time both architects and craftsmen. Or can it? 

Fun Palace: At the Turn of the Information and Post-Information Age 

Whilst the Albertian allographic mode of designing architecture has exerted a profound impact on the architectural discipline, due to subsequent changes to the ways architects have been trained, from the site to the academy, this ambition of separating design from buildings was not fulfilled, or even agreed upon among architects, by the second half of the 20th century. [47] Besides, the information age based on economies of scale had limited influence on architectural history, except for bringing about a new functional area – the control room. [48] Architecture’s initial encounters with the digital revolution after Alberti’s pre-modern technologies can be traced back to the 1960s, when architects envisaged futuristic cybernetic-oriented environments. [49] In contrast to Alberti’s emphasis on the identicality of information – the information per se – this time, digitalisation and information in architecture conveyed a rather different message. 

Gordon Pask defined cybernetics as “the field concerned with information flows in all media, including biological, mechanical, and even cosmological systems”. [50] By emphasising the flow of data – rather than the information per se – cybernetics distinguishes itself in two respects. Firstly, it is characterised by attempts at reterritorialization – it breaks down the boundaries between biological organisms and machines, between observers and systems, and between observers, systems and their environments, during its different development phases – categorised respectively as first-order cybernetics (1943–1960), second-order cybernetics (1960–1985) and third-order cybernetics (1985–1996). [51]  

Secondly, while data and information became secondary to their flow, catalysed by technologies and mixed realities, cybernetics is also typified by the construction of frameworks. [52] The so-called framework was initially perceived as a classifying system for all machines and later, after computers became more widely available and powerful, it began to be recognised as the computational process. [53] This thinking also leads to Stephen Wolfram’s assertion that the physical reality of the whole universe is generated by the computational process and is itself a computational process. [54] Herein lies the fundamental difference between the Albertian paradigm and cybernetics: the former is based on mathematical equations, while the latter attempts to understand the world as a framework/computation. [55] Briefly, in cybernetics theory, information per se is subordinate to the flow of information, and this flow can in turn be subsumed into the framework, later known as the computational process (Fig. 4). 

Figure 4 Information in Cybernetics Theory 

In Cedric Price’s Fun Palace, this hierarchical order resulted in what Isozaki described as “erasing architecture into system” after its partial completion (Fig. 5). [56] Such an erasure of architecture was rooted in the conceptual process, since the cybernetics expert in charge of the Fun Palace was Gordon Pask, who founded his theory and practice on second-order cybernetics. [57] Especially so, considering that one major feature of second-order cybernetics is what Maturana and Varela termed “allopoiesis” – a process of producing something other than the system’s original components – it is understandable that if the system is architecture, it would generate something other than architecture. [58] In the case of the Fun Palace, it was presupposed that architecture is capable of generating social activities, and that architects can become social controllers. [59] More importantly, Cedric Price rejected all that is “designed” and instead only made sketches of indistinct elements, diagrams of forces, and functional programs, rather than architectural details. [60] All these ideas, highlighting the potential of regarding architecture as the framework of computing – in contrast to seeing architecture as information – rendered the system more pronounced and set architecture aside. 

Figure 5 Fun Palace in London before Demolition [61] 

By rejecting architecture as pre-designed, Price and Littlewood strove to problematize the conventional paradigm of architectural authorship. They highlighted that the first and foremost quality of the space should be its informality, and that “with informality goes flexibility”. [62] This envisages user participation by rejecting fixed interventions by architects, such as permanent structures or anchored teak benches. [63] In this regard, flexibility is no longer positioned as a trait of buildings but of use, encouraging users to appropriate the space. [64] As a result, it delineates a scenario of “the death of the author” in which buildings are no longer viewed as objects by architects, but as bodily experiences by users – architectural authorship is shared between architects and users. [65] 

However, it would be questionable to claim the anonymity of architectural authorship – anonymous in the sense of “the death of the author” – based on the insignificant traditional architectural presence in this project, as Isozaki did. [66] To begin with, Isozaki himself remarked that in its initial design, the Fun Palace would have been “bulky”, “heavy”, and “lacking in freedom”, indicating the deficiency of transportation and construction technologies at that time. [67] Apart from the last mile to construction, as Reyner Banham explained, if the Fun Palace’s vision of mass-participation was to be accomplished, three premises had to be met – skilful technicians, computer technologies that ensure interactive experiences and programmable operations, and a secure source of electricity connected to the state grid. [68] While the last two concerns relate to technological and infrastructural constraints, the need for technicians suggests that, despite its claims, this project was not a fully automated one. The necessary involvement of human factors to assist this supposedly automated machine is further confirmed in Price and Littlewood’s account that “the movement of staff, piped services and escape routes” would be contained within “stanchions of the superstructure”. [69] Consequently, if architects can extend their authorship by translating elements of indeterminacy into architectural flexibility, and users can be involved by experiencing and appropriating the space, it would be problematic to leave the authorship of these technicians unacknowledged and confined within service pipes. [70] 

The authorship of the Fun Palace is further complicated when the content of its program is scrutinised. Price and Littlewood envisaged that people’s activities would feed into the system, and that decisions would be made according to this information. [71] During this feed-in and feedback process, human activities would be quantified and registered in a flow chart (Fig. 6). [72] However, the hand-written proposed list of activities in Figure 6 shows that human engagement is inseparable from the ostensibly automated flow chart. The arrows and lines mask the human labour that is essential for observing, recognising, and classifying human activities. These tasks are the last mile of machine learning, which still requires heavy human participation even in the early 21st century. 

For instance, when the artificial intelligence project ImageNet was developed in 2007 to recognise and identify the main object in pictures, its developers found it impossible to increase the system’s accuracy by developing the AI alone (and only assisting it when it failed). [73] Ultimately, they improved the accuracy of ImageNet’s algorithms by finding a “gold standard” for labelling the object – not through the development of the AI itself, but by using 49,000 on-demand workers from the online outsourcing platform MTurk to perform the labelling process. [74] This example suggests that if the automation promised by the Fun Palace were to be achieved, it would likely require more than just the involvement of architects, users, and technicians. At the time of the Fun Palace’s original conception, the attempt was not fulfilled due to the immaturity of computing technologies. Yet if such an attempt were made in the 2020s, it is likely that architectural authorship would be shared among architects, users, technicians, and ghost workers from platforms such as MTurk. 

Figure 6 Cybernetic Diagram (Left) and Proposed Activities (Right) [75] 

Returning to the topic of cybernetics: whilst cybernetic theories tend to redefine the territories of the architectural system by including what previously lay outside it – machines, observers, adaptive environments – the example of the Fun Palace has shown that this process of blurring boundaries would not be possible without human assistance, at least initially. The flow of information between these spheres requires human interventions to make the process feasible and comprehensible because, in essence, “the information source of machine learning (whatever its name: input data, training data or just data) is always a representation of human skills, activities and behaviours, social production at large”. [76] 

Houses of Mass-Customisation: In the Post-information Age 

Although cybernetics theories influenced architectural discourse, metaphorically or practically, in multiple ways – from Metabolism and Archigram to Negroponte and Cedric Price – such impact diminished after the 1970s, in parallel with the near-total banishment of cybernetics as an independent discipline from academia. [77] After a long hibernation during “the winter of artificial intelligence”, architecture’s next encounter with digital revolutions happened in the 1990s. [78] It was triggered by the increasing popularity and affordability of personal computers – contrary to the expectations of cybernetics engineers, who back in the 1960s dreamt that computers would increase both in power and in size. [79] These distinctive material conditions underlie the difference between the second-order cybernetics of the 1960s and architecture’s first digital turn in the 1990s. I contend that this distinction can be explained by comparing Turing’s universal machine with Deleuze’s notion of the “objectile”. 

As Stanley Mathews argued, the Fun Palace works in the same way as the universal machine. [80] The latter is a precursor of modern electronic computers, which can function as different devices – typewriters, drawing boards, or other machines – according to the different codes it receives (Fig. 7). [81] Comparatively, “objectile” connotes a situation in which a series of variant objects is produced based on their shared algorithms (Fig. 8). [82] These products are the so-called “non-standard series”, whose key definition relates to their variance rather than their form. [83]  

Figure 7 Simplified Diagram of the Universal Machine 
Figure 8 Non-standard Production 

While the universal machine demands considerable resources to support its every change – an infinite one-dimensional tape on which programmers can mark the symbols of any instruction, in order to claim universality – non-standard production can operate on a smaller scale and in less demanding environments. [84] The emphasis on variance in non-standard production also indicates a shift of attention from the “process” underscored by second-order cybernetics towards the products of certain parametric models. When the latter is applied to architecture, the physical building regains its significance as the variable product. 

However, this does not mean a total cut-off between cybernetics and non-standard production. Since human-machine interactions are crucial for customising according to users’ input, I maintain that mass-customisation reconnects architecture with first-order cybernetics whilst resisting the notions of chaos and complexity intrinsic to second-order cybernetics.  

Figure 9 Flatwriter [85] 

Such a correlation can be justified by comparing two examples. First, the visionary project Flatwriter (1967) by the Hungarian architect Yona Friedman proposed a scenario in which users choose their preferred apartment plan from several patterns of spatial configurations, locations, and orientations. [86] Based on their preferences, they would receive optimised feedback from the system (Fig. 9). [87] This optimisation process would consider issues concerning access to the building, comfortable environments, lighting, communication, and so on. [88] Given that it rejects chaos and uncertainty by adjusting users’ selections towards certain patterns of order and layout, this user-computer interaction system is essentially an application of first-order cybernetics, as Yiannoudes argued. [89] Contemporary open-source architectural platforms are based on the same logic. As the founder of WikiHouse argued, since the target group of mass-customisation is the 99 per cent who are constantly overlooked by the normative production of buildings after the retreat of state intervention, designing “normal” environments for them is the primary concern – transgression and disorder should be set aside. [90] As Figure 10 illustrates, similarly to Flatwriter, in theory WikiHouse would pre-set design rules and offer design proposals according to the calculations of the parametric model. [91] These rules would follow a “LEGO-like system”, which produces designs by arranging and composing standard types or systems. [92] Both Flatwriter’s optimisation and WikiHouse’s “LEGO-like system” pursue design in accordance with patterns, discouraging chaotic results. 

Figure 10 Designing Process for a WikiHouse [93] 

Nevertheless, neither Flatwriter nor WikiHouse has achieved what is supposed to be an automatic process of using parametric models to generate a variety of designs. For Flatwriter, the last mile of automation could be ascribed to the unavailability of computers capable of performing the calculations or processing the images. For WikiHouse, the project has not yet fulfilled its promise of developing algorithms for design rules that resemble how the “LEGO blocks” are organised. Specifically, at the current stage, the plans, components and structures of WikiHouse are designed in SketchUp by hand. [94] The flexibility granted to users is achieved by grouping plywood lumber into components and allowing users to duplicate them (Fig. 11). Admittedly, if users are proficient in SketchUp, they could possibly customise their WikiHouse on demand – but that would then go against the promise of democratising buildings through open-source platforms. [95]  

Figure 11 SketchUp Models of WikiHouse [96] 

Consequently, the last mile of automation again causes a conundrum of architectural authorship. Firstly, in both cases, never mind “the death of the author”, it appears that there is no author to be identified. One can argue that it signals a democratic spirit, anonymising the once Howard Roark-style architects and substituting them with a “creative common”. Nonetheless, it must be cautioned that such substitution takes time, and during this time, architects are obliged to be involved when automation fails. To democratise buildings is not to end architects’ authorship over architecture, but conceivably, for a long time, to be what Ratti and Claudel called “choral architects”, who are at the intersection of top-down and bottom-up, orchestrating the transition from the information age of scale to the post-information age of collaboration and interactivity. [97] Although projects with similar intentions of generating design and customising housing through parametric models – such as Intelligent City and Nabr – may prove to be more mature in their algorithmic process, architects are still required to coordinate across extensive sectors – clients’ inputs, design automation, prefabrication, logistics, and construction. [98] Architectural authorship in this sense is not definitive but relational, carrying multitudes of meanings and involving multiplicities of agents. [99]  

In addition, it would be inaccurate to claim architectural authorship for the user, even though these projects all prioritise users’ opinions in the design process. By hailing first-order cybernetics while rejecting the second order, advocating order while disapproving of disorder, they risk the erasure of architectural authorship – just as those who play with LEGO have no authorship over the brand, to extend the metaphor of the “LEGO-like system” in WikiHouse. This is especially true given that a digital turn in terms of technology does not guarantee a cognitive turn in terms of thinking. [100] Assuming that the capitalist characteristics of production will not change, technological advancements are likely to be appropriated by corporate and state power, whether by means of monopoly or censorship.  

Figure 12 Non-standard Production After Repositioning Users 

This erasure of human agency should be further elucidated in relation to the suppression of chaos in these systems. As Robin Evans explained, there are two types of methods for addressing chaos: (1) preventing humans from making chaos, by organising humans; and (2) limiting the effects of chaotic environments, by organising the system. [101] While Flatwriter and WikiHouse choose to conform to the former, at the expense of diminishing human agency, it is necessary to reinvite observers and chaos as integral parts of the system on the way towards mass-customisation and mass-collaboration (Fig. 12). 

Conclusion 

For Walter Benjamin, “the angel of history” moves into the future with its face turned towards the past, where wreckages were piled upon wreckages. [102] For me, addressing the paradox of “the last mile” in the history of architectural digitalisation is this backward gaze that can possibly provide a different angle to look into the future.  

This article mainly discussed three moments in architectural history when technology failed to live up to the expectation of full automation/digitalisation. Such failure is where “the last mile” lies. I employ “the last mile” as a perspective to scrutinize architectural authorship in these moments of digital revolutions. Before the information age, the Albertian notational system can be regarded as one of the earliest attempts to digitalise architecture. Alberti’s insistence on the identical copying between designers’ drawings and buildings resulted in the divide between architects as intellectuals and artisans as labourers. However, this allographic mode of architectural authorship was not widely accepted even into the late 20th century.  

At the turn of the information and post-information ages, Cedric Price’s Fun Palace was another attempt made by architects to respond to the digital revolution in the post-war era. It was influenced by second-order cybernetics theories that focused on the flow of information and the computational process. Buildings were deemed only a catalyst, and architectural authorship was shared between architects and users. Yet by examining how the Fun Palace failed in the last mile, I put forward the idea that this authorship should also be attributed to the technicians and ghost workers assisting the computation processes behind the stage. 

Finally, I analysed two case studies of open-source architectural platforms established for mass-customisation. By comparing Flatwriter of the cybernetics era and WikiHouse of the post-information age, I cautioned that both systems degrade architectural authorship into emptiness, by excluding users and discouraging acts of chaos. Also, by studying how these systems fail in the last mile, I position architects as “choral architects” who mediate between the information and post-information age. Subsequently, architectural authorship in the age of mass-customisation and mass-collaboration should be regarded as relational, involving actors from multiple positions. 

References

  1. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (New York: Houghton Mifflin Harcourt Publishing Company, 2019).
  2. Gray and Suri.
  3. Gray and Suri.
  4. Mario Carpo, The Alphabet and the Algorithm (London: The MIT Press, 2011), p. 22.
  5. Carpo, The Alphabet and the Algorithm, p. 22.
  6. Carpo, The Alphabet and the Algorithm, pp. 22–23.
  7. Mario Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, MA: The MIT Press, 2017), pp. 131, 140.
  8. Antoine Picon, ‘From Authorship to Ownership’, Architectural Design, 86.5 (2016), pp. 39–40.
  9. Picon, ‘From Authorship to Ownership’, pp. 39 & 41.
  10. Philip F. Yuan and Xiang Wang, ‘From Theory to Praxis: Digital Tools and the New Architectural Authorship’, Architectural Design, 88.6 (2018), 94–101 (p. 101) <https://doi.org/10.1002/ad.2371>.
  11. ‘“The Last Mile” An Exciting Play’, New Leader with Which Is Combined the American Appeal, 10.18 (1930), 6; Benjamin B Ferencz, ‘Defining Aggression–The Last Mile’, Columbia Journal of Transnational Law, 12.3 (1973), 430–63; John Osborne, ‘The Last Mile’, The New Republic (Pre-1988) (Washington, 1980), 8–9.
  12. Donald F. Burnside, ‘Last-Mile Communications Alternatives’, Networking Management, 1 April 1988, 57.
  13. Mikko Punakivi, Hannu Yrjölä, and Jan Holmström, ‘Solving the Last Mile Issue: Reception Box or Delivery Box?’, International Journal of Physical Distribution and Logistics Management, 31.6 (2001), 427–39 <https://doi.org/10.1108/09600030110399423>.
  14. Gray and Suri, p. 12.
  15. Gray and Suri, p. 12.
  16. Matteo Pasquinelli and Vladan Joler, ‘The Nooscope Manifested: AI as Instrument of Knowledge Extractivism’, 2020, pp. 1–23 (p. 19) <https://doi.org/10.1007/s00146-020-01097-6>.
  17. Gray and Suri, pp. 12 & 71.
  18. Carpo, The Second Digital Turn: Design Beyond Intelligence, pp. 9, 18 & 68.
  19. Carpo, The Second Digital Turn: Design Beyond Intelligence, pp. 5, 18 & 68.
  20. James Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (London: Harvard University Press, 1986), p. 295.
  21. Hamid R. Ekbia and Bonnie Nardi, Heteromation, and Other Stories of Computing and Capitalism (Cambridge, Massachusetts: The MIT Press, 2017), p. 25.
  22. Ekbia and Nardi, pp. 25–26.
  23. Michael L. Dertouzos, ‘Individualized Automation’, in The Computer Age: A Twenty-Year View, ed. by Michael L. Dertouzos and Joel Moses, 4th edn (Cambridge, Massachusetts: The MIT Press, 1983), p. 52.
  24. Ekbia and Nardi, p. 26.
  25. Antoine Picon, Digital Culture in Architecture : An Introduction for the Design Professions (Basel: Birkhäuser, 2010), p. 16.
  26. Beniger, p. 433.
  27. Picon, Digital Culture in Architecture : An Introduction for the Design Professions, pp. 24–26.
  28. Nicholas Negroponte, Being Digital (New York: Vintage Books, 1995), pp. 11 & 16.
  29. Negroponte, pp. 163–64.
  30. Carpo, The Alphabet and the Algorithm, p. 12.
  31. Carpo, The Alphabet and the Algorithm, pp. 54–55.
  32. Carpo, The Alphabet and the Algorithm, p. 26.
  33. Leon Battista Alberti, On Painting, trans. by Rocco SiniSgalli (Cambridge: Cambridge University Press, 2011), p. 45.
  34. Alberti, On Painting, p. 23.
  35. Leon Battista Alberti, The Ten Books of Architecture (Toronto: Dover Publications, Inc, 1986), p. 1.
  36. Carpo, The Alphabet and the Algorithm, p. 27.
  37. ‘Architectural Intentions from Vitruvius to the Renaissance’ [online] <https://f12arch531project.fil es.wordpress.com/2012/10/xproulx-4.jpg>; ‘Alberti’s Diffinitore’ http://www.thesculptorsfuneral.com /episode-04-alberti-and-de-statua/7zf3hfxtgyps12r9igveuqa788ptgj [accessed 23 April 2021].
  38. Giorgio Vasari, The Lives of the Artists, trans. by Julia Conaway & Peter Bondanella (Oxford: Oxford University Press, 1998), p. 182.
  39. Vasari, p. 181.
  40. Alberti, The Ten Books of Architecture, p. 22.
  41. Alberti, The Ten Books of Architecture, p. 22.
  42. Alberti, The Ten Books of Architecture, p. 22.
  43. Vasari, p. 183.
  44. Mary Hollingsworth, ‘The Architect in Fifteenth-Century Florence’, Art History, 7.4 (1984), 385–410 (p. 396).
  45. Adrian Forty, Words and Buildings: A Vocabulary of Modern Architecture (New York: Thames & Hudson, 2000), p. 138.
  46. Forty, p. 138.
  47. Forty, p. 137; Carpo, The Alphabet and the Algorithm, p. 78.
  48. Picon, Digital Culture in Architecture : An Introduction for the Design Professions, p. 20.
  49. Mario Carpo, ‘Myth of the Digital’, Gta Papers, 2019, 1–16 (p. 3).
  50. N. Katherine Hayles, ‘Cybernetics’, in Critical Terms for Media Stuies, ed. by W.J.T. Mitchell and Mark B.N. Hansen (Chicago and London: The University of Chicago Press, 2010), p. 145.
  51. Hayles, p. 149.
  52. Hayles, pp. 149–50.
  53. Socrates Yiannoudes, Architecture and Adaptation: From Cybernetics to Tangible Computing (New York and London: Taylor & Francis, 2016), p. 11; Hayles, p. 150.
  54. Hayles, p. 150.
  55. Stephen Wolfram, A New Kind of Science (Champaign: Wolfram Media, Inc., 2002), pp. 1, 5 & 14.
  56. Arata Isozaki, ‘Erasing Architecture into the System’, in Re: CP, ed. by Cedric Price and Hans-Ulrich Obrist (Basel: Birkhäuser, 2003), pp. 25–47 (p. 35).
  57. Yiannoudes, p. 29.
  58. Yiannoudes, p. 14.
  59. Stanley Mathews, ‘The Fun Palace as Virtual Architecture: Cedric Price and the Practices of Indeterminacy’, Journal of Architectural Education, 59.3 (2006), 39–48 (p. 43); Yiannoudes, p. 26.
  60. Isozaki, p. 34; Yiannoudes, p. 50.
  61. Stanley Mathews, p. 47.
  62. Cedric Price and Joan Littlewood, ‘The Fun Palace’, The Drama Review, 12.3 (1968), 127–34 (p. 130).
  63. Price and Littlewood, p. 130.
  64. Forty, p. 148.
  65. Jonathan Hill, Actions of Architecture (London: Routledge, 2003), pp. 68–69.
  66. Isozaki, p. 34.
  67. Isozaki, p. 35.
  68. Reyner Banham, Megastructure: Urban Futures of the Recent Past (London: Thames and Hudson, 1972).
  69. Price and Littlewood, p. 133.
  70. Forty, pp. 142-8.
  71. Yiannoudes, p. 29.
  72. Yiannoudes, p. 31.
  73. Gray and Suri, pp. 33–34.
  74. Gray and Suri, p. 34.
  75. Cedric Price, Fun Palace Project (1961-1985), <https://www.cca.qc.ca/en/archives/380477/cedric-price-fonds/396839/projects/399301/fun-palace-project#fa-obj-309847> [accessed 25 April 2021].
  76. Pasquinelli and Joler, p. 19.
  77. Yiannoudes, p. 18; Carpo, ‘Myth of the Digital’, p. 11; Hayles, p. 145.
  78. Mario Carpo, ‘Myth of the Digital’, pp. 11–13.
  79. Carpo, ‘Myth of the Digital’, p. 13.
  80. Mathews, p. 42.
  81. Yiannoudes, p. 33.
  82. Carpo, The Alphabet and the Algorithm, p. 99.
  83. Carpo, The Alphabet and the Algorithm, p. 99.
  84. Yiannoudes, p. 50.
  85. Yiannoudes, p. 30.
  86. Yiannoudes, p. 30.
  87. Yiannoudes, p. 30.
  88. Yiannoudes, p. 31.
  89. Yiannoudes, p. 31.
  90. Alastair Parvin, ‘Architecture (and the Other 99%): Open-Source Architecture and the Design Commons’, Architectural Design: The Architecture of Transgression, 226, 2013, 90–95 (p. 95).
  91. Open Systems Lab, ‘The DfMA Housing Manual’, 2019 <https://docs.google.com/document/d/1OiLXP7QJ2h4wMbdmypQByAi_fso7zWjLSdg8Lf4KvaY/edit#> [accessed 25 April 2021].
  92. Open Systems Lab.
  93. Open Systems Lab.
  94. Carlo Ratti and Matthew Claudel, ‘Open Source Gets Physical: How Digital Collaboration Technologies Became Tangible’, in Open Source Architecture (London: Thames and Hudson, 2015).
  95. Parvin.
  96. ‘An Introduction to WikiHouse Modelling’, dir. by James Hardiman, online film recording, YouTube, 5 June 2014, <https://www.youtube.com/watch?v=qB4rfM6krLc> [accessed 25 April 2021].
  97. Carlo Ratti and Matthew Claudel, ‘Building Harmonies: Toward a Choral Architect’, in Open Source Architecture (London: Thames and Hudson, 2015).
  98. Oliver David Krieg and Oliver Lang, ‘The Future of Wood: Parametric Building Platforms’, Wood Design & Building, 88 (2021), 41–44 (p. 44).
  99. Ratti and Claudel, ‘Building Harmonies: Toward a Choral Architect’.
  100. Carpo, The Second Digital Turn: Design Beyond Intelligence, p. 162.
  101. Robin Evans, ‘Towards “Anarchitecture”’, in Translations From Drawings to Building and Other Essays (从绘图到建筑物的翻译及其他文章), trans. by Liu Dongyang (Beijing: China Architecture & Building Press, 2018), p. 20.
  102. Walter Benjamin, Illuminations: Essays and Reflections (New York: Schocken Books, 2007), p. 12.

Figure 1 – Sea of Digital Models @FONDAMENTA
Fondamenta
architectural language, BIM, Building Information Modelling, construction, Fondamenta, Generalist Architect
Office Fondamenta

mail@fondamenta.archi
Add to Issue
Read Article: 2397 Words

The following piece is transcribed from Fondamenta’s talk at the B-pro Open Seminar that took place at the Bartlett School of Architecture on 8 December 2021.

Figure 1 – Sea of Digital Models, FONDAMENTA

We are interested in the construction of spaces, with a strong belief in research and experimentation: building is the end to which architecture must strive in order to become itself, and technology is the tool used to reach this result. We question conventions and support contradictions; a fascination for structure and freedom from dogma are the premises of this research. Structure is the trace of space: it organises the program and generates the building. Governance through technology is the key to the creation of an architectural organism, and we see our projects as opportunities to conduct research on structural systems and the use of materials. We push materials to and against their limits, and we design through a systematic approach, relative to structures, without forgetting that the ultimate user of this organism is the human being. We are glad to have seen four very interesting presentations. We connect strongly with the work of Luigi Moretti, whom we deeply admire as an architect: one of the first pioneers to understand spaces as organisms, creating them with a scientific logic and developing four precise categories for designing them.

What is technology for us? It is an instrument that we face daily; we use technology to follow our purpose, and to reaffirm the central role of the Architect in the building process. Technology drives efficiency, precision and control through the entire process, allowing governance of the economy of the project. The central issue in the use of technology is always WHO is responsible for its governance. We believe the Architect should be able to take this role.

Figure 2 – Scheme showing the impact of the technological Governance of the Project, Fondamenta

Today we don’t want to talk about specific software and the use we make of it, but rather to point out the great opportunity that a specific use of technology could give Architects today. We were trained in a university founded on Vitruvian philosophy, in which Architects must have a holistic approach to Architecture, being as generalist as possible within the field of the discipline. Over time, we have witnessed a dismantling of the so-called “Generalist Architect” in favour of over-specialisation in specific aspects of our discipline. The Architect has been relegated to one consultant among the many who concur to create an architectural project. Instead, we believe the Architect must be the central figure, capable of managing the complexities of today’s world through the governance of many actors and aspects. This, in our opinion, can only be possible with the aid of technology. Our last resort is to believe a generalist Architect may still exist…

To achieve this, we superimpose our own customised system on existing BIM (Building Information Modelling) technology. For three years we have been testing a Vocabulary of codes and protocols that are applied to BIM and become the common “language” inside the digital model that expresses the Architectural Project – a language which all the actors involved have to learn and share. We, as Architects, are responsible for the governance of this centralised model and system, being the ones who write the laws of this digitally-organised government. We didn’t start our practice with this idea; it arose as a consequence of the first project we built and the impossibility we faced in holding a central role in the process, the consequence of which was losing power and responsibility over the process, with a negative impact on the project. We are still working on it daily to improve it; it is an ongoing process. If we had to depict with a diagram the shift between the approach we had at the beginning and the approach we have now, this slide expresses it [indicates screen].

The centralised system we are looking for allows different actors to interact inside a given structure, with a given language crafted by us.

Figure 3 – FONDAMENTA BIM Alphabet, Fondamenta

To go into more detail, the above charts depict specific aspects of our customised Mother model. The strength of BIM is that it enables all the consultants involved in the process to implement and add their knowledge and information inside a common, single-instance digital Model. Codes and rules were developed to share and communicate between the different disciplines, which belong to different worlds. The most important layer to be translated is that of economy: each aspect of the project relates to an economic parameter that controls the cost of the project. Starting from existing software, we added our customised logic and vocabulary.
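
By way of illustration only – the element codes, quantities and unit costs below are invented, not Fondamenta’s actual vocabulary – a coded element of such a model can carry its discipline and an economic parameter, so that the shared model can be totalled per discipline at any stage:

```python
from dataclasses import dataclass

@dataclass
class CodedElement:
    code: str         # shared vocabulary code, e.g. "STR-W-001" (hypothetical)
    discipline: str   # "structure", "MEP", "architecture", ...
    quantity: float   # quantity measured from the model
    unit: str         # "m3", "m2", "kg", ...
    unit_cost: float  # the economic parameter attached to the code

    @property
    def cost(self) -> float:
        return self.quantity * self.unit_cost

# A toy extract of a coded model: every element speaks the same "language".
model = [
    CodedElement("STR-W-001", "structure", 42.0, "m3", 310.0),  # concrete wall
    CodedElement("STR-S-002", "structure", 1800.0, "kg", 2.4),  # steel truss
    CodedElement("MEP-P-001", "MEP", 120.0, "m", 18.5),         # piping run
]

# Governance view: the cost of the project per discipline, at any design stage.
totals: dict[str, float] = {}
for e in model:
    totals[e.discipline] = totals.get(e.discipline, 0.0) + e.cost
print(totals)  # {'structure': 17340.0, 'MEP': 2220.0}
```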

What we are seeing throughout our practice is that we can have control of the project from the very start. BIM generally arrives after an execution plan is already in place; instead, we deal with these premises from day zero – from the concept phase – and this is what makes the enormous difference. Following this scheme, all the actors begin to communicate at the very start, at the right time, without finding themselves in the position of compromise, putting on the table all the topics that, if worked out at the right time, can bring the project to more radical expressions. Hence there are incredible possibilities to push projects to their limit, making it possible to build without the project being jeopardised by an uncontrolled process.

We will show three of our projects. The first, our first built project, is a winery in Piemonte (2018-2020).

Figure 5 – Winery Cantina dei 5 Sogni, Extract from Casabella 921 @Marco Cappelletti 
Figure 7 – Winery Cantina dei 5 Sogni, Executive drawings for Steel formwork and concrete shells geometry, FONDAMENTA and Matteo Clerici 

In this project, our awareness of technology and its potential was limited and not yet evident; we ran this project without using BIM to solve design and governance issues. The winery project develops research on the pursuit of a seemingly impossible balance between different structural systems, which must coexist as one organism of concrete and steel. We designed and optimised the shell system together with our engineer, making it work as a structural truss holding the concrete pitched roof while containing part of the program. The double steel formwork of the shells – whose concrete was poured in one single day, without pause – was designed, drawn and sent directly to the manufacturer.

After this experience, we realised that we needed more technological support to control the construction process and push more projects forward – particularly in aspects such as economics, time and money, but also the sustainability of the process. This changing of the guard started with the series of projects we are building in Sicily, first among them the 18018EH houses near Noto. From this moment, we started governing the process with the aid of BIM – our instrument – from the beginning of conception.

Figure 8 – 18018EHSR Private House, External Rendering, DIMA 

This house is mostly underground, with only 30% of its surface exposed above ground. We are trying to develop a three-dimensional project in which the space develops along three axes, and all the load-bearing walls are made of local stone. The structural floor plan is created through a system of radii and circumferences. Through the use of software, we were able to optimise the construction lines, turning them from splines into radii, working with the technical consultants to develop the BIM model. This is a snapshot showing the massive amount of information inside this model.
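
The spline-to-radius conversion mentioned above can be sketched independently of any particular software: fitting a circle through three points sampled from a construction line replaces a free curve with a single buildable radius. A minimal sketch of the geometry (my illustration, not the office’s actual workflow):

```python
def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points,
    using the standard circumcircle (perpendicular-bisector) formulas."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear: no finite radius")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return (ux, uy), r

# Three points sampled from a spline-like construction line:
centre, radius = circle_through((0.0, 0.0), (4.0, 2.0), (8.0, 0.0))
print(centre, radius)  # (4.0, -3.0) 5.0 -> one buildable radius replaces the curve
```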

This is interesting because implementing information in a model is not enough to control it; there need to be instrumental rules in order to make an architecture real. This project will soon be delivered to a construction company. Costs, money and time are essential points in our profession: in order to have the possibility of realising our research, design cannot be detached from them. We are connected to, and interested in, the economy of the project, which sustains architectural processes through awareness in governance and allows us to control our design according to cost.

Figure 10 – 18018EHSR Private House, Axonometry showing construction aspect and codes, FONDAMENTA 

It was incredible how we managed to control the project and its design through our tools. For example, we like to show all these axonometric drawings – each code, of course, remains connected to a clear Excel chart that records the cost, the quantities, and all the details of that specific part of the model. Figuring out a way of communicating the mass of information that we were implementing in the digital model was another interesting aspect – something we are still developing, to make it even more readable for the actors involved. Of course, these are just a couple of the Excel spreadsheets connected to these axonometries!

Figure 11 – 18018EHSR Private House, Axonometry showing stone walls geometry and codes, FONDAMENTA 

In terms of design, we see the potential of technology as something that allows us to push further our research into space and structure. For example, here all of these walls will be made out of stone – blocks one metre long, 50 centimetres high, and 30 centimetres deep. In Grasshopper, we customised each one, producing a sort of “abacus” of all the walls, with specifications and a numbering system, which was then delivered to the construction company.
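
The logic of such an “abacus” is easy to sketch outside Grasshopper. The block size is the one stated above; the wall dimensions and the numbering scheme are invented for illustration:

```python
# Standard block, as described: 100 x 50 x 30 cm (length x height x depth).
BLOCK_L, BLOCK_H, BLOCK_D = 1.00, 0.50, 0.30  # metres

def wall_abacus(wall_id: str, length_m: float, height_m: float) -> list[dict]:
    """Enumerate the blocks of one stone wall with a numbering system,
    so that each block can be cut, labelled and placed by code."""
    rows = round(height_m / BLOCK_H)
    cols = round(length_m / BLOCK_L)
    return [
        {"code": f"{wall_id}-R{row:02d}-C{col:02d}",  # e.g. "W01-R01-C03"
         "dims_m": (BLOCK_L, BLOCK_H, BLOCK_D)}
        for row in range(1, rows + 1)
        for col in range(1, cols + 1)
    ]

# Hypothetical wall: 6 m long, 2.5 m high -> 5 rows x 6 columns = 30 blocks.
abacus = wall_abacus("W01", 6.0, 2.5)
print(len(abacus), abacus[0]["code"])  # 30 W01-R01-C01
```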

This technology enables us to build within a certain amount of time. If we reflect on past projects, time is something we really cannot negotiate – it is the hardest variable to negotiate today – and technology gives us the ability to control time more than any other aspect. We love to go back to the models, because we think this “ping-pong” between the digital tool and the making process gives us an awareness of reality: we must not lose control of what we are thinking and designing.

Figure 12 – 20027F Private House Renovation, Axonometry showing the project strategy, FONDAMENTA 

The last aspect that we are trying to show through this house – a project that has been under construction for four months now – is that we have reached a certain level of governance of the actors in the process from the very beginning. This is a renovation: we stripped out the existing building – the partition walls – but kept working with the existing concrete cage. We kept the load-bearing structure, made of concrete, and inserted a new steel structure, changing its form but keeping the volume untouched.

Wanting it to be a precise case study, we sat down with our consultants and engineers from the very beginning. All the possible actors were involved from the embryonic phase, and we designed together, trying to understand immediately all the realistic approaches that could potentially be achieved.

Figure 13 – 20027F Private House Renovation, Axonometry of the BIM Model, FONDAMENTA 
Figure 14 – 20027F Private House Renovation, Rendering, DIMA 

I’ll just show a couple of snapshots of the model that we delivered to the construction company, pointing out that it is the same model we had from the beginning. From the structures to the installations, every element was designed with the actors involved, long before the building process started on site.

It’s really important for us to underline that Architects have to be able to see and understand consultants and potential constraints as a possibility to further the design. This was not particularly easy for us to understand initially, because we were trained to see consultants and all the other actors as running in parallel to the project. Just like the scheme we showed: parallel lines that, at a certain point, intertwine. In that moment, you have a connection, and this connection has to be constant – hence the system we are developing, in which each actor involved in the process has to be aware of the language we share in order to achieve the project.

This is just a snapshot of the house at the moment; we’ve stripped out the partition walls and it’s just the concrete.

To conclude, BIM has a deep social impact, giving back to architecture and architects the power they should have in the process. It is then up to us to use it to create a social resistance and new approaches to contemporary society.

disk turned steel. 1965
HANS ULRICH OBRIST Interview with GETULIO ALVIANI 
discovery of light, GETULIO ALVIANI, HANS ULRICH OBRIST, immersive, raisonnée, structures
Hans Ulrich Obrist

hans-ulrich.obrist@serpentinegalleries.org
Add to Issue
Read Article: 5504 Words

10 April 2015, Milan, Miartalks

First edited transcription, Paola Nicolin 

Hans Ulrich Obrist: I would like to start right from the beginning. You told me about your uncle, but above all about the importance that Leonardo Da Vinci has always had in your work … 

Getulio Alviani: As a child, in my first years of school in Udine, the fair of Santa Caterina used to be held, with stalls of books and other things; there I came across two volumes, which I bought with the few cents I had then: one on Beato Angelico and one on Leonardo Da Vinci. I lived in the countryside back then, and therefore I loved nature very much. I loved seeing birds, crickets, moles, foxes, and in this book by Leonardo there was the “bestiary”. For me, it was great, because I thought it was wonderful that a man knew all those things that I experienced daily but knew absolutely nothing about. So I fell in love with Leonardo Da Vinci, and studied his drawings in small format, because at the time there were no books with colour photographs or enlargements. I remember a surprising thing that I always have in front of my eyes, which is how he had drawn the wind. For me, thinking that the wind could be drawn was incredible. 

From the early years of my life, I lived with two uncles, one of Austrian origin and the other born on the border with Yugoslavia. They were both over 50 years older than me, so I was always alone, surrounded only by everyday things, plants, and animals. There were those who worked as farmers, doctors, street cleaners, carpenters … I saw them all and I wondered, for example, “who knows why someone is a carpenter?” … I got to the point where I asked myself, “Why do I live? What am I capable of doing?” I realized then that I loved doing things with my hands, and I wanted to see. Then I began to get interested in this, and to discover, above all, that all I had in my mind were not images but “impressions” (for example, I now look at all of you, I see you, but tomorrow I will probably not remember your faces; what I will remember is the feeling I felt, whether there was empathy or not).  

With my brain I see things; for this reason, I became interested in the world of seeing and doing, and I started by going to see, for example, how an old sculptor near my house made the plaster casts for the statues destined for the graves in the cemetery. For me, seeing was the fundamental thing: seeing and knowing – for example, that plaster becomes hot with water, or that if clay dries up, it breaks – and so I began to understand what the world of doing is. I have always lived like this – until I did not want to do anything anymore [he laughs], like today, where everything is distorted and exploited, because torturers and cops have taken power. 

HUO: This idea of making is very clear and we will return to it later, talking about your inventions with aluminium. But I wanted to start by imagining building your catalogue raisonné: looking, for example, at the publications of your work, one can see that they often start with the geometric line drawings of the 1950s, and you have mentioned before the constant presence of geometry in your work. Can you tell me about these early works, these drawings that arise from the curiosity of seeing? 

GA: Mine was a series of observations, in general, but always a bit shifted. As a boy, I spent a lot of time in the studios of artisans, and then of architects – much older than me – and I went to take measurements with them and did all those things that intrigued a boy. It sometimes happened that some of them went to paint in the countryside – horses, for example, even if they were actually slightly futuristic horses, like those of Marcello d’Olivo; or Mimmo Biasi, who instead had a strong interest in vegetables and plants, which then underwent a process of abstraction. 

I have to admit that I did not know what to do, because I did not want to paint what was already there and looked perfect as it was. I wanted to catch something like the threads of light in the sky; I thought that the energy was passing in there – and I wondered how it was able to pass, because I could not see it. Then, at the time, there were the first telephone lines, so I wondered “maybe that’s how rumour travels, will the message stay the same, or be changed, and in what way?” For me, there was mystery in all this: I liked that even more, the mystery, trying to understand these things. Then I became interested in these free geometries, compositions of threads of light that crossed, intersected, overlapped – there were dozens of images in the skies of the countryside.  

However, after doing some curious work on the matter, I quit, because I thought I had exhausted the subject. I have never done things out of duty; I have done them as a game, because I have always had the pleasure of doing, of discovering, of seeing. They were, therefore, limited drawings, since I was about twenty years old at the time and everything I did was for pure pleasure. For example, in that surface [he indicates a painting from the catalogue] there is a black which, when it is hit by the light, becomes white – whiter than any other white – and this was thanks to the light. For me, these were discoveries, thinking that the white which comes out of black is whiter than “true white”. They were conversations with matter, simple non-transcendental questions … and slowly I began to live like this.  

Figure 1 – reflection relief with orthogonal incidence, steel. 1967, 5x480x960 cm, modules 5x80x80 cm

HUO: And after this phase come the “structures.” In this, we see a lot of the world of productive work, more than the world of art. Can you tell me about this epiphany that led you to build the structures, and how you discovered aluminium? 

GA: I had participated in a competition promoted by an electrical materials company in Brescia (AVE – ed.), and I had designed a valve which, compared to the previous ones, was very innovative. The prize, announced by Domus, was awarded to an architecture studio, but they told me that whoever had designed the valve could go to work for the company that organised the competition, to follow the production phase. So I went to Vestone (a town in the province of Brescia – ed.), where the factory was based, and there I discovered the world of more “committed” work – because until then, for me, the world had been one of “craftsmanship”; there instead I learned a world of “doing”, with large machines, industrial materials, and many people involved. And there, among the little things, I discovered new worlds, from melamine to silver contacts, from castings to presses – because I both took care of the execution of this first project of mine and took on the role of graphic designer for the company’s product catalogues. In this context, I found myself for the first time handling aluminium pieces coloured green, red, and yellow – which were basically mirrors. Seeing these perfect mirrors in metal was a surprising innovation. I said to myself, “but how does this mirror work?” Of course, I knew why the mirror reflected, but I had never thought about the fact that a mirror might not break, or might even bend.  

Then, in one of these small workshops that I frequented in the province of Udine, I went to dig with some cutters under this mirror, to see what was there. Initially it was all black, with a strong smell of sulphur, but I persisted, and then a blinding light came out, stronger than sunlight! And from there I understood how important light was, and that this material could accelerate light, just as a lens causes the sun’s rays to burn the ground.  

HUO: You always have a lens and a measuring tape with you, right? 

GA: I have two friends, the greatest friends I’ve ever had in life; I always have them with me, and they are the lens and the ruler. They have never betrayed me; they are always calm and reliable, and they make no mistakes.  

HUO: This is now where we can talk about the “discovery of light”. The interesting thing is that this research does not initially enter the world of art in Italy, but instead makes a first unexpected appearance passing through Ljubljana and Zagreb. I’m interested in this passage, because when I was a student I met Julije Knifer in Sète, France, where the artist had retired in the 90s, and he talked to me a lot about the Gorgona. You, Getulio Alviani, were there, at the moment of the birth of that movement, so I would like to understand how this meeting of extraordinary characters took place. 

GA: I was very attracted to Eastern [European] countries, because I have a mania for difficult things, those things that others don’t do. Everyone can do the easy things. Going to Paris, for example, was very simple, but going to Yugoslavia was quite another story. Everything was different there, even the smell of the air.  

My motivation was due a little to the fact that these countries were representative of Central Europe, the land my uncle, who was born in Austria, came from; on the other hand, I was fascinated by this completely different world, then beyond the “curtain” – for example, getting a visa took months, and you had to have valid reasons (which in my case were linked to family, since my mother and my aunt were born in places that became Yugoslavia). The roads were different, the people as well … in short, Yugoslavia at the time was another world. Furthermore, I must admit that unlike other parts of the world, where there was a certain atmosphere of joy and lightness, Yugoslavia was a more introverted, more reflective, more intimate, and poorer land. I like poverty a lot, because in poverty many things can be solved, while in wealth nothing is ever solved – contrary to what today’s rulers think, who aim at riches, their riches, to pretend to solve problems. Problems are solved when there is simplicity and brains, and things are done for the sake of others; while today there is a lot of imbecility combined with wickedness that only causes abuse.  

So, I landed in Slovenia. I had made two small surfaces of milled aluminium and placed them on a radiator in a small workshop, where they were noticed by Zoran Krzisnik, who came to this workshop to have furniture made. At the time, he was the director of the GAM in Ljubljana, which was very advanced by international standards; Ljubljana was the first city beyond the Iron Curtain to want to do innovative things, while elsewhere the situation was very stale. So Zoran Krzisnik saw these two little things – two small plates, in fact – and asked me what they were. I wasn’t sure what to tell him, so I told him how I had made them. He asked me if it was possible to make some larger ones, about one metre by one metre, and said that if I could, he would hold a small exhibition in a small gallery he had in Ljubljana. It was called Mala Galerija, which means precisely that: small gallery. He invited me to visit it, and then organised an exhibition. Some time later, in 1961, I made this presentation, and then learned that in the meantime Krzisnik had curated exhibitions by Zoran Mušič, Giuseppe Santomaso, artists from the École de Paris, and many others. Since then, these works of mine have allowed me to live in Eastern Europe for some time. 

I have continued to have a great love for crossing the border, going beyond: Slovakia, Poland, Lithuania, up to Russia. I learned from Krzisnik that at that time, in Zagreb, there were other young people exhibiting things similar to mine. So I went to Zagreb and set out to find out what was happening, and whether the work was like mine. But at the Gradska Galerija I found very different pieces; they had a spirit similar to mine, yet were completely different things, and so I saw the work of Almir Mavignier, Julio Le Parc, François Morellet, Marc Adrian, Ivan Picelj, and Julije Knifer. It was the “New Tendencies” exhibition, organised for the first time by an artist, Almir Mavignier. There, the whole world opened up for me. Krzisnik was organising the Biennale of graphics at the time, which was at the forefront of the world of graphics, and therefore many scholars – such as Umbro Apollonio, Giulio Carlo Argan and many others – arrived in Ljubljana. In Udine it would never have happened that the director of the Tate, or of the Moscow museum, or Umberto Eco would arrive; instead, I met everyone there, in Ljubljana, in a moment, and that world became my second home.  

It was in this context that a young person was listened to for what he was capable of doing – something I thought could never have happened in Italy. For example, the Studentski Centar in Zagreb [The Student Centre] was a large experimental centre run by artists and critics, directed by Brano Horwett. There, they invited me to create silkscreen works, and so I started to print them – not even knowing exactly what they were, but obtaining surprising results of crossed, overturned, superimposed, negativized, positivized lines. Then, when I came to Milan (where the headquarters of the factory I worked for were), I was able to show this kind of research to Lucio Fontana, and then to Paolo Scheggi, and they too began to work with this technique. Then Brano Horwett came to the Galleria del Deposito to develop all these graphic techniques, which in Italy had never even been thought to exist. We were taken by the fact that serigraphy could be done in series, and everyone – Max Bill, Richard Paul Lohse, Konrad Wachsmann, Victor Vasarely – explored this field, which was born in [the East]. And this is interesting.  

Figure 2 – cube with graphic texture opalescent pvc sheets, silkscreen and light. 1964-69, 330x330x300 cm

HUO: One of the important aspects of interviews is that of “protesting against the forgetfulness that exists in the world”, and there is a figure who is rarely talked about today but who is very important: the person who set up the exhibition. The exhibition itself is often forgotten; there is an amnesia in the art world about it. I would like it if you told us a little about Edo Kovačević and what you learned from him. 

GA: I learned everything from him. He was a figurative painter who took care of the installations in the Gradska Galerija in Zagreb; before then I had never thought that my works could be exhibited like this, suspended, supported, and so on. I thought they were simply “squares”. In fact, when I then held an exhibition of mine at Gradska, my works were about twenty “little things”, but he turned them into an eight-room exhibition, making them extraordinary – not through “effects”, as might happen today by focusing lights on them, but simply by placing one work on a background, one on a base, one as a small backdrop: and so with three surfaces, a room was set up.  

Kovačević was very simple and creative; I learned a lot from him – and, in fact, I have never had a work hung on my walls at home. I keep them in the garage, because works have to be exhibited for a short time, otherwise the eye gets used to them and you cannot see them anymore.  

I look at the works for a short time and then put them aside, to retrieve them months later and try to understand whether they are still valid or not. My impression is that works must be made for exhibitions, so that they communicate with each other: one must see number one, number two, and understand what they mean as one line. This is what I still do now. On the other hand, I have set up more exhibitions of my colleagues’ work than of my own, because in this way I really discover the works, what they are and what they represent. 

I believe that the works must be kept in the head. I have a collection of works myself, but I never see them. I got them all by making exchanges: from Fontana to Bill, Lohse, Albers and Mansurof, to Nelson, Kelly or Anuszkiewicz …  

The first exchange was in the early sixties, with Fontana: he asked me for something, I brought it to him and he said to me: “What do you want [for it]?” and I replied that I did not want anything, but timidly I proposed that he give me one of his works – and so it happened immediately. From then, I received everything through exchange. This then also enabled me to hold exhibitions of those artists, because I had so many works in hand: everything was possible because I had the works, avoiding transport and all the tasks required to make an exhibition that back then seemed insurmountable.  

HUO: All of this leads to your work as a curator. Andrea Bellini, who has been talking to me about your work for many years and who is at the origin of my research, was insistent that we talk about you as a curator. You are “the” curator of programmed art, and you have also written a lot about your colleagues, so it would be interesting if, after Ljubljana and Zagreb, we now arrive in Italy, with the N Group, and Programmed Art.  

GA: Immediately after the exhibition with Zoran Krzisnik in that small gallery, he asked me to curate a selection of works by our group of artists for the Ljubljana Biennale. So I began to collect works by those I esteemed – because otherwise I would not have had any interest: I wondered whether the artist should not exist, but only the work; whether it had, as it must have, a meaning and a dignity of its own to exist. And so I curated the Ljubljana Biennale. Later, I spent many years in Venezuela, directing the Jesús Soto Museum.  

HUO: Soto told me about this abandoned museum in Ciudad Bolivar and I would be interested in understanding how an artist experiences a museum in a curatorial sense. What is your vision of that today? 

GA: Exhibitions were held, and in this way I was able to see the cities and meet those who, perhaps because of their age, I would not have been able to meet in the future. There was always someone who hosted me. Jesús Rafael Soto was a close friend of mine; I often went to stay with him in Paris, or with his fellow Venezuelan, Otero. One day, he told me that he intended to build a large museum, and asked me to collaborate with him by gathering all the artist friends I could. So I did: from Sérgio de Camargo to Toni Costa, to Lucio Fontana, Gianni Colombo and many other good artists. 

I could not go to the inauguration, but then, after a few years, Soto called me and told me that his museum was in ruins – “se lo está comiendo el diablo” [the devil is eating it] – and asked me to go and see the situation and give him a hand restoring it. So, during a Holy Week in the 1980s, I went there and saw this museum, designed by Carlos Raúl Villanueva, a good architect and a friend of Le Corbusier. The museum consisted of a series of huge pavilions in the middle of the savannah. Unfortunately, the situation was terrible; there were bats and snakes inside, and the works had been ruined and were mouldy on the walls. There were about forty people who worked there: photographers, guides … and so it was that I lived in Venezuela for four or five years and worked to completely renovate it. 

HUO: Regarding Soto, and other Venezuelan artists who work a lot on the kinetic, there is one thing we haven’t talked about yet, and that is your surfaces. At a certain point, the series of “vibrating texture” surfaces begins. In a conversation with Giacinto Di Pietrantonio, you said that it would be nicer to think that “neon has chosen Flavin, mirrors Pistoletto, and aluminium has chosen me”. How did you come to these vibrated aluminium surfaces? 

GA: Actually, after having been the art director of an aluminium factory, I had perfect, wonderful machinery at my disposal. I’ve never had a studio; I worked where they were: if, in a particular place, there was a nice factory that produced a nice material, I went there and did something. And so, being in the aluminium industry, I had these perfect tools at my disposal. That’s how it all started. I must admit that I have always done everything by myself, because at the time everything was possible: I was alone in a factory of thousands of square metres, I was alone and I was happy; I liked doing. Today, all of this would be impossible, but back then it was natural to do whatever your brain told you to do.  

HUO: In the book New Trends: Notes and Memories of Kinetic Art by a Witness and Protagonist, you write that the artist “is not the cult of personality, protagonism, commercialization, private galleries, elite art, fetishism, the unique work, the social purpose, the interpretation, the metaphor, the mystification, the strategy […]”. In another text, I found you saying that “to be called an artist is an offense; one could always speak of artifice, of something new, but I think it is more correct to speak of a plastic creator, a designer, a student of perceptual problems; an artist is synonymous with mystifier”. I would like you to tell me about your “expanded notion of the arts”… 

GA: Since I’m a physicist, I don’t like telling stories. [I don’t like] the word “creator” … lies are “created”; they are very easy to create. To be able to say things, they ought to be verifiable, tangible. If someone tells me “on your surface the light behaves like this”, you can go and see it, and you have the opportunity to see that it is true that it behaves like this. That’s not like someone who throws a stain on the ground, and then that becomes, say, “the intolerability of social life”. They say imagined things! 

Therefore, I love things, and I care that they have the dignity to exist; as for me, I have nothing to do with it; they must have the dignity of existing. Nobody knows who invented reinforced concrete, paper, the first bricks; nobody knows anything, but these objects exist and have been made. Everything has been done, things remain and, fortunately, people leave.  

One of my favourite things is to exhibit colleagues who are better than me; partly out of gratitude, because in this way I make them continue to live, and partly because in this way they have no other influences. For example, when I started collaborating with the museum in Bratislava – an exhibition relationship that lasted about ten years – I exhibited only artists who are gone: Sonia Delaunay, Josef Albers, Lucio Fontana, Bruno Munari, Olle Baertling, Max Bill, all of whom represented something fundamental in the art world through art, and not through words or stories. The stories may be right, but they weaken the function of the eye: we receive 90% of our information through the eye; if I had to put into words what I see in the blink of an eye, I would spend years saying nothing, telling unlikely stories. On the contrary, in a split second, I see everything, and everything is verifiable. One of my passions is synthesis, so it is obvious that I love the eyes. For me the eyes are everything. 

Figure 3 – disk turned steel. 1965

HUO: This is beautiful and could already be a conclusion, but I still have some urgent questions. In fact, when you talk about the synthesis of art, you make me think of Max Bill… 

GA: Max Bill has been a lot – everything – to me. We often saw each other in Zurich or Zumikon or in other parts of the world. We didn’t talk [much]; we communicated in synthetic words. But when we did talk, the topics were quite another thing [compared to art]. We telephoned on Sundays. I always knew, ten minutes before our call, that I was dumber than I would be afterwards – with regard to everything we talked about: his turtles, the roads, the travels, everything. Because whatever Bill told me opened my brain, like Vicks VapoRub. He was my base; his was a total critical force, first of all towards himself: [he believed that] something that was not true had no right to exist.  

HUO: And like Max Bill, who was an artist, architect, and educator with the Ulm school, you too have continued to be a designer, architect… 

GA: Yes, but never as a profession. I have done sets, some residences, a boat, I have dealt with urban planning; but I am not a craftsman, much less able to reap any benefits that were not mental. 

HUO: You have also done graphic design, for example creating [work for] Flash Art. 

GA: [Giancarlo] Politi came to me and showed me a copy of Flash Art, which at the time was innovative, because then there was only Selearte, a magazine that devoted very little space to modern art, just a few notes. Giancarlo, on the other hand, had made this magazine, which in its first issue had the title in a “football pools” [font]; so, from the second issue, I redesigned the logo for him, all in lowercase Helvetica. Throughout my life, I have made many posters, layouts, catalogues – everything that had to do with graphics.  

HUO: You started making more “immersive” installations, such as those with mirrors, and many environments, so… in a certain sense architecture and installation are synthesised in your work.  

GA: Yes. For example, in this environment [he points to a photo from the book], you literally enter the middle of the colours, but in reality they are not there, the only colours are the fixed ones of the walls. By touching the metal plates that reflect the colours, yellow becomes black, red becomes yellow and everything is mixed and the resulting images are unrepeatable. There are no engines, because I’ve never loved engines. Instead, I love that the brain sets itself in motion. 

HUO: There is also the “tunnel”, which is very nice; can you tell me about this work? 

GA: Do you know, I saw this work for the first time a couple of years ago, even though it was made about twenty years ago. I went to the place with Mario Pieroni and Giacinto Di Pietrantonio and they told me that they had a series of abandoned spaces. They asked me what I would do with them, and I replied that I would make lines. I made a drawing. They then had a guy make it, who was pretty good at it.  

HUO: You told me before the conference that it’s also important to have fun, and today many artists work on games. You invented a game, in 1964, using aluminium plates, didn’t you? 

GA: It’s a very simple thing. There are two aluminium plates that rest on a surface and then there are two discs which, by reflecting, multiply. Unpredictable images can be generated, but only with the hands. And we are always surprised by what we ourselves do.  

HUO: In my interviews, I often ask what the unrealized project is. There are many categories of unrealized projects, those that are too big, utopian, censored, too expensive… which one is yours? 

GA: I must admit that my restlessness is always animated by what surrounds me. I have never had a studio, much less an assistant, as Karl Gerstner or Enzo Mari or Victor Vasarely or Julio Le Parc or François Morellet may have had… although very good, they have all had, and still have, real businesses, but I did everything by myself – and above all, I did it… for years, and [I don’t do it] anymore because I no longer find pleasure in doing it. 

In 1970, I composed the Manifesto on the “Pneumatic” Space. You will understand that it is absurd that a bus always measures 100 to 200 cubic metres, both when it is full of people and when it is empty, or that a car occupies 5 square metres both when it is stopped and when it is in movement. Absurd! It is a hallucinatory thing. Although I love the cars on the highways, seeing the city submerged by what I call obscene, ugly, frightening “bagnarole [bathtubs] of tin and stucco” is terrible. Cars must be in motion, because otherwise they wouldn’t be called cars; they’d be called something else. My concern, therefore, lies in trying to minimise the obstruction and presence of cars when they are not working: this is the Pneumatic Space. I dream that spaces could be pneumatic, transformable, transportable from one place to another. It was the first impression I had from Konrad Wachsmann, whom I frequented in Genoa when he had to design the port (a project that was then given to another person in his stead). Wachsmann had an idea to make the port of Genoa expandable and shrinkable: are the boats coming? It expands. Is it empty? It shrinks. Is there no longer any need for the port? I undo it and take it elsewhere. The pneumatic world, for Wachsmann, is still to come, and I took this position a little from him. I haven’t invented anything; I use things that were already there, and I always give credit to the people before me: Bill, Albers, Wachsmann, Gropius, everyone who came before me. … In this way, it is a continuation, because no [new] thing is born without another [that goes before].  

So my future is the Pneumatic Space, but to achieve it you need a common will; that is, everyone must be interested. I can make drawings; I have reduced very small spaces to a minimum – you can live in 9 square metres; I have designed a living room for two people which contains everything you need and which is transformable. I like this. In the 60s, I made tables that transform; today we know we can remove gravity, so we won’t even need the table anymore. Back then, the table was the solution; today it is no longer needed.  

HUO: Last question. Rainer Maria Rilke wrote that beautiful text in which he gave advice to a young poet. Today there are many young artists here with us. I am very curious to know what your advice is to a young artist in 2015.  

GA: Knowing everything that has been done. Develop intelligence, and try to do something that has the dignity of existing, or that is itself useful.  

It [the work] is the centre; you have to think about what it does: it has its dignity only if it is not a copy, only if you have made sure that it is absolutely new – not just for a small circle of people who may not know what is around and are amazed. Today there is a great, terrible crisis: ignorance. And here we are in the homeland of this ignorance … we buy obscene, false, ugly, stupid things. But in the end, even if this bothered me a few years ago, now it leaves me calm, because it means that the ignorance of those people receives what it deserves – and here I am thinking precisely of “art”, the art I would never have wanted to know existed. 

algorithmic form, 2021
Introduction to Issue 02: Algorithmic Form
Algorithmic Form, Architecture, Architecture Theory, curatorial note, Philosophy
alessandro bava

thealessandrobava@gmail.com
Add to Issue
Read Article: 641 Words

I was asked by Mollie Claypool to curate the second issue of Prospectives Journal as an ideal follow-up to leading Research Cluster 0 at B-Pro in the academic year 2020/21. As such, this issue is a collection of positions that respond to my research interests during that year. 

In fact, my initial objective with RC0 was to research ways of applying computational tools to housing design for high-rise typologies: the aim was to update modernist housing standardisation derived from well-established rationalist design methodologies based on statistical reduction (such as in the work of Alexander Klein and Ernst Neufert), with the computational tools available to us now.

While the outcomes of this research were indeed interesting, I was left with a sense of dissatisfaction, because it was very difficult to achieve architectural quality using purely computational tools – in a sense, I felt that this attempt at upgrading modernist standardisation via computation didn’t guarantee better-quality results per se, beyond merely complexifying housing typology and offering a wider variety of spatial configurations. 

In an essay I published in 2019 (which in many ways inspired the curation of this Journal), I declared my interest to be in the use of computational tools not for the sake of complexity – formal or programmatic – but for increasing architectural quality, while decrying that the positions expressed by the so-called first and second digital revolutions, at the level of aesthetics at least, seemed too invested in their own self-proclaimed novelty. My interest was in rooting them in a historical continuum of established architectural methodologies; in seeing computational design as an evolution of rationalism. 

This is why I wanted this journal to be about architectural form, and not about technical aspects of computational design: there is an urgent need to discuss design traditions connected to computational design, as an inquiry on “best practices” – that is, historical cases of what an algorithmic form has been and can be. 

Any discussion on architecture implies a twin focus, on the one hand, on the technical aspects of construction and the tools of design, and on the other, on how these are interpreted and sublimated by the artistic sensibility of an author. Ultimately, what’s interesting about architecture as the discipline of constructing the human habitat is how it is capable of producing a beautiful outcome; and in architecture, perhaps more than any other practice, the definition of beauty is collective. To be able to establish what’s beautiful, we need to develop common hermeneutic tools, which – much like in art – must be rooted in history. 

In light of this, I’m delighted with the contributions to this Journal, which offer a concise array of historical and contemporary positions that can help construct such tools. Many of the essays presented here offer a much needed insight into overlooked pioneers of algorithmic form, while others help us root contemporary positions in an historical framework – thus doing that work necessary for any serious discipline, technical or artistic, of weaving the present with the past.

My hope is that those individuals or academic institutions who are interested in how we can use emerging computational tools for architecture can re-centre their work not just on tooling and technical research but on architectural form, as the result of good old composition and proportion. The time is ripe, in my view, for bridging the gap between computational fundamentalists who believe in the primacy of code, and those with more conservative positions who foreground good form as the result of the intuition and inclination of a human author, remembering that an architectural form is only interesting if it advances the quality of life of its inhabitants and continues to evolve our collective definitions of beauty.  

algorithmic form, 2021
Collage of Isa Genzken’s work
The Algorithmic Form in Isa Genzken
Algorithmic Form, assemblage, attention economy, Collage, data architecture, hooks, Isa Genzken, montage, Social Architecture, social object, social science, surrealism
Provides Ng

provides.ng.19@ucl.ac.uk
Add to Issue
Read Article: 4219 Words

What’s the Hook? Social Architecture? 

Isa Genzken’s work can be seen as a synthesis of the “social” and the “object” – a visual-sculptural art that reflects on the relationship between social happenings and the scale of architectural space. She was also one of the early explorers of the use of computation in art, collaborating with scientists on the generation of algorithmic forms in the 70s. But what is the social object? What can it mean for architecture? Just as Alessandro Bava, in his “Computational Tendencies”,[1] challenged the field to look at the rhythm of architecture and the sensibility of computation, Roberto Bottazzi’s “Digital Architecture Beyond Computers”[2] gave us a signpost: the urgency is no longer about how architectural space can be digitised, but about the ways in which digital space can be architecturised. Perhaps this is a good moment for us to learn from art: how it engages with the many manifestations of science while maintaining its disciplinary structural integrity. 

Within the discipline of architecture, there is an increasing amount of research that emphasises social parameters, from the use of big data in algorithmic social sciences to agent-based parametric semiology in form-finding.[3][4] The ever-mounting proposals that promise to apply neural networks and other algorithms to [insert promising architectural / urban problem here] are evidence of a pressure for social change, but also of the urge to make full use of the readily available technologies at hand. An algorithm is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer”.[5] It is a finite, well-defined sequence whose performance is measured against the length of its code: how much can be described, how well, and how briefly. Gregory Chaitin’s 1975 formulation of Algorithmic Information Theory (AIT) revealed that the algorithmic form is no longer what can be visualised on the front-end, but “the relationship between computation and information of computably generated objects, such as strings or any other data structure”.[6] In this respect, what stands at the convergence of computable form and the science of space is the algorithmic social object. 
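
As a rough illustration of this idea – not Chaitin’s formalism itself – a general-purpose compressor can stand in as a crude upper bound on the algorithmic information of a string: a regular object admits a short description, a random one does not. A minimal Python sketch, using zlib as the stand-in compressor:

```python
import random
import string
import zlib

def description_length(s: str) -> int:
    """Compressed size in bytes: a crude upper bound on the
    algorithmic information content of the string."""
    return len(zlib.compress(s.encode("utf-8")))

ordered = "ab" * 5000  # highly regular: "repeat 'ab' 5000 times" is a short program
random.seed(0)
disordered = "".join(random.choice(string.ascii_lowercase) for _ in range(10000))

print(description_length(ordered))     # small: the regularity is captured
print(description_length(disordered))  # near the raw length: no shorter description found
```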

Figure 1 – Algorithmic Social Science Research Unit (ASSRU) and Parametric Semiology – The Design of Information Rich Environments. Image source: ASSRU, Patrik Schumacher.  

Social science is the broad umbrella that encompasses disciplines from history and economics to politics and geography; within it, sociology is the subset that studies the science of society.[7] The word “sociology” is a hybrid, coined by the French philosopher Isidore Auguste Comte in 1830 “from Latin socius ‘associate’ + Greek-derived suffix –logie”; more specifically, “social” as an adjective dates from the 1400s, meaning “devoted to or relating to home life”, and from the 1560s as “living with others”.[8] The term’s domestic connotation soon accelerated from the realm of the private to the public: the “Social Contract” from translations of Rousseau in 1762; “Social Darwinism” and “Social Engineering”, introduced by Fisher and Marken in 1877 and 1894; “Social Network” and “Social Media” by the late 20th century, from Ted Nelson. Blooming at a high time of the Enlightenment and the rise of the positivist worldview, sociology naturally claimed itself to be a science, of scientific methods and empirical investigations. The connotation of –logie has been brilliantly attested by Jonathan Culler:[9] 

“Traditionally, Western philosophy has distinguished ‘reality’ from ‘appearance’, things themselves from representations of them, and thought from signs that express it. Signs or representations, in this view, are but a way to get at reality, truth, or ideas, and they should be as transparent as possible; they should not get in the way, should not affect or infect the thought or truth they represent.”

To claim a social study as a science puts forward the question of the relationship between the language used to empirically describe and analyse the subject and the subject matter itself. If the subject is to be objectively and rationally portrayed, then the language of mathematics would seem perfect for the job. If we are able to describe the interaction between two or more people using mathematics as a language, then we may begin to write down a partial differential equation and map its variables.[10] Algorithms inductively trained on evidence-based data not only seem to capture the present state of such interactions, but seem also able to give critical information about the future evolution of the system. This raises the question of computability: what is the limit to social computation? If there is none, then we might as well be a simulation ourselves; so the logic goes that there must be one. To leave an algorithm running without questioning the limits of social computation is like having Borel’s monkey hitting keys at random on a typewriter, or applying [insert promising algorithm here] arbitrarily to [insert ear-catching grand challenge here].
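
The monkey’s arithmetic makes the point concrete. On a 26-key typewriter, a given block of n random keystrokes spells a chosen phrase with probability 26 to the power of minus n, so the expected number of independent attempts grows exponentially with the length of what is being described. A back-of-envelope sketch (the sample phrases are invented for illustration):

# Expected attempts for Borel's monkey to type a phrase in one n-letter block.
for phrase in ["ear", "hook", "empire state"]:
    n = len(phrase.replace(" ", ""))       # count letters only
    # Each attempt is an independent block of n uniformly random keystrokes.
    print(f"{phrase!r}: ~26^{n} = {26 ** n:.2e} expected attempts")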

Figure 2 – Borel’s infinite monkey theorem, 1913. Image source: Wikipedia. 

What’s the hook? 

A hook “is a musical idea, often a short riff, passage, or phrase, that is used in popular music to make a song appealing and to catch the ear of the listener”.[11] It became a monumental device of Web 2.0, which treats user attention as a scarce resource and a valuable commodity – an attention economy. Music is an artform that takes time to comprehend; as it plays through time, it accrues value in your attention.

Figure 3 – Drum beat to Empire State of Mind, Nick’s Drum Lessons, “‘Empire State of Mind’ Jay Z – Drum Lesson”, October 5, 2014 

This is one of the most famous hooks of the late 2000s – Empire State of Mind came around the same time as the Web 2.0 boom, just after New York had recovered from the dotcom bubble. The song was like an acoustic montage of the “eight million stories, out there in the naked”, revealing an underlying urge for social change that was concealed by the boom; just as we see Jay-Z in Times Square, on stage under the “big lights that inspired” him, rapping: “City is a pity, half of y’all won’t make it”.[12] It was an epoch of R&B, of the rhythms of cities, of the urban sphere, of the high-tech low life. The first 15 seconds of Jay-Z’s beat are already enough to teleport a listener to Manhattan, with every bit of romanticism that comes with it. The Rhythms and the Blues constructed a virtual space of narrative and storytelling; this spatial quality taps into the affective experiences of the listener through the ear, revealing the urban condition through its lyrical expression. It is no accident that the 2000s were also the time when the artist and sculptor Isa Genzken began exploring the potential of audio in its visual-sculptural embodiment.

“The ear is uncanny. Uncanny is what it is; double is what it can become; large [or] small is what it can make or let happen (as in laisser-faire, since the ear is the most [tender] and most open organ, the one that, as Freud reminds us, the infant cannot close); large or small as well the manner in which one may offer or lend an ear.” — Jacques Derrida.[13]

Figure 4 – “Ohr”, Isa Genzken, since 2002, Innsbruck, City Hall facade, large-format print on flag fabric, 580 x 390 cm. Photograph: Galerie Buchholz.

An image of a woman’s ear was placed on a facade by Genzken, personifying the building as a listener, hearing what the city has to say. At the same time, “The body is objectified and made into a machine that processes external information”.[14] The ear also symbolises the power of voice that can fill a place with a space: an acoustic space. A place is a location, geographically tagged, affecting our identity and self-association of belonging; a space can be virtual as much as it can be physical. Such a space of social interaction is now being visualised on a facade, and at the same time, it is being fragmented: “To look at a room or a landscape, I must move my eyes around from one part to another. When I hear, however, I gather sound simultaneously from all directions at once: I am at the centre of my auditory world, which envelops me. … You can immerse yourself in hearing, in sound. There is no way to immerse yourself similarly in sight”.[15] This is perhaps a prelude to augmented virtual reality.

Figure 5 – The Surrealist doctrine of dislocation, the romantic encounter of urban objects “as beautiful as the chance meeting of a sewing machine and an umbrella on an operating table” – Lautréamont, Canto VI, Chapter 3. (a) The cover of the first edition of Rem Koolhaas’ book Delirious New York, designed by Madelon Vriesendorp. (b) A photograph of New York by Isa Genzken, New York, N.Y., 1998/2000, Courtesy Galerie Buchholz, Berlin/Cologne. (c) A photograph by Man Ray, 1935 © The Man Ray Trust / ADAGP, Paris and DACS, London.

As much as Genzken is interested in the “exploration of contradictions of urban life and its inherent potential for social change”, Rem Koolhaas shared a similar interest in his belief that it is not possible to live in this age without a sense of many contradictory voices.[16] [17] What the two have in common are their continental European roots and a love for the Big Apple – Genzken titled her 1996 collage book I Love New York, Crazy City, and with it paid homage to her beloved city. Delirious New York was written at a time when New York was on the verge of bankruptcy, yet Koolhaas saw it as the Rosetta Stone, and analysed the city as if there had been a plan, with everything starting from a grid. It was Koolhaas’ conviction that the rigour of the grid enabled imagination, despite its authoritative nature: unlike Europe, with its many manifestos and no manifestation, New York was a city with a lot of manifestation without a manifesto.

Koolhaas’ book was written with a sense of “critical paranoia” – a surrealist approach that blends together pre-existing conditions and illusions to map the many blocks of Manhattan into a literary montage. The cover of the first edition of the book, designed by Madelon Vriesendorp, perfectly captures the surrealism of the city’s socio-economy at the time: the Art Deco skyscraper Chrysler Building is in bed with the Empire State. Both structures were vying for distinction in the “Race into the Sky” of the 1920s, fuelled by American optimism, a building boom, and speculative financing.[18] Just as the French writer Lautréamont wrote, “Beautiful as the accidental encounter, on a dissecting table, of a sewing machine and an umbrella”, surrealism is a paradigmatic shift towards “a new type of surprising imagery replete with disguised sexual symbolism”.[19] The architectural surrealism manifested in this delirious city is the chance encounter of capital, disguised as national symbolism – an architectural hook.

Data Architecture 

Figure 6 – China Central Television Headquarters (CCTV) and Genzken’s Gate for Amsterdam (Tor für Amsterdam), Außenprojekte, Galerie Buchholz, 1988.

Genzken’s sense of scale echoes Koolhaas’ piece on “Bigness” of 1995. Her proposal for the Amsterdam City Gate frames and celebrates empty space, and found manifestation in Koolhaas’ enormous China Central Television (CCTV) headquarters in Beijing – a building as a city, an edifice of endless air-conditioning and information circularity wrapped in a structured window skin, hugging itself in the air through the downsampled geometry of a Möbius loop. As Koolhaas pronounced, within a world that tends to the mega, “its subtext is f*** context”. One is strongly reminded of the big data approach to form-finding, and perhaps also of the discrete spatial quality of Cellular Automata (CA), where the resolution of interconnections and information consensus fades into oblivion, turning data processing into an intelligent, ever-mounting aggregation. In the big data–infused era, the scale boundary between architecture and urban design becomes obscured. This echoes our contemporary understanding of complex systems science, in which the building is not an individual object, but part of a complex fabric of socioeconomic exchanges.

Figure 7 – The Bartlett Prospective (B-pro) Show, 2017. 

As Carpo captured in his Second Digital Turn, we are no longer living in Shannon’s age, where compression and bandwidth were of the highest value: “As data storage, computational processing power, and retrieval costs diminish, many traditional technologies of data-compression are becoming obsolete … blunt information retrieval is increasingly, albeit often subliminally, replacing causality-driven, teleological historiography, and demoting all modern and traditional tools of story-building and story-telling. This major anthropological upheaval challenges our ancestral dependence on shared master-narratives of our cultures and histories”.[20] Although compression remains a core skill in machine learning – from autoencoders to convolutional neural networks – trends in edge AI and federated learning are displacing the value of bandwidth with promises of data privacy: we no longer surrender data to a central cloud; instead, everything is kept on our local devices, with only the learnt models synchronising.
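
A minimal sketch of that arrangement – federated averaging (the FedAvg scheme of McMahan et al.), with toy linear models and invented “devices” standing in for a real training stack:

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.5, -0.5])

# Three devices, each holding private data that never leaves the device.
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

def local_step(w, X, y, lr=0.1):
    # One gradient step of least-squares regression on local data only.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w = np.zeros(2)
for _ in range(100):
    # Devices train in parallel; the server averages weights, never sees data.
    w = np.mean([local_step(w, X, y) for X, y in devices], axis=0)

print(w)  # converges towards true_w; only learnt models were synchronised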

This displacement of belief, from centralised provision to distributed ownership, is reminiscent of the big data-driven objectivist approach to spatial design, which gradually displaces our faith in anything non-discursive, such as norms, cultures, and even religion. John Lagerwey defines religion in its broadest sense as the structuring of values.[21] What values are we circulating in a socio-economy of search engines and pay-per-clicks? Within trends of data distribution, are all modes of centrally provisioned regulation and incentivisation an invasion of privacy? Genzken’s work on urbanity is like a mirror held up high for us to reflect on our urban beliefs.

Figure 8 – Untitled, Isa Genzken, 2018, MDF, brass fixings, paper, textiles, leather, mirror foil, tape, acrylic paint, mannequin, 319.5 x 92.5 x 114 cm. David Zwirner, Hong Kong, 2021.

Genzken began architecturing a series of “columns” around the same time as her publication of I Love New York, Crazy City. Evocative of skyscrapers and skylines that are out of scale, she named each column after one of her friends, and decorated them with individual designs, sometimes of newspapers, artefacts, and ready-made items that reflect the happenings of the time. Walking amongst them reminds the audience of New York’s avenues and their urban strata, but at 1:500. Decorated with DIY store supplies, these uniform yet individuated structures seem to document a history of the future of mass customisation. Mass customisation is the use of “flexible computer-aided manufacturing systems to produce custom output. Such systems combine the low unit costs of mass production processes with the flexibility of individual customization”.[22] As Carpo argued, mass customisation technologies could make economies of scale and their marginal costs irrelevant and, subsequently, the division of labour unnecessary, as the chain of production would be greatly distributed.[23] The potential is to democratise the privilege of customised design; but how can we ensure that such technologies benefit social goals, and do not fall into the same traps as the attention economy and its consumerism?

Refracted and reflected in Genzken’s “Social Facades” – taped with ready-made nationalistic palettes allusive of the semi-transparent curtain walls of corporate skyscrapers – one sees nothing but a distorted image of the mirrored self. As the observer begins to raise their phone to take a picture of Genzken’s work, the self suddenly becomes the anomaly in this warped virtual space of heterotopia.

“Utopia is a place where everything is good; dystopia is a place where everything is bad; heterotopia is where things are different – that is, a collection whose members have few or no intelligible connections with one another.” — Walter Russell Mead [24]

Genzken’s heterotopia delineates how the “other” is differentiated via the images that have been consumed – a post-Fordist subjectivity that fulfils itself through accelerated information consumption.  

Figure 9 – Attention economy and social strata as refracted and reflected in (a) “Soziale Fassade”, Isa Genzken, 2002, Courtesy Galerie Buchholz, Berlin/Cologne, and (b) “I shop therefore I am”, Barbara Kruger, 1987 

The Algorithmic Form 

Genzken’s engagement with and interest in architecture can be traced back to the 1970s, when she was in the middle of her dissertation at the academy.[25] She was interested in ellipsoids and hyperboloids, which she preferred to call “Hyperbolos”.[26] The 70s were a time when a computer was a machine that filled a whole room, and to which a normal person would not have access. Genzken got in touch with a physicist and computer scientist, Ralph Krotz, who, in 1976, helped calculate the ellipse with a computer, and plotted the draft of a drawing with a drum plotter that prints on continuous paper.[27] Artists saw the meaning of such algorithmic form differently from scientists. For Krotz, ellipses are conic sections. Colloquially speaking, an egg comes pretty close to an ellipsoid: it is composed of a hemisphere and half an ellipsoid. If we generalise the concept of the conic section, hyperbolas also belong to it: if one rotates a hyperbola around an axis, a hyperboloid is formed. Here, the algorithmic form was rationalised down to its computational production, irrespective of its semantics – that is, until it was physically produced and touched the ground of the cultural institution of a museum.
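
As an illustration of what such a computation amounts to – not Krotz’s original program, but a modern sketch with invented proportions – the ellipse reduces to a short parametric loop of plotter coordinates, and the hyperboloid of revolution to a one-line radius function:

import math

# Points of an elongated ellipse, of the kind a 1970s drum plotter would
# trace as a polyline on continuous paper.
a, b, steps = 5.0, 0.35, 200
ellipse = [(a * math.cos(2 * math.pi * t / steps),
            b * math.sin(2 * math.pi * t / steps)) for t in range(steps + 1)]
print(ellipse[0], ellipse[50])  # two sample plotter coordinates

# Rotating the hyperbola x^2/a^2 - z^2/c^2 = 1 about the z-axis sweeps a
# hyperboloid of one sheet, with circular cross-section of radius r(z):
def hyperboloid_radius(z, a=1.0, c=2.0):
    return a * math.sqrt(1.0 + (z / c) ** 2)

for z in (-2, -1, 0, 1, 2):
    print(z, round(hyperboloid_radius(z), 3))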

The 10-metre-long ellipse drawing was delivered at full size, in one piece, as a template to a carpenter, who then converted it into his own template for craftsmanship. Thus, 50 years ago, Genzken’s work already explored the two levels of outsourcing that are symbolic of today’s digital architectural production. The output of this exploration is a visual-sculptural object of algorithmic form, at such an elongated scale and extreme proportion that it undermines not only human agency in its conception, but also the sensorial perception of 2D-3D space.[28] When contemplating Genzken’s Hyperbolos, one is often reminded of the radical play with vanishing points in Hans Holbein’s “The Ambassadors”, where the anamorphic skull can only be viewed at an oblique angle – a metaphor for the way one can begin to appreciate the transience of life only with an acute change of perspective.

Figure 10. (a) ‘The Ambassadors’, Hans Holbein, 1533. (b) “Hyperbolos”, Genzken, 1970s. Image source: Andrea Albarelli, Mousse Magazine

When situated in a different context, next to Genzken’s aircraft windows (“Windows”), the Hyperbolos find association with other streamlined objects, like missiles. Perhaps the question of life and death, paralleling scientific advancement, is a latent meaning and surrealist touch within Genzken’s work, revealing how the invention of an apparatus is, at the same time, the invention of its accidents. As the French cultural theorist and urbanist Paul Virilio put it: the invention of the car is simultaneously the invention of the car crash.[29] We may be able to compute the car as a streamlined object, but we are not even close to being able to compute the car as a socio-cultural technology.

Figure 11 – Genzken holding her “Hyperbolos” in 1982, and “Windows”. Dominic Eichler, “This Is Hardcore”, Frieze, 2014.

Social Architecture? 

Perhaps the problem is not so much whether the “social” is computable, but rather that we are trying to objectively rationalise something that is intrinsically social. This is not to say that scientific approaches to social architecture are in vain; rather the opposite: science and its language should act as socioeconomic drivers of change in architectural production. What is architecture? It can be described as what stands at the intersection of art and science – the art of the chief, ‘arkhi-’, and the science of craft, ‘tekton’ – but the chance encounter of the two gives birth to more than their bare sum. If architecture is neither art nor science but an emergent faculty of its own, it should be able to argue for itself academically as a discipline, with a language crafted as its own, and to debate itself on its own ground – beyond the commercial realm that touches base with ground constraints and the reality of physical manifestation, and also in its unique way of researching and speculating: not all “heads in the clouds”, but in fact revealing pre-existing socioeconomic conditions.

It is only through understanding ourselves as a discipline that we can begin to really grasp ways of contributing to social change, beyond endlessly feeding machines with data and hoping they will either validate or invalidate our ready-made and ear-catching hypotheses. As Carpo beautifully put it:

“Reasoning works just fine in plenty of cases. Computational simulation and optimization (today often enacted via even more sophisticated devices, like cellular automata or agent-based systems) are powerful, effective, and perfectly functional tools. Predicated as they are on the inner workings and logic of today’s computation, which they exploit in full, they allow us to expand the ambit of the physical stuff we make in many new and exciting ways. But while computers do not need theories, we do. We should not try to imitate the iterative methods of the computational tools we use because we can never hope to replicate their speed. Hence the strategy I advocated in this book: each to its trade; let’s keep for us what we do best.”[30]

References

1 A. Bava, “Computational Tendencies”, e-flux Architecture, January 2020. https://www.e-flux.com/architecture/intelligence/310405/computational-tendencies/.

2 R. Bottazzi, Digital Architecture Beyond Computers: Fragments of a Cultural History of Computational Design (London: Bloomsbury Visual Arts, 2020).

3 ASSRU, Algorithmic Social Sciences, http://www.assru.org/index.html. (Accessed December 18, 2021)

4 P. Schumacher, Design of Information Rich Environments, 2012.
https://www.patrikschumacher.com/Texts/Design%20of%20Information%20Rich%20Environments.html.

5 Oxford, “The Home of Language Data”, Oxford Languages, https://languages.oup.com/ (Accessed December 18, 2021).

6 Google, “Algorithmic Information Theory – Google Arts & Culture”, Google,
https://artsandculture.google.com/entity/algorithmic-information-theory/m085cq_?hl=en. (Accessed December 18, 2021).

7 Britannica, “Sociology”, Encyclopædia Britannica, inc. https://www.britannica.com/topic/sociology. (Accessed December 18, 2021).

8 Etymonline, “Etymonline – Online Etymology Dictionary”, Etymology dictionary: Definition, meaning and word origins, https://www.etymonline.com/, (Accessed December 18, 2021).

9 J. Culler, Literary Theory: A Very Short Introduction, (Oxford: Oxford University Press, 1997).

10 K. Friston, “The free-energy principle: a unified brain theory?”, Nature Reviews Neuroscience, 11 (2), 127–138 (2010).

11 J. Covach, “Form in Rock Music: A Primer” (2005), in D. Stein (ed.), Engaging Music: Essays in Music Analysis. (New York: Oxford University Press), 71.

12 Jay-Z, Empire State of Mind (2009), Roc Nation, Atlantic.

13 J. Derrida, The Ear of the Other: Otobiography, Transference, Translation; Texts and Discussions with Jacques Derrida (Lincoln, Neb.: University of Nebraska Press, 1985).

14 Kunsthalle Wien, “Kunsthalle Wien #FemaleFool Booklet I’m Isa Genzken the …” (2014), https://kunsthallewien.at/101/wp-content/uploads/2020/01/booklet_i-m-isa-genzken-the-only-female-fool.pdf?x90478.

15 W. Ong, Orality and Literacy: The Technologizing of the Word (London: Methuen, 1982).

16 R. Koolhaas, New York délire: Un Manifeste rétroactif Pour Manhattan (Paris: Chêne, 1978).

17 Kunsthalle Wien, “Kunsthalle Wien #FemaleFool Booklet I’m Isa Genzken the …” (2014), https://kunsthallewien.at/101/wp-content/uploads/2020/01/booklet_i-m-isa-genzken-the-only-female-fool.pdf?x90478.

18 J. Rasenberger, High Steel: The Daring Men Who Built the World’s Greatest Skyline, 1881 to the Present (HarperCollins, 2009).

19 Tate, “‘L’Enigme D’Isidore Ducasse’, Man Ray, 1920, Remade 1972”, Tate, https://www.tate.org.uk/art/artworks/man-ray-lenigme-disidore-ducasse-t07957 (Accessed December 18, 2021).

20 M. Carpo, “Big Data and the End of History”, International Journal for Digital Art History, 3: Digital Space and Architecture, 21 (2018).

21 J. Lagerwey, Paradigm Shifts in Early and Modern Chinese Religion: A History (Boston, Leiden: Brill, 2018).

22 Google, “Mass Customization – Google Arts & Culture”, Google, https://artsandculture.google.com/entity/mass-customization/m01k6c4?hl=en (Accessed December 18, 2021).

23 M. Carpo, The Second Digital Turn: Design beyond Intelligence (Cambridge: MIT, 2017).

24 W.R. Mead, “Trains, Planes, and Automobiles: The End of the Postmodern Moment”, World Policy Journal, 12 (4) (Winter 1995–1996), 13–31.

25 U. Loock, “Ellipsoide und Hyperboloide”, in Isa Genzken. Sesam, öffne dich!, exhibition cat. (Whitechapel Gallery, London, and Museum Ludwig, Cologne: Kasper, 2009).

26 S. Baier, “Out of sight”, in Isa Genzken – Works from 1973–1983, Kunstmuseum.

27 R. Krotz, H. G. Bock, “Isa Genzken”, in exhibition cat. Documenta 7, Kassel, 1982, vol. 1, 330–331; vol. 2, 128–129.

28 A. Farquharson, “What Architecture Isn’t”, in A. Farquharson, D. Diederichsen and S. Breitwieser, Isa Genzken (London, 2006), 33.

29 P. Virilio, Speed and Politics: An Essay on Dromology (New York: Columbia University, 1986).

30 M. Carpo, The Second Digital Turn: Design beyond Intelligence (Cambridge: MIT, 2017).

Sebastiano Serlio, Livre Extraordinaire de Architecture […] (Lyon: Jean de Tournes, 1551), plate 18, detail
Citations, Method, and the Archaeology of Collage *
algorithm, alphabet, architectural language, Citations, Collage, Method, pomo, post modern, Renaissance, shape Grammar
Mario Carpo

m.carpo@ucl.ac.uk
Add to Issue
Read Article: 3651 Words

But let us not have recourse to books for principles which may be found within ourselves. What have we to do with the idle disputes of philosophers concerning virtue and happiness? Let us rather employ that time in being virtuous and happy which others waste in fruitless enquiries after the means: let us rather imitate great examples, than busy ourselves with systems and opinions.  … For this reason, my lovely scholar, changing my precepts into examples, I shall give you no other definitions of virtue than the pictures of virtuous men; nor other rules for writing well, than books which are well written.  

Jean-Jacques Rousseau, Julie ou la Nouvelle Héloïse, Letter XII (William Kenrick transl., 1784)  

Children learn to speak their mother tongues through practice and observation. They don’t need grammar rules. Grammar comes later, when it is taught at school. This shows that we may know a language without knowing its grammar. Grammar is an artificial shortcut to fluency, replacing the lengthy process of learning from life. For a fifteen-year-old high school student struggling to learn German, grammar is indispensable. Yet plenty of native German speakers don’t know declensions by heart and still manage to get their word endings right – in speech as much as in writing.

At a higher level of linguistic practice, literary composition too used to have its own rules – rules that were taught at school. Until the end of the nineteenth century rhetoric was a compulsory subject in most European secondary schools. Rhetoric is the science of discourse. It teaches how to find the arguments of speech, how to arrange them in an orderly manner, and how to dress them with words. Rhetoric teaches how to be clear and persuasive. Seen in this light, rhetoric would seem to be a necessary discipline – indispensable, even. Instead, it no longer features in school and university curricula. France stopped teaching rhetoric in 1885, when French lycées replaced it with the history of classic and modern literature. Nineteenth-century educators seemed to have concluded that, when learning to write, we are better off in the company of literary masterpieces, rather than engaged in the normative study of classical (or modern) rhetoric. A century after Rousseau, Julie-Héloïse’s pedagogical programme quoted above became law.

In times gone by, students would have learnt the art of discourse by systematically studying grammar and rhetoric – page after page of rules to be learnt by heart. Today, high school students in all European countries are instead obliged to read the masterpieces of their respective national literatures, often ad nauseam. This evidently follows from the assumption that, by reading and re-reading these exemplary works, students will (at some point) learn to write as beautifully as these canonical authors once did. Never mind that nobody knows precisely how and when that almost magic transference, assimilation, and transmutation of talent might occur: grammar has almost completely disappeared from primary school teaching, and rhetoric barely features in higher education – now an intellectual fossil of sorts. Meanwhile, the old art of discourse tacitly lingers on, in business schools, in creative writing and marketing classes. Especially in the latter, the ancient forensic discipline returns to one of its ancestral functions: that of persuading, even when in the wrong.

For the Humanists of the Quattrocento, the first language to learn was Latin. Not Medieval Latin of course – a corrupt and barbaric but still living language. Renaissance Humanists wanted to speak in the tongue of classical antiquity; they wanted to learn Cicero’s Latin. But Cicero’s Latin is, by definition, a dead language: quite literally so, since it died with Cicero. Cicero also wrote manuals on the art of rhetoric, but the Humanists believed that the best way to learn to write like Cicero was by imitating his way of writing. Well before the Romantics and the Moderns, they found learning from rules unappealing. They preferred to copy the style of Cicero from examples of his work.

The Humanists’ veneration of examples was not limited to languages. Their exemplarism was an épistémè – an intellectual, cultural and social paradigm, deeply inscribed within the spirit of their time. That was their rebellion against the world they grew up in. For centuries the Scholastic tradition had privileged formalism, deductive reasoning, and syllogistic demonstration. The Humanists rejected this “barbarous”, “Gothic” tradition of logic, in favour of their new way of “learning from examples”. The dry and abstract rules of medieval Scholasticism were difficult to handle. Examples, on the other hand, were concrete and tangible. Imitating an example was easier, more pleasurable, and allowed more room for creativity than merely applying rules. This is how, at the dawn of modernity, antiquity was turned from a rule book into an art gallery.

*** *** ***

Like the arts of discourse, the arts of building require schooling. At the height of the Middle Ages, when both Gothic architecture and Scholasticism were at their peak, architectural lore was the preserve of guilds, and its mostly oral transmission was regulated by secretive initiation practices. By contrast, the Humanists pursued a more open strategy – reviving the ancient custom of writing books on building. The first modern treatise, Alberti’s De Re Aedificatoria, deals with the architecture of antiquity, but the structure of Alberti’s discourse was still medieval and Scholastic. Alberti advocates classical architecture as a paragon for all modern building, but Alberti’s antiquity was an abstract model, devoid of any material, visible incarnation. Rather than an atlas of classical buildings, Alberti’s book offers a set of classical design rules – rules for building in the classical way. To put it in more contemporary terms, Alberti formalized classical architecture. Alberti’s rules replace the need to see – let alone imitate – the monuments of classical antiquity. To avoid all misunderstanding, Alberti’s book did not describe any actual ancient monument, either in writing or visually: Alberti’s De Re Aedificatoria originally did not include any illustrations, and Alberti explained that he wanted it that way.

As a commercial venture, Alberti’s De Re Aedificatoria was not a success. Renaissance architects found it easier to skip Alberti’s writings altogether, and go see, touch and learn from the extant magnificence of Roman ruins in person. Moreover, and crucially, as of the early sixteenth century drawings of ancient monuments started to be sold and circulated throughout Europe. Survey drawings in particular, for the first time made available through print, made the laborious ekphrastic and normative mediation of Alberti’s writings all but unnecessary. But models, if beautiful to behold, are not always easy to imitate. Copies will inevitably be more or less successful, depending on the individual talent of each practitioner. By the second or third decade of the sixteenth century imitation itself had become a pedagogic and didactic conundrum.

Not just architectural imitation: writers had the same problem. After all, imitating Cicero is easier said than done. Many rhetoricians in the sixteenth century strove to transform the practice, skills, and tacit knowledge of literary imitation into a rational, transmissible technique. The modern notion of “method” was born out of sixteenth century rhetoric, but sixteenth century authors were not trying to develop a (scientific) method for making new discoveries; they were trying to develop a (pedagogic) method to better organise and teach what they already knew. Their post-Scholastic, pre-scientific method was essentially a diairetic method – a method of division: all knowledge, they argued, can be partitioned into smaller and smaller units, easier to learn, remember and work with. For sixteenth century scholars, “method” still meant “short cut” – a short cut to knowledge.

Discourse itself can be divided into modular parts: prefaces, arguments, conclusions, formulas and figures, idioms or turns of phrase, sentences, syntagms, words and letters. Sixteenth-century rhetoricians used this divisive technique to invent a new method for literary imitation. On the face of it, Cicero’s style may appear as an ineffable quintessence, but at the end of the day all writing is text, and every text can be broken down into a linear sequence of alphabetical units. Of course, breaking up a text is not a straightforward operation: the parts of speech are held together by syntactic, semantic, and functional relationships. Some of these links can be uncoupled. Others can’t. A text is a heteroclitic, variable-cohesion aggregate of parts. Its segments differ in both extension and complexity. Yet even the most sophisticated literary monument can be subdivided into fragments; and once a fragment has been set apart from its compositional context, it can also be reused, reassembled, or recomposed into another text.

In reducing the art of discourse to a citationist technique – by turning ancient texts into a repository of infinitely repeatable citations – sixteenth century rhetoricians invented a new rhetoric. Ancient and modern texts came to be seen as mechanical assemblages of parts. Ancient works could be decomposed into segments, and these segments could then be reassembled to form new works. The smaller the segments, the more fluid or freer the outcome. Ciceronian Latin was an extraordinarily sophisticated and effective instrument of communication, but some modern ideas fundamentally differed from those of Cicero. The citationist method of imitation allowed Renaissance authors to use an old language to express new ideas.

Renaissance architects also needed a rational method for producing modern buildings while imitating classical examples. The greatest structures of antiquity – temples, amphitheatres, thermal baths – were of no use to modernity. Temples, in particular, while representing the pinnacle of classical architecture, had been built to house rituals and represent heathen gods whose worship had long ceased. The entire language of classical architecture had to be adapted for typologies and functions that had no precedents in antiquity. The image of antiquity itself as a building that can be endlessly dismantled and reassembled was a commonplace in the Renaissance. It was also a common practice on many building sites. Architect Sebastiano Serlio would turn this practice into a design theory.

That was no accident. Giulio Camillo, one of the main theorists of the sixteenth century citationist method, had an interest in architecture. He was also a friend of Serlio. The two were supported by the same patrons, and moved in the same circles of Evangelical (and perhaps Nicodemite) inclination. The method of Giulio Camillo’s Neoplatonist rhetoric is well known:

1. Appropriate ancient examples (literary or otherwise) must be selected. The criteria for this selection were a much-disputed matter at the time, and one on which Camillo himself did not dwell.

2. The resulting corpus of integral textual sources must be segmented or divided into parts according to functional or syntactical criteria.

3. This catalogue of dissolved fragments must be sorted, so new users know where to look for the fragments they need.

4. A modern writer (a composer, but also in a sense a compositor: an ideal type-setter) will pick, reassemble and merge, somehow, any number of chosen textual fragments.

Thus new ideas could be expressed through ancient words and phrases – fragments severed from their original context, yet validated by prior use by a recognised “authority”. In Camillo’s view, this compositional technique constituted the inner workings and the secret formula of all processes of imitation. Furthermore, this was a compositional method that could be taught and learnt.
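
Stated so schematically, the method is close enough to an algorithm to be run as one. A deliberately anachronistic sketch of the four steps as code – the corpus, the three-word fragment size, and the chaining rule are all invented for illustration:

import random

corpus = {                                     # 1. selected "authorities"
    "Cicero": "the welfare of the people is the highest law of the land",
    "Seneca": "every new beginning comes from some other beginning and its end",
}

catalogue = {}                                 # 3. the sorted catalogue
for author, text in corpus.items():
    words = text.split()
    for i in range(len(words) - 2):            # 2. segmentation into
        frag = tuple(words[i:i + 3])           #    three-word fragments
        catalogue.setdefault(frag[0], []).append(frag)

random.seed(7)
key, line = "the", []
while key in catalogue and len(line) < 12:     # 4. recomposition: chain
    frag = random.choice(catalogue[key])       #    fragments end-to-start
    line.extend(frag)
    key = line[-1]
print(" ".join(line))  # a "new" text made only of authorised fragments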

One essential tool in implementing this pedagogical programme was Camillo’s notorious Memory Theatre, a walk-in filing cabinet where all the textual sources (and possibly some of the fragments deriving from them) would have been sorted following Camillo’s own classification system. The whole machine, which included an ingenious information retrieval device, would have been in the shape of an ancient theatre – and it appears that Camillo built at least a wooden model or mock-up of it, in the hope (soon dashed) of selling his precociously cybernetic technology to King Francis I of France.

In a long-lost manuscript (found and published only in 1983) Camillo also explains how the same principles can inform a new method for architectural design. In Camillo’s Neoplatonic hierarchy of ideas, the heavenly logos descends down into reality following seven steps or degrees of ideality. Individuals inhabit the seventh (lowest, sublunar) step; their ascent and crossing of the lunar sky occurs by dint of their separation from the accidents of space and time. In the case of architecture, actual buildings as they exist on earth must be separated from their site to become ideas of the lowest (sixth) grade. This separation of the real from its worldly context results in something similar to what we would today call “building types” – which are buildings in full, except they do not inhabit any given place. These abstract types are then further subdivided into columns and orders (of the five kinds then known: Tuscan, Doric, Ionic, Corinthian, and Composite). The five orders are then broken down into regular geometric volumes, then surfaces, all the way to Euclidian points and lines. On each grade or step, a catalogue of ready-made parts would offer any designer all the components needed to assemble a new building. Thus Camillo’s design method doubles as a shortcut to architectural imitation, and as a universal assembly kit.

A more scholarly trained Neoplatonist philosopher (and a few existed in Camillo’s time) would have objected to some of Camillo’s brutal simplifications, and could have pointed out that his theory had severe epistemic flaws. All the same, Camillo’s architectural method (which its first editor, Lina Bolzoni, dated to around 1530) is almost identical to the plan laid out by Serlio in the introduction to the first instalment of his architectural treatise, published in Venice in 1537. Some of Serlio’s seven grades did not correspond to Camillo’s order: most notably, his atlas of archaeological evidence, the base and foundation of Camillo’s Neoplatonic scaffolding, should have been on the lowest step, but was instead printed as Serlio’s Third Book (likely for commercial reasons). Additionally, one of the seven books in Serlio’s original plan, his revolutionary Sixth Book, on Dwellings for all Grades of Men, was written but never published – at least, not until 1966. Serlio also wrote an additional, Extraordinary Book (literally, a book out of the original order) – a cruel, sombre joke disguised as a book, which Serlio bequeathed to posterity shortly before dying, poor and dejected in his self-imposed French exile.

Regardless of some factual discrepancies, Serlio’s compositional method is ostensibly the same as Camillo’s. Architecture’s exemplary models are selected, and then fragmented. These fragments are sorted and classified at different levels or grades of dissolution. Instructions for their reassembly are then provided, together with examples of successful new compositions. The pivot of the whole system was the book on the five architectural orders, which Serlio published first (albeit titled Fourth Book to comply with the general plan): a catalogue of stand-alone constructive parts (columns, capitals, bases, entablatures and mouldings), destined for identical reproduction in print, in scaled drawings, and in buildings of any type. In Serlio’s method, this was the main offspring of architectural “dissolution” (or disassembling), and the basic ingredient of architectural design, i.e. re-composition. Pagan idols had to be broken down; only their fragments could be used, purified ingredients in the building of a new Christian architecture.

Throughout, Serlio was aware of, and attuned to, the purpose and limits of his architectural method. Serlio turned architectural design into an assemblage of ready-made modular components. These were not actual spolia, but compositional design units, part of a universal combinatory grammar and destined for identical replication. Giulio Camillo’s rhetoric reduced the imitation of Cicero’s style, hence all literary composition, to a cut-and-paste method of collage and citation. Serlio’s treatise did the same for architecture. His theory of the orders was the keystone of the entire process. Serlio couldn’t standardise the building site (that would have made no sense in the sixteenth century), but he could standardise architectural drawings and design.

Serlio knew full well that his simplified, almost mechanical approach to design would entail a decline in the general quality of architecture. Many critics across the centuries have indeed frowned at the models and projects shown in his Seven Books. Serlio’s designs have often been seen as repetitive, banal, ungainly or chunky; lacking in inspiration and genius. But Serlio did not write for geniuses. His treatise was a pedagogical work, not an architectural one. As Serlio tirelessly reminds the reader, his method is tailored to “every mediocre”: to the “mediocre architect” – the average, middling designer. Today we might say that Serlio’s treatise aimed at creating an intermediate class of building professionals. Michelangelo and Raphael had no need for “a brief and easy method” that turned architectural invention into cut-and-paste, collage and citation.

Knowledge can be taught, not genius. Serlio’s pedagogical structure and design method were parts of an overarching ideological project. Serlio’s method promises uniform and predictable architectural standards. These are perhaps banal, or monotonous, but that’s the price one pays to make “architecture easy for everyone”. And it is a price Serlio was willing to pay. Serlio’s concern was the average quality of building, not the artistic value of a few outstanding monuments. This was a most unusual choice for an artist of the Italian Renaissance – an iconoclastic, almost revolutionary stance. Serlio’s worldview was not one in which the misery of the many was contrasted by the magnificence of a few. Serlio pursued the uniform, slightly boring repetitiveness of a productive, “mediocre” multitude. This was an ideological project, but also a social project, ripened in the cultural context of the early protestant Reformation. It is a position that evokes and preludes well-known categories of modernity.

Sebastiano Serlio, Livre Extraordinaire de Architecture […] (Lyon: Jean de Tournes, 1551), plate 18.

* Footnote to this translation

This is a translation of the introduction to my book Metodo e Ordini nella Teoria Architettonica dei Primi Moderni (Geneva: Droz, Travaux d’Humanisme et Renaissance, 1993), edited, abridged, and adapted for clarity, but not updated. That book in turn derived from my PhD dissertation, supervised by Joseph Rykwert, researched and written between 1984 and 1989, and defended in the spring of 1990. Heavily influenced by Françoise Choay’s La Règle et le Modèle and by works of literary criticism by Terence Cave (The Cornucopian Text), Antoine Compagnon (La seconde main ou le travail de la citation), and Marc Fumaroli (L’âge de l’éloquence), all published between 1979 and 1980, my enquiry on the use of visual citations in Renaissance architectural design was evidently in the spirit of the time: post-modern architects in the 80s were passionate about citations (or the recycling of precedent, otherwise known as reference, allusion, collage and cut-and-paste); they were equally devoted to architectural history, and particularly to the history of Renaissance classicism. My aim then was to bridge the gap between those two sources of PoMo inspiration, showing that Renaissance architecture was itself, quintessentially, citationist. How could it have been otherwise, since the main purpose of Renaissance architects was to revive, literally, the buildings of classical antiquity – piece by piece? Thanks to the first studies of Lina Bolzoni on the sulphurous Renaissance philosopher and magician Giulio Camillo, and to my then girlfriend, who was studying Renaissance Neoplatonism (and is today a known specialist of that arcane science), I soon found evidence of an extraordinary link – biographical, ideological, and theoretical – between Giulio Camillo and Sebastiano Serlio, and I wrote a PhD dissertation to explain the transference of the citationist method from Bembo’s Prose to Camillo’s Theatre to Serlio’s Seven Books – and ultimately to Serlio’s architecture.

Unfortunately, in the process, I also found out that the citationist method in the 16th century was a tool and vector of modernity. It was a mechanical method, made to measure for the new technology of printing; it was also in many ways a harbinger of the scientific revolution that would soon follow. Besides, the citationist method was more frequently adopted by Evangelical and Protestant thinkers (particularly Calvinist), and it was condemned by the Counter-Reformation. None of this would have pleased the PoMo architects and theoreticians who were then my main interlocutors.

Fortunately for me, they never found out. When my book was published, in 1993, the tide of PoMo citationism was already receding. Investigating the sources of citationism was no longer an urgent matter for architects and designers. My book was published in Italian, in an austere collection of Renaissance studies – few architects would have known about it, let alone read it. It received some brutally disparaging reviews, as was due, by some of Tafuri’s acolytes, because they thought, without reading my book, or misreading it, that I was bringing water to the PoMo mill. I wasn’t. But at that point that was irrelevant. We had all already moved on.

I was pleasantly surprised when, a few years ago, Jack Self commissioned this translation for publication in Real Review (the translation, by Fabrizio Ballabio, was soon thereafter partially republished in Scroope, the journal of the Cambridge School of Architecture, at the request of Yasmina Chami and Savia Palate); and I was of course more than happy when my colleague Alessandro Bava asked me to review it for publication in the B-Pro journal of Bartlett School of Architecture. As we all know, collage and citation are becoming trendy again in some architectural circles – for reasons quite different from those of the late structuralists and early PoMos that were my mentors when I was a student. I have somewhat mixed feelings about the current, post-digital revival of collaging, but I would be happy to restart a discussion we briefly adjourned a generation ago.

Mario Carpo (March 2022)

Publication history:

Metodo e Ordini nella Teoria Architettonica dei Primi Moderni. Alberti, Raffaello, Serlio e Camillo (Geneva: Droz, 1993). 226 pages. Travaux d’Humanisme et Renaissance, 271

“Citations, Method, and the Archaeology of Collage”. Real Review, 7 (2018): 22-30, transl. by Fabrizio Ballabio and by the author; partly republished in Scroope, Cambridge Architectural Journal, 28 (2019): 112-119

Mereologies
Daniel Koehler, 2020
Introduction to Issue 01: Mereologies
25/10/2020
Architecture, Architecture Theory, Discrete Architecture, Mereologies, Mereology, Philosophy
Daniel Koehler
University of Texas at Austin
daniel.koehler@utexas.edu
Add to Issue
Read Article: 1572 Words

Part relationships play an important role in architecture: whether as an aspect of a Classical order, a harmonious joining of building components, a representation of space, a partition of spaces, or a body that separates us and identifies us as individuals. From the very outset, every form of architecture begins with an idea of how parts come together to become a whole, and an understanding of how this whole relates to other parts. Architecture first composes a space as part of a partitioning process, well before defining a purpose and before using any geometry.

The sheer performance of today’s computational power makes it possible to form a world without a whole, without any third party or third object. Ubiquitous computing fosters peer-to-peer or, better, part-to-part exchange. It is not surprising, then, that today’s sharing represents an unfamiliar kind of partiality. From distributive manufacturing to the Internet of Things, new concepts of sharing promise systematic shifts, from mass-customisation to mass-individualisation: the participation that computation enables is foundational. It is no longer the performance or mode of an algorithm that drives change but its participatory capacities. From counting links, to likes, to seats, to rooms: tools for sharing have become omnipresent in our everyday lives. Thus, that which is common is no longer negotiated but computed. New codes – not laws or ideologies – are transforming our cities at a rapid pace, but what kind of parthood is being described? How does one describe something only through its parts today? To what extent do the automated processes of sharing differ from the partitioning of physical space? How can we add to, intervene in and design such parts through architecture?

The study of the relationships between parts and their wholes is called mereology. In this issue of Prospectives, mereology’s theories and the specifics of part-relations are explored. The differences between parts and wholes, the sharing of machines and their aesthetics, the differences between the distributive and the collective, their ethical commitments, and the possibilities of building mereologies are discussed in the included articles and interviews.
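
For orientation, the ground axioms that nearly all mereological theories share can be stated in three lines, with $P(x,y)$ read as “x is part of y” (a compact summary, not a substitute for Lando’s discussion below):

\begin{align*}
&\forall x \; P(x,x) && \text{(reflexivity: everything is part of itself)}\\
&\forall x, y \;\; \big(P(x,y) \land P(y,x)\big) \rightarrow x = y && \text{(antisymmetry)}\\
&\forall x, y, z \;\; \big(P(x,y) \land P(y,z)\big) \rightarrow P(x,z) && \text{(transitivity)}
\end{align*}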

Just as mereology describes objects through their parts, this issue is partial. It is not a holistic proposal, but a collection of positions. Between philosophy, computation, ecology and architecture, the texts are reminders that mereologies have always been part of architecture. Mereology is, broadly, a domain that deals with compositional possibilities and relationships between parts. Such an umbrella – analogous to morphology, typology, or topology – is still missing in architecture. Design strategies that depart from part-to-part or peer-to-peer relations are uncommon in architecture, partly because there is (almost) no literature that explores these topics for architectural design. This issue hopes to make the extra-disciplinary knowledge of mereology accessible to architects and designers, but also wishes to identify links between distributive approaches in computation, cultural thought and built space.

The contributions gathered here were informed by research and discussions in the Bartlett Prospectives (B-Pro) at The Bartlett School of Architecture, UCL London from 2016 to 2019, culminating in an Open Seminar on mereologies which took place on 24 April 2019 as part of the Prospectives Lecture Series. The contributions are intended as a vehicle to inject foundational topics such as mereology into architectural design discourse.

The Contributions

This compilation starts with Giorgio Lando’s text “Mereology and Structure”. Lando introduces what mereology is for philosophers, why philosophers discuss mereological theses, and why they disagree with one another about them. His text focuses in particular on the role of structure in mereology, outlining that, from a formal point of view, part-relations are freed from structure. He argues that independence from structure might be the identifying link between mereology and architecture. The second article, “From Partitioning to Partaking”, is a plea for re-thinking the city. Daniel Koehler’s essay points to the differences between virtual and real parts. Koehler observes a new spatial practice of virtual representations that renders previous models of urban governance obsolete. He argues that the hyper-dimensional spaces of a big data-driven economy demand a shift from a partitioning practice of governance to more distributed forms of urban design. In “Matter versus Parts: The Immaterialist Basis of Architectural Part-Thinking”, Jordi Vivaldi Piera highlights the revival of matter in parallel to the discrete turn in contemporary discourses on experimental architecture. The essay gravitates around the notion of part-thinking in association with the notion of form. Fluctuating between the continuous and the discrete, the text sets out requirements for radical part-thinking in architecture. As a computational sociologist, David Rozas illustrates the potential of decentralised technologies for democratic processes at the scale of neighbourhood communities. After an introduction to models of distributed computation, “Affordances of Decentralised Technologies for Commons-based Governance of Shared Technical Infrastructure” draws analogies to Elinor Ostrom’s principles of commons governance and shows how those can be computationally translated, turning community governance into fully decentralised autonomous organisations.

Departing from the Corbusian notion of a ‘machine for living’, Sheghaf Abo Saleh defines a machine for thinking. In “When Architecture Thinks! Architectural Compositions as a Mode of Thinking in the Digital Age”, Abo Saleh states that the tectonics of a machine that thinks are brutal and rough. As a computational dialogue, she shows how roughness can enable posthumanism which, in her case, turns “tempered” parts into a well-tempered environment. Ziming He’s entry point for “The Ultimate Parts” is the notion of form as the relations between parts and wholes. The essay sorts architectural history through a mereological analysis, proposing a new model of part-to-part without wholes. Shivang Bansal’s “Towards a Sympoietic Architecture: Codividual Sympoiesis as an Architectural Model” investigates the potential of sympoiesis. By extending Donna Haraway’s argument of “tentacular thinking” into architecture, the text shifts focus from object-oriented thinking to parts. Bansal argues for the limits of autopoiesis as a system and conceptualises spatial expressions of sympoiesis as a necessity for an adaptive and networked existence through “continued complex interactions” among parts.

Merging aspects of “collective” and “individuality”, in “Codividual Architecture within Decentralised Autonomous System” Hao Chen Huang proposes a new spatial characteristic that she coins the “codividual”. Through an architectural analysis of individual and shared building precedents, Huang identifies aspects of buildings that merge shared and private features into physical form. Anthony Alvidrez’s paper “Computation Within Codividual Architecture” investigates the history and outlook of computational models in architecture. From discrete to distributed computation, Alvidrez speculates on the implications of physical computation, where physical interactions overcome the limits of automata thinking. In “Synthesizing Hyperumwelten”, Anna Galika transposes the eco-philosophical concept of the Hyperobject into a “Hyperumwelt”. While the Hyperobject is a closed whole that cannot be altered, a Hyperumwelt is an open whole that uses objects as its parts. The multiplicity of a Hyperumwelt offers a shift from the design of one object towards the impact of multiple objects within an environment.

Challenging the notion of discreteness and parts, Peter Eisenman asks in the interview “Big Data and the End of Architecture Being Distant from Power” for a definition of the cultural role of the mereological project. Pointing to close readings of postmodern architecture that were accelerated by the digital project, Eisenman highlights that the demand for a close reading is distanced from the mainstream of power. The discussion asks: ultimately, what can an architecture of mereology critique? The works of Herman Hertzberger are an immense resource on part-thinking. In the interview “Friendly Architecture: In the Footsteps of Structuralism”, Herman Hertzberger explains his principle of accommodation. When building parts turn into accommodating devices, buildings turn into open systems for staging ambiguity.

The issue concludes with a transcript from the round table discussion at the Mereologies Open Seminar at The Bartlett School of Architecture on 24 April 2019.

Acknowledgments

The contributions evolved within the framework of Bartlett Prospectives (B-Pro) at The Bartlett School of Architecture, UCL. I want to thank Frédéric Migayrou for his vision, commitment and long years of building up a research program, not only by architecture but through computation. I would like to thank Roberto Bottazzi for the years of co-organising the Prospectives Lecture Series, where plenty of the discussions that form the backbone of this issue took place. Thanks to Mario Carpo for raising the right question at the right time for so many people within the program, thanks to Andrew Porter for enabling so many events, to Gilles Retsin, for without the discrete there are no parts, Mollie Claypool for the editing and development of Prospectives journal, and Vera Buehlmann, Luciana Parisi, Alisa Andrasek, Keller Easterling, Matthew Fuller, John Frazer, Philippe Morel, Ludger Hovestadt, Emmanuelle Chiappone-Piriou, Jose Sanchez, Casey Rehm, Tyson Hosmer, and Jordi Vivaldi Piera for discussions and insights. 

I want to thank Rasa Navasaityte, my partner in Research Cluster 17 at B-Pro, for driving the design research. Thank you for the research contributed by the researchers and tutors: Christoph Zimmel, Ziming He, Anqi Su, Sheghaf Abo Saleh, and to all participants, specifically to highlight: Genmao Li, Zixuan Wang, Chen Chen, Qiming Li, Anna Galika, Silu Meng, Ruohan Xu, Junyi Bai, Qiuru Pu, Anthony Alviraz, Shivang Bansal, Hao-Chen Huang, Dongxin Mei, Peiwen Zhan, Mengshi Fu, Ren Wang, Leyla El Sayed Hussein, Zhaoyue Zhang, Yao Chen, and Guangyan Zhu.

The issue includes articles that evolved from thesis reports conducted in the following clusters: Ziming He from Research Cluster 3, tutored by Tyson Hosmer, David Reeves, Octavian Gheorghiu, and Jordi Vivaldi in architecture theory; Sheghaf Abo Saleh, Anthony Alvidrez, Shivang Bansal, Anna Galika, and Hao Chen Huang from Research Cluster 17, tutored by Daniel Koehler and Rasa Navasaityte. Where not indicated directly, the featured images and graphics of this issue are by Daniel Koehler, 2020.

Suggest a Tag for this Article
Penrose block simulation allowing objects to interact on a large scale and within three dimensions, forming a single whole object. Image: Anthony Alvidrez, Large City Architecture, RC17, The Bartlett School of Architecture, UCL, 2018.
Architectural Computation Within Codividual Architecture
Architecture, City Architecture, Composition, Computational Design, Mereologies, Mereology, Urban Design
Anthony Alvidrez
University College London
dan1alvidrez@gmail.com
Add to Issue
Read Article: 2080 Words

The design research presented here aims to develop a design methodology that can compute an architecture that participates within the new digital economy. As technology advances, the world needs to adapt quickly to each new advancement. Since the turn of the last century, technology has integrated itself within our everyday lives and deeply impacted the way in which we live. This relationship has been defined by T. M. Tsai et al. as “Online to Offline”, or “O2O” for short.[1] O2O means defining a service virtually while executing it physically, as platform-based companies like Uber, Airbnb and Groupon do. O2O allows for impact or disruption of the physical world to be made within the digital world. This has significantly affected economies around the world. 

Paul Mason outlined in Postcapitalism: A Guide to Our Future (2015) that developments in technology and the rise of the internet have created a decline in capitalism, which is being replaced by a new socio-economic system called “postcapitalism”. As Mason describes, “technologies we’ve created are not compatible with capitalism […] once capitalism can no longer adapt to technological change”.[2] Traditional capitalism is being replaced by the digital economy, changing the way products are produced, sold and purchased. There is a new type of good which can be bought or sold: the digital product. Digital products can be copied, downloaded and moved an infinite number of times. Mason states that it is almost impossible to produce a digital product through a capitalist economy due to the nature of the digital product. An example he uses is a program or software that can be changed throughout time and copied at little to no cost.[3] The original producer of the product cannot recoup their cost as one can with a physical good, leading to traditional manufacturers losing income from digital products. With the increase in digital products, the economy must adapt. 

In The Second Digital Turn (2017), Mario Carpo describes this phenomenon, stating that digital technologies are creating a new economy where production and transactions are done entirely algorithmically, and as a result are no longer time-consuming, labour-intensive or costly. This leads to an economy which is constantly changing and adapting to the current status of the context it is in. Carpo describes the benefits of the digital economy as follows: “[…] it would appear that digital tools may help us to recreate some degree of the organic, spontaneous adaptivity that allowed traditional societies to function, albeit messily by our standards, before the rise of modern task specialisation.”[4]

Computational Machines

It is useful to look at the work of Kurt Gödel and his theorems of mathematical logic, which are the basis for computational logic. His first theorem presents the term “axioms”: statements that are assumed to be true and from which other statements can be proved. The theorem states that “If axioms do not contradict each other and are ‘listable’ some statements are true but cannot be proved.”[5] This means that any system based on mathematical statements, axioms, cannot prove everything unless additional axioms are added to the list. From this Gödel derives his second theorem: a consistent system of axioms cannot prove its own consistency.[6] To relate this to programming, axioms can be seen as similar to code, yet not everything can be proved from a single system of code. 

Alan Turing’s work on computable numbers builds on these two theorems by Gödel. Turing was designing a rigorous notion of effective computability based on the “Turing Machine”. The Turing Machine was to process any given information based on a set of rules, or a programme the machine follows, provided by the user for a specified intention. The machine is fed with an infinitely long tape, divided into squares, which contains a sequence of information. The machine would “scan” a symbol, “read” the given rules, “write” an output symbol, and then move to the next symbol. As Turing described, the “read” process refers back to the rule set provided: the machine looks through the rules, finds the entry for the scanned symbol, then follows its instructions. The machine then writes a new symbol and moves to a new location, repeating the process over and over until the ruleset tells it to halt and deliver an output.[7] Turing’s theories laid down the foundation for the idea of a programmable machine able to interpret given information based on a given programme. 
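
To make the scan-read-write-move cycle concrete, here is a minimal sketch of such a machine in Python. The rule table, state names and the bit-inverting programme are illustrative assumptions, not drawn from Turing’s paper.

```python
# A minimal Turing machine: a tape, a head, a state, and a rule table
# mapping (state, scanned symbol) to (symbol to write, move, next state).

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank  # "scan"
        write, move, state = rules[(state, symbol)]         # "read" the rules
        if head < len(tape):
            tape[head] = write                              # "write"
        else:
            tape.append(write)
        head += 1 if move == "R" else -1                    # move to the next square
    return "".join(tape)

# Illustrative programme: walk right along the tape, inverting every bit,
# and halt on the first blank square.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", rules))  # -> "1001_"
```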

When applying computational thinking to architecture, it becomes evident that a problem based in the physical requires a type of physical computation. By examining the work of John von Neumann in comparison with that of Lionel Sharples Penrose, the difference between the idea of a physical computational machine and traditional automata computation can be explored. In his essay “Von Neumann’s Self-Reproducing Automata” (1969), Arthur W. Burks describes von Neumann’s idea of automata: the way in which computers think and the logic by which they process data. Von Neumann developed simple computer automata that functioned on simple switches of “and”, “or”, and “not”, in order to explore how automata could be created that are similar to natural automata, like cells and a cellular nervous system – making the process highly organic, and with it granting the ability to compute using physical elements and physical data. Von Neumann theorised a kinetic computational machine that would contain more elements than the standard automata, functioning in a simulated environment. As Burks describes, the elements are “floating on the surface, […] moving back and forth in random motion, after the manner of molecules of a gas.”[8] As Burks states, von Neumann utilised this for “the control, organisational, programming, and logical aspects of both man-made automata […] and natural systems.”[9] 

However, this poses issues of control, as the set of rules is simple but incomplete. To address this, von Neumann experimented with the idea of cellular automata. Within cellular automata he constructed a series of grids that act as a framework for events to take place, with a finite list of states in which each cell can be. Each cell’s state has a relation to its neighbours: as states change in each cell, this affects the states of each cell’s neighbours.[10] This form of automata constructs itself entirely on a gridded and highly strict logical system.
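
The grid logic can be sketched with a much simpler descendant of von Neumann’s automaton: Conway’s Game of Life, in which each cell holds one of two states and updates purely from the states of its eight neighbours. The seed pattern below is an arbitrary assumption for illustration.

```python
from collections import Counter

# A two-state cellular automaton on a strict grid (Conway's Game of Life):
# each cell's next state is a function of the states of its eight neighbours.

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives if it has three live neighbours, or two and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live_cells)}

# A "glider": a five-cell pattern that reproduces itself one square
# diagonally every four generations.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider, shifted by (1, 1)
```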

Von Neumann’s concept for kinetic computation found a physical counterpart in experiments done by Lionel Sharples Penrose in 1957. Penrose experimented with the intention of understanding how DNA and cells self-replicate. He built physical machines that connected using hooks, slots and notches. Once connected, the machines would act as a single entity, moving together, forming more connections and creating a larger whole. Penrose experimented with multiple types of designs for these machines. He began by creating a single shape from wood, with notches at both ends and an angled base, allowing the object to rock on each side. He placed these objects along a rail, and by moving the rail forwards and backwards the objects interacted and, at certain moments, connected. He designed another object with two identical hooks facing in opposite directions on a hinge. As one object moved into another, the hook would move up and interlock with a notch in the other element. This also allowed for the objects to be separated. If three of these objects were joined, and a fourth interlocked at the end, the objects would split into two equal parts. This enabled Penrose to create a machine which would self-assemble and then, when it grew too large, divide, replicating the behaviours of cellular mitosis.[11] These early physical computing machines would operate entirely on kinetic behaviour, encoding behaviours within the design of the machine itself, transmitting data physically.   
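
As a thought experiment only – Penrose’s machines were mechanical, and the counts and thresholds below are assumptions – the hooking-and-splitting behaviour he describes can be caricatured in a few lines: units on a shaken rail hook into chains, and a chain of three that captures a fourth unit divides into two equal pairs, echoing mitosis.

```python
import random

# Toy caricature of Penrose's rail experiment (all parameters assumed):
# chain sizes live in a list; shaking picks two chains at random, hooks
# them if the result stays small, and splits a chain of four into 2 + 2.

def shake(chains, steps=200, seed=1):
    rng = random.Random(seed)
    chains = list(chains)
    for _ in range(steps):
        i, j = rng.sample(range(len(chains)), 2)
        merged = chains[i] + chains[j]
        if merged > 4:
            continue                      # hooks miss: no interlock
        chains = [c for k, c in enumerate(chains) if k not in (i, j)]
        if merged == 4:                   # three joined plus a fourth:
            chains += [2, 2]              # split into two equal parts
        else:
            chains.append(merged)
    return sorted(chains)

# Twelve single units rocking on the rail.
print(shake([1] * 12))
```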

Experimenting with Penrose: Physical Computation

The images included here are of design research into taking Penrose objects into a physics engine and testing them at a larger scale. By modifying the elements to work within multiple dimensions, certain patterns and groupings can be achieved which were not accessible to Penrose. Small changes to an element, as well as to other elements in the field, affect how they connect and form different types of clusters. 

Figure 1 – Modified Penrose object simulation testing how individual objects interact and join together, forming patterns and connections through fusion. Image: Anthony Alvidrez, Large City Architecture, RC17, The Bartlett School of Architecture, UCL, 2018.

In the first simulation shown here, there is a spiralling hook. Within the simulations the element can grow in size, occupying more area. It is also given a positive or negative rotation. The size of the growth represents larger architectural elements, and thus takes more of the given space within the field. This leads to a higher density of elements clustering. The rotation of the spin provides control over which particular elements will hook together. Positive rotations will hook with positive ones, as will negative with negative, but elements of opposite spin will repel each other as they rotate.

Figure 2 – Penrose block simulation allowing objects to interact on a large scale and within three dimensions, forming a single whole object. Image: Anthony Alvidrez, Large City Architecture, RC17, The Bartlett School of Architecture, UCL, 2018.

Through testing different scenarios, formations begin to emerge, continuously adapting as each object moves. At a larger scale, how the elements will interact with each other can be planned for spatially. In larger simulations, certain groupings can be combined to create larger formations of elements connected through strings of hooked elements. This experimentation leads towards a new form of architecture referred to as “codividual architecture”: a computable architectural space created through the interaction and continuous adaptation of spatial elements. The computation of space occurs when individual spaces fuse together, becoming one new space indistinguishable from the original parts. This process continues, allowing a codividual architecture of constant change and adaptability.

Codividual Automata

Codividual spaces can be further supported by utilising machine learning, which computes parts at the moment they fuse with other parts: the connection of spaces, the spaces that change, and the way parts act as a single element once fused together. This leads to almost scaleless spatial types of infinite variation. Architectural elements move in a given field and, through encoded functions, connect, move, change and fuse. In contrast to what von Neumann was proposing, where the elements move randomly like gaseous molecules, these elements move and join based on an encoded set of rules, as in the sketch below.
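
The sketch below is a hedged, deterministic stand-in for that behaviour – the thesis work couples it with machine learning, while here only the simplest encoded rules are kept, and the drift step and fusion distance are assumptions: elements wander a field and, once within reach, fuse into a single element that then moves as one.

```python
import math, random

FUSE_DISTANCE = 1.0   # assumed reach of an element's connectors

def simulate(elements, steps=200, seed=7):
    """Each element is (x, y, size); size counts the original parts fused in it."""
    rng = random.Random(seed)
    elements = list(elements)
    for _ in range(steps):
        # Encoded movement rule: every element drifts slightly.
        elements = [(x + rng.uniform(-0.2, 0.2),
                     y + rng.uniform(-0.2, 0.2), s) for x, y, s in elements]
        # Encoded fusion rule: the first pair within reach fuses at its
        # midpoint and afterwards behaves as one new element.
        done = False
        for i in range(len(elements)):
            for j in range(i + 1, len(elements)):
                (x1, y1, s1), (x2, y2, s2) = elements[i], elements[j]
                if math.hypot(x1 - x2, y1 - y2) < FUSE_DISTANCE:
                    fused = ((x1 + x2) / 2, (y1 + y2) / 2, s1 + s2)
                    elements = [e for k, e in enumerate(elements)
                                if k not in (i, j)] + [fused]
                    done = True
                    break
            if done:
                break
    return elements

# A 3 x 3 field of unit elements.
field = [(float(i) * 3, float(j) * 3, 1) for i in range(3) for j in range(3)]
print(simulate(field))
```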

Figure 3 – Codividual architecture using machine learning. Image: COMATA, Anthony Alvidrez, Hazel Huang, and Shivang Bansal, Large City Architecture, RC17, The Bartlett School of Architecture, UCL, 2019.

Within this type of system, which merges principles of von Neumann’s automata with codividuality, traditional automata and state machines can be radically rethought by using machine learning to give architectural elements the capacity for decision-making. The elements follow a set of given instructions but also have additional knowledge allowing them to assess the environment in which they are placed. Early experiments, shown here in images of the thesis project COMATA, consisted of orthogonal elements that varied in scale, creating larger programmatic spaces designed to overlap and interlock with the movement of the elements. The design allowed the elements to cluster at a higher density when interlocking than in a linear, end-to-end connection.

Figure 4 – Barcelona super block simulation. Image: COMATA, Anthony Alvidrez, Hazel Huang, and Shivang Bansal, Large City Architecture, RC17, The Bartlett School of Architecture, UCL, 2019.

This approach offers a design methodology which takes into consideration not only the internal programme, structure and navigation of elements, but the environmental factors of where they are placed. Scale is undefined and unbounded: parts can be added to create new parts, with each new part created as the scale grows. Systems adapt to the contexts in which they are placed, creating a continuous changing of space and allowing for an understanding of the digital economics of space in real time.

References

[1] T. M. Tsai, P. C. Yang, W. N. Wang, “Pilot Study toward Realizing Social Effect in O2O Commerce Services,” in A. Jatowt et al., eds., Social Informatics, 8238 (2013).

[2] P. Mason, Postcapitalism: A Guide to Our Future, (Penguin Books, 2016), xiii.

[3] Ibid, 163.

[4] M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, Massachusetts: MIT Press, 2017), 154.

[5] P. Millican, Hilbert, Gödel, and Turing [Online] (2019), http://www.philocomp.net/computing/hilbert.htm, last accessed May 2 2019.

[6] Ibid.

[7] A. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, s2-42 (1937), 231-232.

[8] A. W. Burks, Von Neumann's Self-Reproducing Automata: Technical Report (Ann Arbor: The University of Michigan, 1969), 1.

[9] A. W. Burks, Essay on Cellular Automata, Technical Report (Urbana: The University of Illinois Press, 1970), 5.

[10] A. W. Burks, Essay on Cellular Automata, Technical Report (Urbana: The University of Illinois Press, 1970), 7-8.

[11] L. S. Penrose, “Self-Reproducing Machines,” Scientific American, 200 (1959), 105-114.

Suggest a Tag for this Article
Mereology, WanderYards, Genmao Li, Chen Chen and Xixuan Wang, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2017.
From Partitioning to Partaking, or Why Mereologies Matter
Architecture, Building, Digital, Digital Architecture, Discrete Architecture, Mereologies, Mereology, Participatory Design, Virtual
Daniel Koehler
University of Texas at Austin
daniel.koehler@utexas.edu
Add to Issue
Read Article: 5596 Words

Parts, chunks, stacks and aggregates are the bits of computational architecture today. Why do mereologies – or buildings designed from part-to-whole – matter? All too classical, the roughness of parts seems nostalgic for a project of the digital that aims at dissolving building parts towards a virtual whole. Yet if parts shrink down to computable particles and matter, and there exists a hyper-resolution of a close-to-infinite number of building parts, architecture would dissolve its boundaries and its capacity to frame social encounters. Within fluidity, and without the capacity to separate, architecture would not be an instrument of control. Ultimately, freed from matter, the virtual would transcend the real and form finally would be dead. Therein is the prospect of a fluid, virtual whole.

The Claustrophobia of a City that Transcends its Architecture

In the acceleration from Data to Big Data, cities have become more and more virtual. Massive databases have liquefied urban form. Virtual communication today plays freely across the material boundaries of our cities. In its most rudimentary form, virtuality lies within the digital transactions of numbers, interests and rents. Until a few years ago, financial investments in architectural form were equatable according to size and audience, e.g. as owner-occupied flats, as privately rented houses or as leaseholds.[1] Today capital flows scatter freely across the city at the scale of the single luxury apartment. Beyond a certain threshold in computational access, data becomes big. By computing aggregated phone signal patterns or geotagged posts, virtual cities can emerge from the traces of individuals. These hyperlocal patterns are more representative of a city than its physical twin. Until recently, architecture staged the urban through shared physical forms: the sidewalk, lane or boulevard. Adjacent to cars, walkable for pedestrians or together as citizens, each form of being urban included an ideology of a commons, and grounded with it particular parts of encountering.

Figure 1 – (left to right) Floor area comparisons between housing projects from the Brutalist era (top) and today (bottom): Previ, Atelier 5 vs Seguro, Kerez; La Sainte-Baume, Le Corbusier vs The Mountain, BIG; La Muralla Roja Calpe, Bofill vs Communal Villa, Dogma. Image: Daniel Koehler.

In contrast, a hyper-local urban transcends lanes and sidewalks. Detached from the architecture of the city, with no belonging left, urban speculation has withdrawn into the private sphere. Today, urban value is estimated by counting private belongings only, with claustrophobic consequences. An apartment held as a speculative investment displaces residents. The housing shortage in big cities today is not so much a problem of a lack of housing as of vacant space, accessible not to residents but to the interests held in the hyper-urban.[2] The profit from rent and use of space itself is marginal compared to the profit an embodied urban speculation adds to the property. The possibility of mapping every single home as data not only adds interest, like a pension, to a home but literally turns a home into a pension.[3] However, this is not for its residents but for those with access to resources. Currently, computing Big Data expands and optimises stakeholders’ portfolios by identifying undervalued building assets.[4] However, the notion of ‘undervalued’ is not an accurate representation of assets.

Hyper-localities increase real estate’s value in terms of how their inhabitants thrive in a neighbourhood through their encounters with one another and their surrounding architecture. The residents themselves then unknowingly produce extra value. The undervaluing of an asset is the product of its residents, and like housework, is unpaid labour. In terms of the exchange of capital, additional revenue from a property is usually paid out as a return to the shareholders who invested in its value. Putting big data-driven real estate into that equation would then mean that they would have to pay revenues to their residents. If properties create surplus value from the data generated by their residents, then property without its residents has less worth and is indeed over-, but not under-, valued.

Figure 2 – (left to right) City in a Building, City as a Building and City as an Element of Architecture. Image: University of Innsbruck, Daniel Koehler with Martin Danigel and Jordi Vivaldi, 2016-2018.

The city uses such vehicles for creating public revenue by governing the width of a street’s section or the height of a building. Architecture’s role was to provide a stage for that revenue to be created. For example, the Seagram Building (Mies van der Rohe and Philip Johnson, 1958) created a “public” plaza by setting back its envelope in exchange for a little extra height. By limiting form, architecture could create space for not only one voice, but many voices. Today, however, the city’s new parameters, hidden in the fluidity of digital traces, cannot be governed by the boundaries of architecture anymore. Already forty years ago, when the personal computer became available, Gilles Deleuze forecast that “Man is not anymore man enclosed”.[5] At that time, written as a “Postscript on the Societies of Control”, the fluid modulation of space seemed a desirable proposition. By liquefying enclosures, the framework of the disciplinary societies of Foucault’s writings would disappear. In modern industrial societies, Deleuze writes, enclosures were moulds for casting distinct environments, and in these vessels, individuals became the masses of mass society.[6] For example, inside a factory, individuals were cast as workers, inside schools as students. Man without a cast and without an enclosure seemed to be freed from class and struggle. The freedom of an individual was interlinked with their transcendence from physical enclosures.

Figure 3 – The Hyper-Nollie Plan, Daniel Koehler, 2019. Image: Daniel Koehler, 2019.

During the last forty years, architecture, framed by the relation between the single individual and the interior, rightly aimed to dissolve the institutional forms of enclosure that represented social exclusion at their exterior. Yet in this ambition, alternative forms for the plural condition of what it means to be part of a city were not developed. Reading Deleuze further, a state without enclosures also does not put an end to history. The enclosures of control dissolve only to be replaced. Capitalism would shift to another mode of production: where industrial exchange bought raw materials and sold finished products, it would now buy the finished products and profit from the assembly of those parts. The enclosure is then exchanged for codes that mark access to information. Individuals would not be moulded into masses but considered as individuals: accessed as data, divided into proper parts for markets, “counted by a computer that tracks each person’s position enabling universal modulation.”[7] Forty years on, Deleuze’s postscript has become the screenplay for today’s reality.

Hyper-Parts: Spatial Practices of Representations

A house is no longer just a neutral space, an enclosing interior where value is created, realised and shared. A home is the product of social labour; it is itself the object of production and, consequently, of the creation of surplus value. By shifting from enclosure to asset, the big data-driven economy has also replaced the project behind modernism: humanism. Architecture today is post-human. As Rosi Braidotti writes, “what constitutes capital value today is the informational power of living matter itself”.[8] The human being as a whole is displaced from the centre of architecture. Only parts of it, such as its “immanent capacities to form surplus-value”, are parts of the larger aggregation of architecture. Beyond the human, the Hyper-city transcends the humane. A virtual city is freed from its institutions and constituent forms of governance. Economists such as Thomas Piketty describe in painstaking detail how data-driven financial flows undermine common processes of governance, whether urban, regional or national, in both speed and scale. Their analysis shows that property transactions shelled in virtual value-creation bonds are opaque to taxation. Transcending regulatory forms of governance, one can observe the increase of inequalities on a global scale. Comparable to the extreme wealth accumulation at the end of the nineteenth century, Piketty identifies similar neo-proprietarian conditions today, seeing the economy shifting into a new state he coins “hypercapitalism”.[9] From Timothy Morton’s “hyperobjects” to hypercapitalism, the hyper replaces the Kantian notion of transcendence. It expresses not the absorption of objects into humanism, but its withdrawal. In contrast to transcendence, which subordinates things to man’s will, the hyper accentuates the despair of the partial worlds of parts – in the case of Morton in a given object, and in the case of Piketty in a constructed ecology.

When a fully automated architecture emerged, objects oriented towards themselves, and non-human programs began to refuse the organs of the human body. Just as the proportions of a data centre are no longer walkable, the human eye can no longer look out of a plus-energy window, because the window tempers the house, but not its user. These moments are hyper-parts: when objects no longer transcend into the virtual but despair in physical space. More and more, with increasing computational performance, following the acronym O2O (from online to offline),[10] virtual value machines articulate physical space. Hyper-parts place spatial requirements. A prominent example is Katerra, the unicorn start-up promising to take over building construction using full automation. In its first year of running factories, Katerra advertises that it will build 125,000 mid-rise units in the United States alone. If this occurred, Katerra would take around 30% of the mid-rise construction market in the company’s local area. Yet its building platform consists of only twelve apartment types. Katerra may see this physical homogeneity as an enormous advantage, as it increases the sustainability of its projects. This choice facilitates financial speculation, as the repetition of similar flats reduces the number of factors in the valuing of apartments and allows quicker monetary exchange, freed from many variables. Sustainability refers not to any materiality but to the predictability of its investments. Variability is still desired, but oriented towards finance and not towards inhabitants. Beyond the financialisation of the home, digital value machines create their own realities purely through the practice of virtual operations.

Figure 4 – The hyper-dimensional spaces of the digital economy are incompatible with cellular architecture. With every dimension added, the hull gains weight until it absorbs more space than its content. In pure mathematical calculation, the dividends associated with the living cell count for more than its inhabitants. Image: Daniel Koehler, 2019.

Here one encounters a new type of spatial production: the spatial practice of representations. At the beginning of what was referred to as “late capitalism”, the sociologist and philosopher Henri Lefebvre proposed three spatialities which described modes of exchange through capitalism.[11] The first mode, spatial practice, referred to a premodern condition which, through the use of analogies, interlinked objects without any form of representation. The second, representations of space, linked directly to production: the organic schemes of modernism. The third, representational spaces, expressed the conscious trade with representations – the politics of postmodernism and its interest in virtual ideas above the pure value of production. Though not limited to three only, Lefebvre’s intention was to describe capitalism as “an indefinite multitude of spaces, each one piled upon, or perhaps contained within, the next”.[12] Lefebvre differentiated the stages in terms of their spatial abstraction. Incrementally, virtual practices transcended from real-to-real to virtual-to-real to virtual-to-virtual. But today, decoupled from the real, a virtual economy computes physically within spatial practices of representations. Closing the loop, the real-virtual-real, or new hyper-parts, do not subordinate the physical into a virtual representation; instead, the virtual representation itself acts in physical space.

This reverses the intention of a modernism oriented towards organic architecture, which represented the organic relationships of nature in geometric thought. The organicism of today’s hypercomputation projects geometric axioms at an organic resolution. What was once a representation and a geometry distant from human activity now controls the preservation of financial predictability.

The Inequalities Between the Parts of the Virtual and the Parts of the Real

Beyond the human body, this new spatial practice of virtual parts today transcends the digital project that was limited to a sensorial interaction with space. This earlier understanding of the digital project reduced human activity to organic reflexes only, thus depriving architecture of the possibility of higher forms of reflection, thought and criticism. Often argued through links to phenomenology and Gestalt theory, the simplification of architectural form to sensual perception has little to do with phenomenology itself. Edmund Husserl, arguably the first phenomenologist, begins his work by considering the perception of objects, not as an end, but in order to examine the modes of human thinking. In the Logical Investigations, Husserl shows that thought can build a relation to an object only after having classified it, and therefore partitioned it. By observing an object before considering its meaning, one classifies it, which means identifying it as a whole. Closer observations recursively partition objects into more unaffected parts, which again can be classified as different wholes.[13] Husserl places parts before both thought and meaning.

Figure 5 – Mereologies, 2016. Image(s): (top) Genmao Li, RC17, MArch Urban Design, B-Pro, The Bartlett School of Architecture, UCL, 2016; (bottom) Zhiyuan Wan, Chen Chen, Mengshi Fu, RC17, MArch Urban Design, B-Pro, The Bartlett School of Architecture, UCL, 2016.

Derived from aesthetic observations, Husserl’s mereology was the basis of his ethics, and therefore concluded in societal conceptions. In his later work, Husserl’s analysis is an early critique of the modern sciences.[14] For Husserl, in their efforts to grasp the world objectively, the sciences have lost their role of enquiring into the meaning of life. In a double tragedy, the sciences also alienated human beings from the world. Husserl thus urged the sciences to recall that they ground their origins in the human condition, as for Husserl humanism was ultimately trapped in distancing itself further from reality.

One hundred years later, Husserl’s projections resonate in “speculative realism”. Coined by Levi Bryant as “strange mereology”,[15] objects, their belongings and inclusions are increasingly strange to us. The term “strange” stages the surprise that one is left with speculative access only. However, ten years in, speculation is not distant anymore. That which transcends does not only lurk in the physical realm. Hyper-parts figurate ordinary scales today, namely housing, and by this transcend the human(e) occupation.

Virtual and physical space are compositionally comparable. They both consist of the same number of parts, yet they do not. If physical elements belong to a whole, then they are also part of that to which their whole belongs. In less abstract terms, if a room is part of an apartment, the room is also part of the building to which the apartment belongs. Materially bound part relationships are always transitive, hierarchically nested within each other. In virtual space and the mathematical models with which computers are structured today, elements can be included within several independent entities. A room can be part of an apartment, but it can also be part of a rental contract for an embassy. A room is then also part of a house in the country in which the house is located. But as part of an embassy, the room is at the same time part of a geographically different country on an entirely different continent than the building that houses the embassy. Thus, for example, Julian Assange, rather than boarding a plane, only needed to enter a door on a street in London to land in Ecuador. Just with a little set theory, in the virtual space of law, one can override the theory of relativity with ease.
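
The asymmetry can be written out in a few lines. In the sketch below – the names and relations are illustrative assumptions – physical parthood is a single parent pointer, so “part of” is computed transitively up one material hierarchy, while virtual inclusion is an open set of memberships that lets the same room belong to wholes which never contain one another.

```python
# Physical parthood: every part has at most one material whole, so
# parthood chains are transitive up a single hierarchy.
physical_whole = {"room": "apartment", "apartment": "building"}

def physically_part_of(part, whole):
    while part in physical_whole:
        part = physical_whole[part]
        if part == whole:
            return True
    return False

# Virtual inclusion: the same element can be included in several
# independent wholes at once, with no shared hierarchy between them.
virtual_wholes = {
    "room": {"apartment", "rental contract", "embassy of Ecuador"},
    "apartment": {"building"},
    "building": {"United Kingdom"},
    "embassy of Ecuador": {"Ecuador"},
}

def virtually_included_in(part, whole):
    frontier, seen = {part}, set()
    while frontier:
        p = frontier.pop()
        for w in virtual_wholes.get(p, set()):
            if w == whole:
                return True
            if w not in seen:
                seen.add(w)
                frontier.add(w)
    return False

print(physically_part_of("room", "building"))           # True: transitive
print(virtually_included_in("room", "United Kingdom"))  # True, via the building
print(virtually_included_in("room", "Ecuador"))         # True, via the embassy
```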

Parts are not equal. Physical parts belong to their physical wholes, whereas virtual parts can be included in physical parts but do not necessarily belong to their wholes. Far more parts can be included in a virtual whole than can belong to a real whole. When the philosopher Timothy Morton says “the whole is always less than the sum of its parts”,[16] he reflects the cultural awareness that reality breaks due to asymmetries between the virtual and the real. A science that sets out to imitate the world is constructing its own. The distance which Husserl spoke of is not a relative distance between a strange object and its observer, but a mereological distance: two wholes distance each other when they consist of different parts. In its effort to reconstruct the world in ever higher resolution, modernism, and in its extension the digital project, has overlooked the issue that the relationship between the virtual and the real is not a dialogue. In a play of dialectics between thought and built environment, modernism understood design as a dialogue. Extending modern thought, the digital project has sought to fulfil the promise of performance: that a safe future could be calculated and pre-simulated in a parallel, parametric space. Parametricism, and more generally what is understood as digital architecture, stands not only for algorithms, bits and RAM but for the far more fundamental belief that in a virtual space one can rebuild reality. However, with each increase in the resolution at which science seeks to mimic the world, more parts are added to it.

Figure 6 – Illustrations of exemplary stairs constructed through cubes, Sebastiano Serlio, 1566. Image: public domain.

The Poiesis of a Virtual Whole

The asymmetry between physical and virtual parts is rooted in Western classicism. In early classical science, Aristotle divided thinking into the trinity of practical action, observational theory and designing poiesis. Since the division in Aristotle’s Nicomachean Ethics, design has been a part of thought and not a part of objects. Design is thus a knowledge, literally something that must first be thought. Extending this contradiction to the real object, design is not even concerned with practice, with the actions of making or using, but with the metalogic of these actions, the in-between of the actions themselves: the art of dividing an object into a chain of steps with which it can be created. In this definition, design means neither anticipating activities through the properties of an object (function), nor observing its properties (materiality), but partitioning, structuring and organising an object in such a way that it can be manufactured, reproduced and traded.

To illustrate poiesis, Aristotle made use of architecture.[17] No other discipline exposes so great a poetic gap between theory, activity and making. Architecture first deals with the coordination of the construction of buildings. As the architectural historian Mario Carpo outlines in detail, revived interest in classicism and the humanistic discourse on architecture began in the Renaissance with Alberti’s treatise: a manual that defines built space, and ideas about it, solely through words. Once thought and coded into words, the alphabet enabled the architect to distance himself physically from the building site and the built object.[18] Architecture as a discipline then does not start with buildings, but with the first instructions written by architects to delegate their building.

A building is then anticipated by a virtual whole that enables one to subordinate its parts. This is what we usually refer to as architecture: a set of ideas that preempt the buildings they comprehend. The role of the architect is to imagine a virtual whole drawn as a diagram, sketch, structure, model or any other kind of representation that connotes the axes of symmetry and transformation necessary to derive a sufficient number of parts from it. Architectural skill is then valued by the coherence between the virtual and the real, the whole and its parts, the intention and the executed building. Today’s discourse on architecture is the surplus of an idea. You might call it the autopoiesis of architecture – or merely a virtual reality. Discourse on architecture is a commentary on the real.

Figure 7 – Adrian Bowyer (left) and Vik Olliver (right) with a parent RepRap machine, and the first child machine, made by the RepRap on the left. Image: public domain.

Partitioning Architectures

From the very outset, architecture distanced itself from the building, yet also aimed to represent reality. Virtual codes were never autonomous from instruments of production. The alphabet and the technology of the printing press allowed Alberti to describe a whole ensemble distinct from a real building. Coded in writing, printing allowed for theoretically infinite copies of an original design. Over time, the matrices of letters became the moulds of the modern production lines. However, as Mario Carpo points out, the principle remained the same.[19] Any medium that incorporates and duplicates an original idea is more architecture than the built environment itself. Belonging to a mould, innovation in architectural research could be valued in two ways: quantitatively, in its capacity to partition a building at increasing resolution; qualitatively, in its capacity to represent a variety of contents with the same form. By this, architecture faced the dilemma that one would have to design a reproducible standard that could partition as many different forms as possible in order to build non-standard figurations.[20]

The dilemma of the non-standard standard mould is found in Sebastiano Serlio’s transcription of Alberti’s codes into drawings. In the first book of his treatise, Serlio introduces a descriptive geometry to reproduce any contour and shape of a given object through a sequence of rectangles.[21] For Serlio, the skill of the architect is to simplify the given world of shapes further, until rectangles become squares. The reduction finally enables the representation of physical reality in architectural space using an additive assembly of either empty or full cubes. By building a parallel space of cubes, architecture can be partitioned into a reproducible code. In Serlio’s case, architecture could be coded through a set of proportional ratios. From that moment on, however, stairs no longer consist only of steps; they have to be built with invisible squares and cubes too.

Today, Serlio’s architectural cubes are rendered obsolete by 3D printed sand. By shrinking parts to the size of a particle of dust, any imaginable shape can be approximated by adding one kind of part only. 3D printing offers a non-standard standard, and with this, five hundred years of architectural development comes to an end.

Figure 8 – Von Neumann’s illustrations describing automata as a set of linkages between nodes. Image: Arthur W. Burks, 1969, public domain.

Replicating: A Spatial Practice of Representations

3D printing dissolved existing partitioning parts into particles and dust. A 3D printer can not only print any shape but can also print at any place, at any time. The development of 3D printing was mainly driven by DIY hobbyists in the open-source community. One of the pioneering projects here is the RepRap project, initiated by Adrian Bowyer.[22] RepRap is short for replicating rapid prototyping machine. The idea behind it is that if you can print any kind of object, you can also print the parts of the machine itself. This breaks with the production methods of the modern age. Since the Renaissance, designers have crafted originals and built moulds from them so that they could print as many copies as possible. This also explains the economic valuation of the original and why authorship is so vehemently protected in legal terms. Since Alberti’s renunciation of drawings in favour of a more accurate production of his original idea through textual encoding, the value of an architectural work consisted primarily in the coherence of a representation with a building: a play of virtual and real. Consequently, an original representation that cast a building was more valued than its physical presentation. Architectural design was oriented towards reducing the amount of information needed to cast. This top-down compositional thinking of original and copy becomes obsolete with the idea of replication.

Since the invention of the printing press, the framework of how things are produced has not changed significantly. However, with a book press, you can press a book, but with a book, you can’t press a book. Yet with a 3D printer, you can print a printer. A 3D printer does not print copies of an original, not even in endless variations, but replicates objects. The produced objects are not duplicates because they are not imprints that would be of lower quality. Printed objects are replicas, objects with the same, similar, or even additional characteristics as their replicator.
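
In software, this replicating principle has a classic miniature, offered here purely as an illustrative aside (it is no part of the RepRap toolchain): the quine, a program whose output is its own source code – a replica produced without any external mould.

```python
# The two executable lines below print themselves exactly: the program's
# output is its own (comment-free) source, a replica without a mould.
s = 's = %r\nprint(s %% s)'
print(s % s)
```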

Figure 9 – Lionel S. Penrose, drawing for a physical implementation of a self-replicating chain of 3 units in length. Image: Photograph f40v, Galton Laboratory Archive, University College London, 1955.

A 3D printer is a groundbreaking digital object because it manifests the foundational principle of the digital – replication – at the scale of architecture. The autonomy of the digital is based not only on the difference between 0 and 1 but on the differences in their sequencing. In the mathematics of the 1930s, the modernist project of a formal mimicry of reality collapsed through Gödel’s proof of the necessary incompleteness of all formal systems. Mathematicians then understood that perhaps far more precious knowledge could be gained if one could learn to distance knowledge from its production. The circle of scientists around John von Neumann, who developed the basis of today’s computation, departed from one of the smallest capabilities in biology: to reproduce. Bits, as concatenations of simple building blocks with the integrated possibility of replication, made it possible, just by sequencing links, to build first logical operations, then programs, and, by connecting those programs, today’s artificial networks.[23] Artificial intelligence is artificial, but it is also alive intelligence.

To this day, computerisation, not computation, is at work in architecture. By pursuing the modern project of reconstructing the world as completely as possible, the digital project computerised a projective cast[24] in high resolution. Yet this was done without transferring the fundamental principles of interlinking and replication to the dimensions of built space.

Figure 10 – (left to right) Mereologies: WanderYards, Genmao Li, Chen Chen, and Xixuan Wang, 2016; Enframes, Kexin Cao, Yue Jin, Qiming Li, 2017; iiOOOI, Sheghaf Abo Saleh, Hua Li, Chuwei Ye, Yaonaijia Zhou, 2018. Image(s): RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2016-2018.

From Partitioning to Partaking

The printing press depends on a mould to duplicate objects. The original mould was far more expensive to manufacture than its copies, so the casting of objects had to bundle available resources. This required high investment to start production, leading to an increasing centralisation of resources in order to scale the mass-fabrication of standard objects on an assembly line. By contrast, digital objects do not need a mould. The self-replication offered by 3D printing means that resources do not have to be centralised. In this, digital production shifts to distributed manufacturing.[25]

Independent of any mould, digital objects as programs reproduce themselves seamlessly at zero marginal cost.[26] As computation progresses, a copy will have less and less value. Books, music and films fill fewer and fewer shelves because owning a copy no longer has value when they are ubiquitously available online. And the internet does not copy; it links. Although not yet fully integrated into its current TCP-IP protocol,[27] the basic premise of hyperlinking is that linked data adds value.[28] Links refer to new content, further readings, etc. With a close-to-infinite possibility of self-reproduction, the number of objects that can be delegated and repeated becomes meaningless. What then counts is the hyper-: the difference in kind between data, programs and, eventually, building parts. In his identification of the formal foundations of computation, the mathematician Nelson Goodman pointed out that beyond a specific performance of computation, difference, and thus value, can only be generated when a new part is added to the fusion of parts.[29] What is essential for machine intelligence is the dimensionality of its models, e.g. the number of its parts. Big data refers less to the amount of data than to the number of its dimensions.[30]
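
Goodman’s point admits a minimal reading – with the caveat that his calculus of individuals is not set theory, so the sets below are an assumed, illustrative stand-in: fusing in a part that is already included leaves the whole unchanged; only a genuinely new part generates a difference.

```python
# Sketch of Goodman's observation, modelling individuals as frozensets
# of atomic parts (an illustrative reduction, not Goodman's formalism).

def fuse(whole, part):
    """The fusion of a whole with a further part."""
    return frozenset(whole) | frozenset(part)

building = frozenset({"slab", "column", "stair"})

same = fuse(building, {"stair"})    # an already-included part
new = fuse(building, {"balcony"})   # a genuinely new part

print(same == building)  # True: no new part, no difference, no value
print(new == building)   # False: only the new part generates difference
```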

Figure 11 – Enframes, 2017. Image: Kexin Cao, Yue Jin, Qiming Lim, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2017.

With increasing computation, architecture shifted from an aesthetic of smoothness, which celebrated the mastership of an infinite number of building parts, to roughness. Roughness demands to be thought (brute). The architectural historian Mario Carpo is right to frame this as nostalgic, as “digital brutalism”.[31] Similar to the brutalism that wanted to stimulate thought, digital roughness aims to extend spatial computability: the capability to extend thinking, and the architecture of a computational hyper-dimensionality. Automated intelligent machines can accomplish singular goals but are alien to common reasoning. Limited around a ratio of a reality – a dimension, a filter, or a perspective – machines obtain partial realities only. Taking them whole excludes those who are not yet included and that which can’t be divided: the absolute of being human(e).

A whole economy evolved from the partial particularity of automated assets ahead of the architectural discipline. It would be a mistake to understand the ‘sharing’ of the sharing economy as having something “in common”. On the contrary, computational “sharing” does not partition a common use, but enables access to multiple, complementary value systems in parallel.

Figure 12 – Physical model, WanderYards, 2017. Image: Genmao Li, Chen Chen and Xixuan Wang, RC8, MArch Architecture Design, The Bartlett School of Architecture, UCL, 2017.

Cities now behave more and more like computers. Buildings are increasingly automated. They use fewer materials and can be built in a shorter time, at lower costs. More buildings are being built than ever before, but fewer people can afford to live in them. The current housing crisis has unveiled that buildings no longer necessarily need to house humans or objects. Smart homes can optimise material, airflow, temperature or profit, but they are blind to the trivial.

Figure 13 – Physical model, Slabrose, 2019. Image: Dongxin Mei, Zhiyuan Wan, Peiwen Zhan, and Chi Zhou, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2019.

It is a mistake to compute buildings as though they are repositories or enclosures, no matter how fine-grain their resolution is. The value of a building is no longer derived only from the amount of rent for a slot of space, but from its capacities to partake with. By this, the core function of a building changes from inhabitation to participation. Buildings do not anymore frame and contain: they bind, blend, bond, brace, catch, chain, chunk, clamp, clasp, cleave, clench, clinch, clutch, cohere, combine, compose, connect, embrace, fasten, federate, fix, flap, fuse, glue, grip, gum, handle, hold, hook, hug, integrate, interlace, interlock, intermingle, interweave, involve, jam, join, keep, kink, lap, lock, mat, merge, mesh, mingle, overlay, palm, perplex, shingle, stick, stitch, tangle, tie, unite, weld, wield, and wring.

In daily practice, BIM models do not highlight resolution but linkages, integration and collaboration. With further computation, distributed manufacturing, automated design, smart contracts and distributed ledgers, building parts will literally compute the Internet of Things and eventually our built environment, peer-to-peer, or better, part-to-part – via the distributive relationships between their parts. For the Internet of Things, what else should be its hubs besides buildings? Part-to-part habitats can shape values through an ecology of linkages, through a forest of participatory capacities. So, what if we can participate in the capacities of a house? What if we no longer have to place every brick, if we no longer have to delegate structures, but rather let parts follow their paths and take their own decisions, and let them participate amongst us together in architecture?

Figure 14 – Interior view of physical model, NPoche, 2018. Image: Silu Meng, Ruohan Xu, and Qianying Zhou. RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2018.
Figure 15 – Segregational section, WanderYards, 2017. Image: Genmao Li, Chen Chen and Xixuan Wang, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2017.

References

[1] S. Kostof, The City Assembled: The Elements of Urban Form Through History (Boston: Little, Brown and Company, 1992).

[2] J. Aspen, “Oslo – The Triumph of Zombie Urbanism,” in E. Robbins, ed., Shaping the City (New York: Routledge, 2004).

[3] The World Bank actively promotes housing as an investment opportunity for pension funds, see: The World Bank Group, Housing finance: Investment opportunities for pension funds (Washington: The World Bank Group, 2018).

[4] G. M. Asaftei, S. Doshi, J. Means, S. Aditya, “Getting ahead of the market: How big data is transforming real estate”, McKinsey and Company (2018).

[5] G. Deleuze, “Postscript on the societies of control,” October, 59: 3–7 (1992), 6.

[6] Ibid, 4.

[7] Ibid, 6.

[8] R. Braidotti, Posthuman Knowledge (Medford, Mass: Polity, 2019).

[9] T. Piketty, Capital and Ideology (Cambridge, Mass: Harvard University Press, 2020).

[10] A. McAfee, E. Brynjolfsson, Machine, platform, crowd: Harnessing our digital future (New York: W.W. Norton & Company, 2017).

[11] H. Lefebvre, The Production of Space (Oxford: Basil Blackwell, 1991), 33.

[12] Ibid, 8.

[13] E. Husserl, Logische Untersuchungen: Zweiter Teil. Untersuchungen zur Phänomenologie und Theorie der Erkenntnis, trans. “Logical Investigations: Part Two. Investigations into the Phenomenology and Theory of Knowledge” (Halle an der Saale: Max Niemeyer, 1901).

[14] E. Husserl, Cartesianische Meditationen und Pariser Vorträge, trans. “Cartesian Meditations and Parisian Lectures” (The Hague: Martinus Nijhoff, Husserliana edition, 1950).

[15] L. Bryant, The Democracy of Objects (Ann Arbor: University of Michigan Library, 2011).

[16] T. Morton, Being Ecological (London: Penguin Books Limited, 2018), 93.

[17] Aristotle, Nicomachean Ethics 14, 1139 a 5-10.

[18] M. Carpo, Architecture in the Age of Printing (Cambridge, Mass: MIT Press, 2001).

[19] M. Carpo, The Alphabet and the Algorithm (Cambridge, Mass: MIT Press, 2011).

[20] F. Migayrou, Architectures non standard (Paris: Éditions du Centre Pompidou, 2003).

[21] S. Serlio, V. Hart, P. Hicks, Sebastiano Serlio on architecture (New Haven and London: Yale University Press, 1996).

[22] R. Jones, P. Haufe, E. Sells, I. Pejman, O. Vik, C. Palmer, A. Bowyer, “RepRap – the Replicating Rapid Prototyper,” Robotica 29, 1 (2011), 177–91.

[23] A. W. Burks, Von Neumann's self-reproducing automata: Technical Report (Ann Arbor: The University of Michigan, 1969).

[24] R. Evans, The Projective Cast: Architecture and Its Three Geometries (Cambridge, Massachusetts: MIT Press, 1995).

[25] N. Gershenfeld, “How to make almost anything: The digital fabrication revolution,” Foreign Affairs, 91 (2012), 43–57.

[26] J. Rifkin. The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (New York: Palgrave Macmillan, 2014).

[27] B. Bratton, The Stack: On Software and Sovereignty (Cambridge, Massachusetts: MIT Press, 2016).

[28] J. Lanier, Who Owns the Future? (New York: Simon and Schuster, 2013).

[29] N. Goodman, H. S. Leonard, “The calculus of individuals and its uses,” The Journal of Symbolic Logic, 5, 2 (1940), 45–55.

[30] P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (London: Penguin Books, 2015).

[31] M. Carpo, “Rise of the Machines,” Artforum, 3 (2020).

Suggest a Tag for this Article
Church Sainte Geneviève, Clovis, 502. Drawing by Jean-Baptiste Rondelet, 1810. Image: Thomas Thibaut, 2018.
Matter versus Parts: The Immaterialist Basis of Architectural Part-Thinking
Architecture, Discrete Architecture, Form, Matter, Mereologies, Mereology
Jordi Vivaldi
Institute for Advanced Architecture of Catalonia, University College London and University of Innsbruck
jordivivaldipiera@gmail.com
Add to Issue
Read Article: 8094 Words

“Digital Matter”; “Intelligent Matter”; “Behavioural Matter”; “Informed Matter”; “Living Matter”; “Feeling Matter”; “Vibrant Matter”; “Mediated Matter”; “Responsive Matter”; “Robotic Matter”; “Self-Organised Matter”; “Ecological Matter”; “Programmable Matter”; “Active Matter”; “Energetic Matter”. No term enjoys a better reputation in today’s experimental architectural discourse. Gently provided by a myriad of studios hosted in pioneering universities around the world, these expressions illustrate the redemption of a notion that has traditionally been dazzled by form’s radiance. After centuries of irrelevance, “Matter” has recently become a decisive term; it illuminates not just the field of experimental architecture, but the whole spectrum of our cultural landscape: several streams in philosophy, art and science have vigorously embraced it, operating under the gravitational field of its holistic and non-binary constitution.

However, another Copernican revolution is flipping today’s experimental academic architecture from a different flank. In parallel to matter’s redemption, and after the labyrinthine continuums characteristic of the ’90s, discreteness claims to be the core of a new formal paradigm. Besides its Promethean vocation and renewed cosmetics, the discrete design model restores the relevance of a term that has traditionally been fundamental in architecture: the notion of the part. However, in opposition to previous architectural modulations, the part’s current celebration is traversed by a Faustian desire for spatial and ontological agency, which severely precludes any reverential servitude to its whole.

The singular coincidence of matter’s revival on the one side and the discrete turn on the other opens a debate about their possible conflicts and compatibilities in the field of experimental architecture. In this essay, the discussion gravitates around one single statement: the impossibility of a materialist architectural part-thinking. The argument unfolds by approaching a set of questions and analysing the consequences of their possible answers: how does matter’s revival contribute to architectural part-thinking? Is matter’s revival a mere importation of formal attributes? What are the requirements for a radical part-thinking in architecture? Is matter well equipped for this endeavour? In short, are the notions of matter and part-thinking compatible in an architectural environment?

Pre-Socratic philosophy defined matter as a formless primordial substratum that constitutes all physical beings. Its irrevocable condition is that of being “ultimate”: matter lies in the depths of reality as more fundamental than any definite thing.[1] Under this umbrella, pre-Socratic philosophy ramifies into two branches: the first associates matter with continuity, the second with discretism.

Anaximander is the standard-bearer of the first type: the world is pre-individual in character and is fuelled by the apeiron, a continuum to which all specific structures can be reduced. We can find traces of this sort of materialism in Gilles Deleuze’s “plane of immanence”, Bruno Latour’s “plasma”, or Jane Bennett’s “vibrant matter”. Democritus is the figurehead of the second type: the world is composed of sets of atoms, that is, privileged discrete physical elements whose distinct combinations constitute the specific entities that populate the world. Resonances of this sort of materialism can be found in the “quanta” of contemporary quantum mechanics. Independently of their continuous or discrete nature, both types of materialism are underpinned by an ontological assumption: the identification of matter with an ultimate cosmic whole. To this purpose, matter’s generic condition is decisive: its lack of specificity is precisely what grants matter the status of “ultimate”, which logically and chronologically precedes distinction.

Architecture’s conceptualisation of matter has not been impermeable to these philosophical discourses. In spite of the negative reputation that Aristotelian hylomorphism – absent in pre-Socratic philosophy and introduced, in different ways, by Plato and Aristotle – projected on matter by converting it into the reverential servant of form, in recent centuries many architectural projects have opposed this status quo by capitalising on both types of materialism. Since the Enlightenment, and still under form’s reign, matter has been recovering its pre-Socratic positive character by absorbing all the attributes traditionally ascribed to form. However, it has also operated a conceptual replacement that is crucial in this discussion: matter moved from a marginal role in a hylomorphic dualist scheme to the solitary leadership of an ultimate holism. As we will see below, in architecture and particularly since the Enlightenment, matter’s relevance has been gradually recovered through its association with two key concepts: truthfulness, emphasised by authors of the late 18th and 19th centuries such as Viollet-le-Duc or Gottfried Semper, and vitalism, underlined by authors of the 19th and early 20th centuries such as Henri Bergson or Henri Focillon.[2] Today this process has culminated with Eric Sadin’s notion of antrobology, that is, the “increasingly dense intertwining between organic bodies and ‘immaterial elfs’ (digital codes), that sketches a complex and singular composition which is determined to evolve continually, contributing to the instauration of a condition which is inextricably mixed ‘human/artificial’.”[3]

In this technological framework and through the notions of information, platform and performance, matter’s traditional attributes have been replaced by those of form. Despite keeping the term “matter” as a signifier, the disorder, passivity and homogeneity that conventionally characterised its significance have been substituted by form’s structure, activity and heterogeneity. However, one crucial feature that is absent in the dualistic hylomorphic model has been reintroduced: matter’s pre-Socratic condition of being ultimate.

This incorporation is decisive when it comes to architectural part-thinking. In spite of the great popularity that matter has achieved within contemporary experimental architecture, its ultimate condition precludes any engagement with architectural part-thinking: whether as a single continuous field or as a set of discrete particles, matter exalts a single holistic medium that lies at the core of reality, that is, a fundamental substratum (whole) to which all specific entities (parts) can be reduced. In a context in which designers use the power of today’s supercomputation to notate the inherent discreteness of reality instead of reducing it to simplified mathematical formulas or fields,[4] approaching reality through generic and Euclidean points (particles) rather than distinct elements (parts) constitutes an unnecessary process of reduction that dissolves the part’s autonomy.

This essay develops this argument in two steps. First, it states that the current culmination of matter’s revival process in experimental architecture is, paradoxically, nothing but the exaltation of form; under the same signifier, matter’s signification has been replaced by form’s signification: all attributes that in the hylomorphic model were associated with the latter have now moved to the former, converting matter’s signifier into just another term to conjure up the significance of form. However, there is a crucial pre-Socratic introduction in relation to the hylomorphic model: matter is now understood as being also the ultimate single substance of reality, and not just the compliant serf of another element (form). This holistic vocation can be traced in contemporary experimental architecture in parallel to matter’s pre-Socratic distinction between a continuous field (Anaximander’s apeiron) and a discrete set of particles (Democritus’s atoms).

Second, this essay argues that current materialism, in either of its two registers, is incompatible with architectural part-thinking. The argument first identifies and evaluates three groups of architectural parts (topological, corpuscular and ecological) in the current experimental architectural landscape, and second proposes a fourth, speculative architectural part based on the notion of limit. If the idea of part demands a certain degree of autonomy from the whole, it cannot be reducible to any ultimate substratum, and therefore matter’s holistic condition becomes problematic in both its continuous and its discrete register. However, the latter demands particular attention: discretism’s spatial countability might lead us to confuse the notion of particle with that of part. They differ significantly: while particles are discrete only from a mathematical perspective (countable), parts are discrete also from an ontological perspective (distinct). Parts require at least both dimensions of discreteness in order to be considered autonomous from any exteriority, while simultaneously keeping their capacity to participate in it.

Architectural part-thinking thus demands a radical formal approach. It requires a notion of form that operates at every level of scale, that is, an immaterialist model that recursively avoids any continuous (field) or discrete (particle) ultimate substratum to which parts could be reduced. This pan-formalism would then imply the presence of a form beyond any given form, understanding the term “form” as an autonomous spatio-temporal structure.

Matter’s Recovery Process in Architecture: Truthfulness, Vitalism and Antrobology

Since Ancient Greece, architecture has interpreted the notion of matter through Aristotle’s hylomorphic scheme: matter is a disordered, passive and homogeneous mass awaiting a structured, active and heterogeneous pattern (form). According to this framework, the architect is constituted as a demiurge: they operate from a transcendent plane in order to inform matter, that is, to structure its constitution through a defined pattern. However, since the Enlightenment, matter’s signifier has gradually replaced its signification with that of form through three concatenated strategies: truthfulness, vitalism and antrobology.

The concept of truthfulness in architecture should be read in opposition to the idealism of authors like Alberti or Palladio. In his De re aedificatoria, Alberti claimed that “architecture is not about artisan techniques but about ‘cose mentale’.”[5] What concerned him was not material attributes such as colour or texture, but the geometrical proportions of the forms that he produced with matter. This statement becomes evident in his façade for the Malatesta Temple of 1450.

Figure 1 – Malatesta Temple, Alberti, c. 1450. Image: Paolo Monti, Servizio fotografico, Rimini, 1972.

Conversely, some centuries later, authors like Ruskin, Viollet-le-Duc or Semper defended the relevance of matter in architecture, asserting that the choice of a material should depend on the laws dictated by its nature, such that “brick should look like brick; wood, wood; iron, iron, each according to its own mechanical laws.”[6] Rondelet and Choisy also gave importance to the truth of the material, particularly through their exhaustive constructional drawings.

Figure 2 – Church of Sainte-Geneviève, Clovis, 502. Drawing by Jean-Baptiste Rondelet, 1810. Image: Thomas Thibaut, 2018.

However, this group of authors still remained idealistic: the use of materials was determined by the idea that the architectural object was intended to express. In that sense, and although its internal structure was recognised, matter was still subordinate to an external idea, that is, to an external form.

Some decades later, in his Life of Forms in Art (1934), Henri Focillon dignified matter through a strategy based on a different concept: vitalism. Although arguing that the development of art is inextricably linked to external socio-political and economic circumstances, Focillon associated an autonomous formal mutation with it by underlining matter’s inherent capacity for movement and metamorphosis. Already present in the Baroque and empowered by the Enlightenment’s idea of “natura naturans”, concepts like the “Bildungstrieb”, the “Thatkraft” or the “Urpflanze” articulated a vitalist approach to matter closely related to German Expressionism. Ruskin and Semper’s seminal materialism based on the material’s truth gave way to a radical pragmatism in which architects used hybridised materials in order to relate to natural metamorphosis. Many glass-based projects from the early 20th century replicate these morphogenetic processes, an attitude already present in the Gothic. In resonance with Bergson’s élan vital, a hypothetical force that explains the evolution and development of organisms, certain uses of concrete imitated the formal exuberance of some morphogenetic natural processes, as can be seen in the Goetheanum by Rudolf Steiner in 1928 or the Einstein Tower by Erich Mendelsohn in 1921, but also, with different materials, in the Großes Schauspielhaus by Hans Poelzig in 1919.

Figure 3 – Großes Schauspielhaus, Poelzig, 1919. Image: image on paper, 18.5 × 24.2, Architekturmuseum der Technischen Universität Berlin, 1919.

Moreover, the use of concrete established a continuity between form and structure characteristic of the organic beings so greatly admired at the time. As a consequence, a progressive material vitalism was constituted through a hylozoic approach based on Einstein’s theories of matter–energy interconvertibility, which suggested a comprehension of matter as a set of energetic perturbations rather than mere inert mass. In this sense, and according to Henry van de Velde, matter had not only a mechanical value, but an active dispositionality that was the consequence of its “formal vocation”. However, vitalism also had its conservative reverse. Fuelled by the phenomenological work of Rasmussen and Norberg-Schulz, architects such as Herzog & de Meuron, Steven Holl or Peter Zumthor propose a haptic approach to architecture that relies on materials as symbolic shapers of architectural space. Under this scenario, and in close relation to Merleau-Ponty’s notion of “flesh”, matter is still understood as a holistic repository of tactile and cultural memory.

In parallel to the general disdain that Modernism showed for materiality during the first half of the 20th century, truthfulness and vitalism have, according to Eduardo Prieto, gradually contributed to the reconsideration of matter as a substance with a certain agency.[7] This process was based not on the exaltation of the passivity, neutrality and homogeneity that originally characterised matter, but on the importation of attributes from the notion of form. Ruskin’s truthfulness is based precisely on the understanding that matter has a specific inner character that makes it heterogeneous, while Steiner’s vitalism alludes to the metamorphic capacities of living beings.

However, both cases remain idealistic. Truthfulness asserts the need for an external form to choose the matter that best suits its purposes. Vitalism claims that matter should be seen as a material of organic expression that still needs an artist or architect to unveil its aesthetic potentialities of metamorphosis. In both cases, matter is still seen not just in opposition to an external form, but also under its control. In this sense, the vitalism defended by Bergson differs from the vitalism of Deleuze: for the former, matter is still a generic substance that needs an artist to particularise it, that is, an élan vital to form it. Conversely, for Deleuze, matter is an immanent reality: it provides form to itself and does not require any transcendental agent. This Deleuzian conception of matter has been emphasised today through New Materialism, whose statements in relation to the matter–form problem are based “on the idea that matter has morphogenetic capacities of its own and does not need to be commanded into generating form.”[8] In this sense, matter is no longer seen in opposition to form, that is, “it is not a substrate or a medium for the flow of desire, but it is always already a desiring dynamism, a reiterative reconfiguring, energised and energising, enlivened and enlivening.”[9]

This philosophical approach reverberates with our current technological condition. After the stages of truthfulness and vitalism, Sadin’s antrobology culminates an architectural recovery of matter that is paradoxically based on the replacement of its signification by that of form. Faced with a dual ontology that no longer alludes to Heideggerian human nudity but to a planet inhabited by algorithmic beings that live with and against us, Eric Sadin defines our technological scenario as antrobological. This notion expresses the “increasingly dense intertwining between organic bodies and ‘immaterial elfs’ (digital codes).”[10] The propagation of artificial intelligence and the multi-scalar robotisation of the organic establish, in addition to a change of medium, a change of condition: their algorithmic power does not merely offer itself as an automatic pilot for daily life, but also triggers a radical transformation of our human nature, setting up a perennial and universal intertwining between bodies and information. In this sense, the multidisciplinary generalisation of machine learning, progress in genetic engineering or the robotisation of the mundane no longer refer to a humanity that is merely improved or enriched, but to a humanity that is intertwined: it unfolds through a physiological platform woven by algorithmic, organic, robotic and ecologic agents whose symbiosis is not metaphorical or narrative, but strictly performative. It is precisely under this scenario that “artificial extelligence” becomes “artificial intelligence”: it executes an exercise of incorporation in which intelligence, eidos, or what has traditionally been understood as form, is no longer an external entity that articulates matter from outside, but its immanent circumstance.

The historical and incremental process of matter’s legitimation, based initially on the truthfulness of Ruskin and the vitalism of Steiner, culminates today with the celebration of the notions of platform, information and performance that singularise Sadin’s antrobology. Recent theorisations on concepts related to computation and design, such as Keller Easterling’s “medium”[11] or Benjamin Bratton’s “stack”,[12] are likewise deeply underpinned by these three expressions. However, it is crucial to note that the term “form” is present in all of them, each expression associated with one of form’s three main attributes: structure (information), activity (performance) and heterogeneity (platform).

While matter “is that which resists taking any definite contour”,[13] form refers to the active presence of a distinguished and qualified non-ultimate structure that contains other forms at every level of scale and that can occasionally change and establish relationships. It is under this framework that the previous terms should be read in relation to experimental architecture. To provide a platform means to provide the conditions for an evolving intertwining between forms that permits the promiscuous co-existence of difference, that is, of heterogeneity. Thus, a platform is not a field: in opposition to the latter, the former doesn’t permit any sort of reductionism; its elements are not mere emergences, as occurs with fields, but singularities with distinct origins. To provide information means to provide structure: it precludes disorder by establishing a spatio-temporal non-ultimate organisation. However, given that every entity already has a form and we cannot imagine a formless element, to inform actually means to transform. To provide performance, in contrast, means to present rather than represent: it produces an operative impact on the set of conditions in which it is placed, instead of merely representing an absent entity, as would be the case with a metaphor.

Under Sadin’s antrobology, the disorder, passivity and homogeneity that traditionally identified matter are replaced by the characteristics that qualified form in the hylomorphic model: structure (information), activity (performance) and heterogeneity (platform). However, if the process of matter’s legitimation is rooted in replacing its attributes with those of form, it is increasingly unsustainable to keep referring to it as “matter” when, especially in Sadin’s antrobology and from a hylomorphic point of view, matter is actually empty of matter and full of form.

Matter’s Ultimate Condition and Part-Thinking

However, the rupture of the hylomorphic dichotomy caused by matter’s absorption of form has implied the introduction of matter’s pre-Socratic condition: that of being ultimate. Matter is no longer understood as one of the components of a dualistic model, but as a single holistic substance whose structure, activity and heterogeneity underlie the emergence of any specific entity. This model, technologically underpinned by Sadin’s antrobology, has been articulated by contemporary experimental architecture according to the two types of materialism that differentiate pre-Socratic philosophy: as a continuous field (Anaximander’s apeiron) or as discrete particles (Democritus’s atoms). However, their common “ultimate” condition obstructs architectural part-thinking: if the notion of part demands an autonomy that can be exhausted neither in its outer participation in a bigger ensemble nor in its inner constitution through a smaller one, matter’s holism becomes problematic. Indeed, if any entity (part) can be deduced from a privileged underlying substratum (whole), its autonomy is called into question.

Anaximander’s apeiron model is the most popular representative of pre-Socratic continuous approaches to matter. For the Greek philosopher, apeiron refers to the notions of the indefinite and the unlimited, alluding explicitly to the origin (arché) of all forms of reality. Precisely because apeiron, as suggested by its etymology, is that which cannot be limited, it doesn’t have in itself any specific form, that is, it is not definable. It is therefore a continuous material substratum, vague and boundaryless, capable of supporting the opposites from which all the world’s differentiation emerges. Besides Bruno Latour’s “plasma”, described by its author as that unknown and material hinterland which is not yet formatted, measured or subjectified, one of the most popular contemporary elaborations of this holistic theory of the apeiron is Jane Bennett’s “throbbing whole”. For the American philosopher, objects would be “those swirls of matter, energy, and incipience that hold themselves together long enough to vie with the strivings of other objects, including the indeterminate momentum of the throbbing whole”, something that according to Harman “we already encountered, in germ, in the pre-Socratic apeiron”.[14] Besides pure formal continuities such as Alejandro Zaera’s Yokohama (2000) or François Roche’s Asphalt Spot (2002), we can find a similar holistic vocation in projects such as Neri Oxman’s BitMap Printing (2012), Mette Ramsgard Thomsen’s Slow Furl (2008), and Poletto-Pasquero’s Urban Algae Follies (2016). Its renovated notion of matter is usually referred to as behavioural matter, living matter, ecological matter, digital matter, expanded matter, data-driven matter or intelligent matter.

Paradoxically, what is relevant in all these expressions is not the term matter, but its qualifier, which systematically refers to spatio-temporal formal arrangements rather than to hylomorphic attributes of matter, emphasising the relevance of form as identifier over matter. Neri Oxman’s “material ecology” is an emblematic example of this phenomenon. Oxman defines this expression as “an emerging field in design denoting informed relations between products, buildings, systems and their environment”.[15] The architect uses the term “informed” in reference to information, and therefore alludes to matter’s inner structure. However, if “matter” is informed, it is no longer a homogeneous and amorphous substance; it contains a digital or a physical structure that operates at every level of scale. Her project BitMap Printing (2012) acts as a platform that intertwines natural, human and algorithmic agents, whose activity has performative consequences rather than symbolic references. In this sense, given that the project is informed, acts as a platform and performs, it is hard to understand why, under a hylomorphic scheme, we refer to it as a specific configuration of matter rather than as a particular type of form.

However, these three projects, together with the work of authors such as Marcos Cruz, Philip Beesley or Areti Markopoulou, introduce a pre-Socratic attribute of matter absent in the hylomorphic scheme: matter’s condition of being ultimate. In particular, we can find this attribute in the continuous version developed by Anaximander through the notion of apeiron. As we can see in projects such as the Hylozoic Garden (2010) by Philip Beesley, full relationality and complete interconnectedness are the basis of a systemic approach to architecture in which the conceptual idea of field articulates Delanda’s “continuous heterogeneity”.

The project is based on the ancient belief that matter has life and should be understood, according to its author, as an active environment of processes rather than as an accumulation of objects. Unlike hylomorphic matter, the anti-maternalistic matter evoked by the Hylozoic Garden does not contain an Aristotelian pattern that provides structure to it, but is instead self-formed, that is, structured, active and heterogeneous. However, specific parts are always an emergence from an underlying holistic field, that is, a whole. Indeed, continuity is actually capable of producing objects: continuity on one level creates episodic variation on the next, which may be presented as discrete elements, but these are always dependent on that first gradual variation. Under this scheme, part-thinking is very limited because specificity is always a deduction from a privileged underlying substratum. Parts are thus deprived of their autonomy, being instead exhausted in their participation as subsidiary members of a whole. As Daniel Koehler suggests, “departing from parts a preconceived whole or any kind of structure does not exist. Parts do not establish wholes, but measure quantities.”[16] And quantities, indeed, begin with individuals, that is, with discreteness.

However, the notion of “discreteness” needs differentiation: not all interpretations of this term permit us to understand its individuals as parts. In this sense, it is crucial to note that pre-Socratic philosophy also articulates a type of materialism based on discreteness: besides the continuity emphasised by Anaximander’s apeiron, Democritus’s atomic model is the most popular representative of this discrete approach to matter. For the Greek philosopher, atoms are not just eternal and indivisible, but also homogeneous, that is, generic. Although atoms differ in form and size, their internal qualities are constant in all of them, producing difference only through their grouping modes. Atoms are then particles: generic individuals whose variable conglomerates produce the difference that we observe in the world. As Graham Harman affirms, this form of materialism is “based in ultimate material elements that are the root of everything and higher-level entities are merely secondary mystifications that partake of the real only insofar as they emerge from the ultimate material substrate.”[17]

The atomic model is thus a reductionist model: the different specificities that make up the world are mere composites of a privileged and ultimate physical element. In opposition to the continuous form of materialism, the discrete atomic type is easily misunderstood when it comes to considering its part-thinking capacities, due to a frequent confusion between “part” and “particle”. This association is especially present nowadays in experimental architectural design, particularly under the notion of the “digital” and its inherent discrete nature. Today’s computational power has been aligned with this position through the recognition that “designers use the power of today’s computation to notate reality as it appears at any chosen scale, without converting it into simplified and scalable mathematical formulas or laws.”[18] It assumes “the inherent discreteness of nature”,[19] where the abstract continuity of the spline doesn’t exist. However, this process of architectural discretisation needs differentiation in order to be understood in relation to the notion of part, defined here as an interactive and autonomous element which is not just countable (mathematically discrete) but also distinct (ontologically discrete). Within the contemporary discrete project, three groups of architectural approaches to the notion of part, together with a speculative proposition, need to be distinguished according to their relation to matter’s ultimate condition: topological parts, corpuscular parts, ecological parts and limital parts.

Topological Parts, Corpuscular Parts, Ecological Parts, Limital Parts

There is a first group of proposals in which parts are topological parts; in spite of the granular appearance of their architectural ensembles, their vocation is still derived from the parametric project: the continuity of its splines has reduced its resolution through a process of “pixelisation”, but it still operates under the material logic of an ultimate field. The notion of topology should be read here under the umbrella of the Aristotelian concept of topos. While Plato’s term chora refers to a flat and neutral receptacle, the term topos refers to a variable and specific place. In contrast to the flat spaces of modernity, the three-dimensional variability of 1990s spaces produces topographic surfaces in which every point is singular. This results in “a constant modification of the space that leads to a changing reading of the place,”[20] implying the shift from Plato’s chora to Aristotle’s topos. Unlike the universal abstraction of the former, in the Physics Aristotle “identifies the generic concept of space with another more empirical concept, that of ‘place’, always referred to with the term topos. In other words, Aristotle looks at space from the point of view of place. Every body occupies its specific place, and place is a fundamental and physical property of bodies.”[21]

This is very clear in the following text by the Stagirite: 

“Again, place (topos) belongs to the quantities which are continuous. For the parts of a body which join together at a common boundary occupy a certain place. Therefore, also the parts of place which are occupied by the several parts of the body join together at the same boundary at which the parts of the body do.”[22]

Aristotle defines topos as a continuous and three-dimensional underlying substratum, but above all as an empirical and localised substratum.

The rhizomatic twists associated with these projects, underpinned by the intensive use of computational tools, seem to oppose the homogeneity of their parts. According to Peter Eisenman, “while Alberti’s notational systems transcribed a single design by a single author, computation has the capacity to produce multiple iterations that the designer must choose from.”[23] Computers function as generators of variability, a fact that seems to promote Eisenman’s inconsistent multiples, calling into question Alberti’s homogeneous spatiality. However, in spite of being countable and distinct, the constitution of the parts associated with projects such as BIG’s Serpentine Pavilion (2016) and The Mountain (2008) or Eisenman’s Berlin Memorial (2005) is reducible to one single formula or equation, that is, a consistent and calculable single medium (parametricism). Their discrete look is provided by a set of elements which are countable, distinct and interactive, but which cannot be read as parts because their autonomy is restricted for a twofold reason: both their distinction and their position depend on an ultimate system of relations which is external to the logic of its individuals, evoking therefore the apeiron type of materialism. In this sense, parts here should be read as components: their location and form are subordinated to the topological bending of a general surface, precluding any type of part autonomy.

There is a second group of experimental projects in which parts are corpuscular parts. Here, architectural ensembles are formalised through countable and qualitatively identical corpuscles, that is, individual entities which are not systematised by any external and preconceived structure. Its advocates follow a path similar – even if this is not their conscious intention – to that of Walter Gropius, Mies van der Rohe and Le Corbusier when they freed themselves from the straitjacket of the symmetry characteristic of the 19th-century Beaux-Arts, championed by architects such as Henri Labrouste or Félix Duban. However, corpuscular parts differ from modern parts in that they are formally identical to one another despite performing different functions. Mario Carpo relates some of this work to Kengo Kuma’s Yure Pavilion (2015) and GC Prostho Museum Research Center (2010) under the expression “particlised.”[24] The term relates to a non-figural, aggregational or atomised way of producing architecture, in which, Kuma states, “each element needs to be relieved from contact or structure beforehand, and placed under free conditions.”[25]

Experimental projects such as Bloom (2012) by Alisa Andrasek and José Sánchez or Flow (2016) by Calvin Fung and Victor Huynh also participate in this process of “particlisation” by relying on an ultimate, generic and privileged element: in opposition to modernist assemblies, and in resonance with some of the early work of Miguel Fisac, “the building blocks are not predefined, geometric types – like columns or slabs – that only operate for a specific function,”[26] and unlike parametricism they do not derive from a predefined whole.

Instead, the particle’s specific function is an emergent attribute of its interaction. In this sense, what gives specificity to these generic particles is not an a priori and fixed structure, as in modernism, but an a posteriori and evolving relationality with the world. This conflicts with the requirement of autonomy demanded by parts for two reasons. On the one side, if the part’s specificity is exhausted by its outer relationality, its nomos comes from outside, and we are therefore in Kantian heteronomy rather than autonomy. On the other side, if parts are originally generic, they refer to an original standard type which is holistic precisely because it is shared by default by all its members. The fact that specificity is an emergent property in which parts are defined exclusively by their relationships with other parts has been interpreted as their emancipation with respect to the notion of whole. Timothy Morton describes this type of relational process as “simply the last philosophical reflex of modernity”.[27]

Indeed, the instrumental reason characteristic of modernity is still behind this type of operation, because emergent processes are teleological processes. “Emergence is always emergence for”[28] because there is always a holistic target that subjugates the parts to the benefit of the whole. As such, we are not dealing with a mereology of parts, but rather a mereology of particles: each element is not an incomplete piece that is unique in its identity and therefore irreducible (part), but rather a generic ultimate element that becomes specific at the price of being relationally dissolved into the whole to which it belongs (particle). Its being is defined precisely by the relationships it establishes with other elements, and those relationships are the way they are because they are beneficial to a whole.

Timothy Morton affirms that moving past modernity implies the need for a “philosophy of sparkling unicities; quantised units that are irreducible to their parts or to some larger whole; sharp, specific units that are not dependent on an observer to make them real.”[29] Despite their local character, the relations that regulate individuals undervalue the parts on the one hand and overvalue the whole on the other. They undervalue the parts by fully determining their specific behaviour according to external factors, their original character being generic. They overvalue the whole by varying individuals’ specific behaviour according to the benefit of the whole. This position facilitates the emergence of a framework in which bits are associated literally with parts, and the act of counting is frequently confused with an act of discretisation. It is then crucial to differentiate mathematical discreteness from ontological discreteness. While the first alludes to countable elements (particles), the second alludes to distinct elements (parts).

Figure 4 – Mathematical Discreteness, Ontological Discreteness, 2020. Image: Jordi Vivaldi, 2020.

The lack of distinction characteristic of generic particles prevents their approach through an exercise of architectural “part-thinking”. Instead, we are confronted with the discrete type of materialism elaborated by pre-Socratic philosophy. Although its ultimate condition permits individuals’ participation, it ignores part-thinking’s requirement of autonomy under a masked heteronomy, which provides specificity to generic particles at the cost of their exhaustion in external relationality.

There is a third group of recent experimental architectural proposals in which parts are ecological parts; they operate as a set of distinct objects that intertwine with one another under the gravitational field of different systems. The notion of ecology should be interpreted here in keeping with the etymology of the Greek term oikos. Its meaning is that of “house”, understood as the set of people and objects forming a domestic space regulated by the economy (the nomos of the oikos).

However, the term oikos has traditionally been associated with another very similar one: oikia. Both have been translated as “house”, in the most general sense of the word. Nonetheless, Xenophon outlines a distinction[30] that, although not entirely accepted by all Greek authors, is very useful in approaching the question at hand. The Greek philosopher asserts that the expression oikos refers to a house in the strict sense of a place of residence, whereas the expression oikia denotes not only the house but also the property it contains and its inhabitants. 

Based on this distinction, the word oikia would refer to a collection of elements of different natures and sizes whose coexistence and eventual interlacement would give rise to a specific spatial conception. It is formed not only by the house itself, but also by the property it contains (animals, instruments, jewellery, furniture, etc.) and its inhabitants. It would therefore be a large composite of objects whose eventual interlacements over time would form what Xenophon defines as domestic space. In that sense, these spaces not only contain and are contained by other spaces simultaneously; they also never appear as completely closed elements, despite remaining identifiable and extractable. Oikia is thus not produced from a passive Platonic receptacle (chora) or an active Aristotelian substrate (topos); it is constructed instead from the multi-scalar co-existence of various groups and subgroups of systems. The ecological parts characteristic of this branch of experimental architectural projects represent, in different ways, a departure from the materialism analysed in the previous cases. They find an example avant la lettre in the work of Jean Renaudie, particularly in his two housing complexes in Ivry-sur-Seine (1975) and Givors (1974).

Although not all parts fully coincide with the definition provided here, the discreteness of these projects operates with autonomous discrete entities that cannot be interpreted under a materialistic framework; there is no ultimate element acting as an underlying substratum (continuous or discrete) to which entities can be reduced. However, as we have seen, the notion of ecology implies the presence of oikia, that is, a house: a common denominator whose presence can be traced in these projects through a formal homogeneity that traverses the whole composition.

We can find a wide range of experimental architectural formal strategies working in this direction. Daniel Koehler’s Hyper-Nollie (2019) develops a complicit discreteness with more than 40 different parts that are always cooperative and incomplete, never single entities, never fully defined, never identical. However, the continuous connection of its spaces, and the fact that each one of them is accessible from each part, seem to formally evoke the logics of a relational field, particularly through the homogeneous granularity revealed by a general overview. Nevertheless, the project’s tension between the distinct discreteness of its close view and the texturised continuity of its far view precludes any attempt to simply reduce its parts to an underlying material substratum: each part positions its own interpretation of context through a complex balance between identity (inherent distinction) and relationality (local complicities).

Despite its assumption of the voxel as a standard unit and its complicity with Christopher Alexander’s notion of structure, José Sánchez’s Block’hood (2016) also tends to avoid the possibility of any full material reduction to an ultimate being. In spite of its underlying 3D grid, the project provides each voxel with a specific performative behaviour whose specificity is not merely underpinned by relationality, but is partly inherent to its constitution. In this sense, each unit approaches our definition of part because, despite its underlying common framework, the voxel’s singularity cannot be merely reduced to that framework or to its relations. Rasa Navasaityte’s Urban Interiorities (2015) approaches the notion of part through a recursive structure of groups inside groups: there is no ultimate element from which the rest of the compositions can be derived, but a recursive process. This partly acts as a holistic system of form production, while at the same time permitting the presence of distinction beyond countability.

These projects represent the different nuances of a part: they operate through the tension established between the part’s autonomy and the part’s participation, i.e., the part’s capacity to be inherently distinct and, at the same time, its capacity to retain something in common with other parts in order to permit local and ephemeral complicities. This type of mereology resonates with what Levi Bryant has defined as a “strange mereology”: “one object is simultaneously part of another object and an independent object in its own right.”[31] Indeed, on the one side, the parts that we have seen in this last group of projects are autonomous beings in the world that cannot be reduced to other parts. But at the same time, parts are composed by other parts, compose other parts, and relate with other parts. In synthesis, part-thinking demands that parts execute what seems to be a paradox: their constitution as countable and distinct entities that are both independent and relational.

We could synthesise the different approaches towards the definition of part presented here as follows: the first group of projects, constituted by what we have defined as topological parts, leaves aside the part’s autonomy in favour of an underlying field of relations. The second group, whose parts are defined as corpuscular parts, emphasises the part’s countability (mathematical discreteness) instead of the part’s inherent distinction (ontological discreteness). The third group, composed of ecological parts, still retains a vague remainder of a general background (oikia) that vectorises the parts’ distribution. In all of them, matter’s ultimate condition is still present, although in a blurry and definitely weakened version, particularly in the last one. However, we could briefly speculate on a fourth group of architectural parts, associated with the notion of limit, that would emerge from the radical limitation of matter’s ultimate condition.

Figure 5 – Topological, Corpuscular, Ecological and Limital Parts, 2020. Image: Jordi Vivaldi, 2020.

The notion of limit is at the core of architecture. If we understand the architectural practice as the production of interiorities, that is, as the production of spaces within spaces, the idea of a border distinguishing them is decisive. In this sense, the etymology of the term “temple” is particularly revealing: its root “-tem”, present also in the terms témenos, templum, and “time”, indicates the idea of a cutout, a demarcation, a frontier, a limit instrumentalised in order to separate the sacred realm of the gods from the profane territory of humans. In ancient Rome, the construction of a temple began with the cumtemplatio, the contemplative observation of a demarcated zone of the sky by the augurs. Through the attentive observation of birds, the sun and the clouds’ trajectories within the selected celestial area, the augurs interpreted the auspices of the city that was about to be founded. Once the observation was completed, the demarcated zone of the sky was projected onto the ground in order to trace the contours of the future temple, the germinal cell of the coming city. Cumtemplatio was thus cum-tem-platio: the tracing of the limits through which the cosmos took on meaning and signification by being projected onto the earth and establishing the ambit in which the humans could purposively inhabit the world. Thus, the temple instrumentalised the limit not just as a border between what is sacred and what is profane, that is, between inside and outside, but also as a space in itself, as a frontier territory mediating between the celestial realm of the gods and the terrestrial realm of humanity.

The spatialised register of the limit evoked by the temple, aligned with notions such as the Christian limbo or the Roman limes, lays the foundation for the type of immaterialist parts hypothesised here with the expression limital parts. They expand the decreasingly shy immaterialism present in topological parts, corpuscular parts and ecological parts by precluding any reduction to matter’s ultimate condition. In order to do so, limital parts are liminal, limited and limitrophe, three decisive attributes aligned with supercomputation’s capacity to avoid parametric reductionism.

First, limital parts are liminal, that is, they are the locus of junction and disjunction. The notion of liminality should be read under its instrumentalisation by Arnold van Gennep and Victor Turner: the limit is not the Euclidean divider line at the core of the Modern Movement’s programmatic zonification but, in its anthropological register, the frontier territory that in a rite of passage mediates between the old and new identity of its participants. The liminality of parts constitutes a daimonic space whose nature is that of “differential sameness and autoreferential difference”.[32] If the limit is in itself and by itself internal differentiation, if in its re-flection the limit separates and divides, then limital parts should necessarily join and disjoin, or, more accurately, limital parts should join what they disjoin. The liminality of limital parts does not mean that their composition is simply the random juxtaposition of a litany of solipsistic monads: in their symbiotic intertwinings, the different liminal parts establish clusters and sub-clusters of performative transfers that are constantly sewing and resewing the limit’s limits. Their operativity is not always structured by harmonic consensus; they engage in constant resistance and deviation. They produce spontaneous symbiotic interlacements that overlap without any preconceived agreement, and certainly not without décalages, displacements and misfits.

Second, limital parts are limited, that is, they are distinct and determined. The notion of limitation should be read under its Hegelian instrumentalisation: “The limit is the essentiality of something, it is its determination.”[33] Thus, to limit means to define; the Latin term definire signifies to trace the borders of something in order to separate it from its neighbours. Definire is the establishment of finis, ends. However, the term finis should not be read here only in the light of its topological or chronological sense; it should also be approached in its ontological register: to define means to specify the qualities of a part that make a part this part and not that part, avoiding its reduction to any ultimate material substratum. It traces an ontological contour in order to limit the part’s infinite possible variability. A limited part refers thus to a distinct part; it is determined, but not predetermined, that is, it is not determined avant la lettre. It contrasts with what is open, flexible and generic; in a context where the power of today’s supercomputation makes it possible to notate the inherent discreteness of reality, it is no longer necessary to design with simplified spatial formulas (fields) or repetitive spatial blocks (particles). Today’s computational power applied to architectural design allows an emancipation from reductive laws, whose standardisation is at the core of the material remnants of topological parts, corpuscular parts and ecological parts. Thus, rather than formulative and open parts, the unprecedented power unfolded by supercomputation lets us operate with massive sets and sub-sets of distinct parts. The limited condition of limital parts does not align with the notion of the generic, nor with derivative concepts such as flexibility, adaptability or resilience, so common in the three previous groups of architectural parts. Thus, rather than flexible, limital parts are plastic (plastiquer, plastiquage, associated in French with the notion of explosion): they vary, but at the price of gaining a new specificity and cancelling the previous one.

Third, limital parts are limitrophe, that is, they are foliated. The notion of limitrophy should be read in light of its instrumentalisation by Jacques Derrida. Rather than effacing or ignoring the limit, Derrida attempts, through his use of the term “limitrophy”, “to multiply its figures, to complicate, thicken, delinearize, fold, and divide the line precisely by making it increase and multiply.”[34] Limital parts are thus thickened, which is the literal sense of the Greek term trepho, that is, to nurture. Under this umbrella, a limitrophe part is not a solipsistic monad or a fragment referring to an absent whole. Limital parts produce inconsistent multiplicities by acquiring a foliated consistency and becoming an edgy, plural and repeatedly folded frontier. Limital parts should thus not orchestrate an abyssal and discontinuous limit: the latter does not form the single and indivisible line characteristic of modernity; rather, it produces “more than one internally divided line.”[35] Thus, limital parts grow and multiply into a plethora of edges. Precisely because of their liminal, limited and limitrophe condition, limital parts are immaterialist: they are not reducible to one, as is the case, with decreasing intensity, with topological parts, corpuscular parts and ecological parts.

Concluding Considerations

Avoiding matter’s ultimate condition requires understanding form as a spatio-temporal structure that operates at every level of scale. It demands the assumption that there is always a form beyond any given form, avoiding any continuous (field) or discrete (particle) ultimate background to which parts could be reduced. In this sense, and as Graham Harman affirms, “although what is admirable in materialism is its sense that any visible situation contains a deeper surplus able to subvert or surprise it,”[36] the kind of formalism approached here does not deny this surplus; it merely states that this surplus is also formed.

The impossibility of conjugating matter’s ultimate condition with a radical part-thinking suggests a pan-formalism based on a Matryoshka logic: a multiscalar recursivity that doesn’t rely on an ultimate and maternal underlying substratum. Under this framework, and building on the German and Russian formalist traditions later developed by figures such as Colin Rowe, Alan Colquhoun, Alexander Tzonis or Liane Lefaivre, the formalism that could emerge from these statements shouldn’t be understood in the sense that there is no excess beneath the architectural forms that are given, but rather in the sense that “the excess is itself always formed.”[37]

The constant and multiscalar presence of form and the avoidance of any ultimate substratum are posited as the two conditions that a radical part-thinking would require; they represent the only way in which the notion of part can be understood in its full radicality, that is, as an interactive and autonomous element which is not just countable (mathematically discrete) but also distinct (ontologically discrete). As we have seen, this approach is incompatible with the current understanding of matter: although matter’s revival has paradoxically imported all the attributes associated with the hylomorphic understanding of form, the re-introduction of matter’s pre-Socratic ultimate condition represents the clandestine re-introduction of the notion of whole, and therefore an insurmountable obstacle for part-thinking.

References

[1] G. Harman, “Materialism Is Not the Solution”, The Nordic Journal of Aesthetics, 47 (2014), 95.

[2] E. Prieto, La vida de la materia (Madrid: Ediciones Asimetricas, 2018), 28-102.

[3] E. Sadin, La humanidad aumentada (Buenos Aires: La Caja Negra, 2013), 152.

[4] M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge: MIT Press, 2017), 71.

[5] L. B. Alberti, De re aedificatoria (Madrid: Ediciones Asimétricas, 2012), 21.

[6] G. Semper, The Four Elements of Architecture and Other Writings (Cambridge: Cambridge University Press, 1969), 45-73.

[7] E. Prieto, La vida de la materia (Madrid: Ediciones Asimétricas, 2018), 28-102.

[8] M. Delanda, “Interview with Manuel Delanda”, New Materialism: Interviews and Cartographies, ed. Rick Dolphijn & Iris van der Tuin (London: Open Humanities Press, 2012), 9.

[9] K. Barad, “Interview with Karen Barad”, New Materialism: Interviews and Cartographies, ed. Rick Dolphijn & Iris van der Tuin (London: Open Humanities Press, 2012), 59.

[10] E. Sadin, La humanidad aumentada (Buenos Aires: La Caja Negra, 2013), 152.

[11] K. Easterling, Medium Design (Kindle Edition: Strelka Press, 2018).

[12] B. Bratton, The Stack: On Software and Sovereignty (London: The MIT Press, 2016).

[13] G. Harman, “Materialism is Not the Solution”, The Nordic Journal of Aesthetics, 47 (2014), 100.

[14] Ibid., 98.

[15] N. Oxman, “Material Ecology”, Proceedings of the 32nd Annual Conference of the Association for Computer Aided Design in Architecture ACADIA (2012), 19-20.

[16] D. Koehler, Large City Architecture: The Fourth Part (London, 2018), 19.

[17] G. Harman, “Materialism Is Not the Solution”, The Nordic Journal of Aesthetics, 47 (2014), 100.

[18] M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge: MIT Press, 2017), 71.

[19] Ibid.

[20] A. Zaera, “Nuevas topografías. La reformulación del suelo,” Otra mirada: posiciones contra crónicas, ed. M. Gausa and R. Devesa (Barcelona: Gustavo Gili, 2010), 116-17.

[21] J. M. Montaner, La modernidad superada (Barcelona: Gustavo Gili, 2011), 32.

[22] Aristotle, Physics, trans. W.A. Pickard (Cambridge: The Internet Classics Archive, 1994).

[23] P. Eisenman, “Brief Advanced Design Studio”, last modified October 2014, https://www.architecture.yale.edu/courses/advanced-design-studio-eisenman-0#_ftn3.

[24] M. Carpo, “Particlised”, Architectural Design, 89, 2 (2019), 86-93.

[25] K. Kuma, Materials, Structures, Details (Basel: Birkhäusser, 2004), 14.

[26] G. Retsin, “Bits and Pieces”, Architectural Design, 89, 2 (2019), 43.

[27] T. Morton, Hyperobjects (Minneapolis: University of Minnesota Press, 2013), 119.

[28] Ibid.

[29] Ibid., 41.

[30] K. Algra, Concepts of Space in Greek Thought (Leiden: Brill, 1995), 32.

[31] L. Bryant, The Democracy of Objects (Cambridge: MIT Press, 2017), 215.

[32] E. Trías, Los límites del Mundo (Barcelona: Ariel Filosofía, 1985), 121.

[33] G. W. F. Hegel, The Science of Logic (Cambridge: Heidelberg Writings, 1996), 249.

[34] J. Derrida, “The Animal That Therefore I Am (More to Follow)”, trans. David Wills, Critical Inquiry, 28, 2 (2002), 398.

[35] Ibid., 398.

[36] G. Harman, “Materialism Is Not the Solution”, The Nordic Journal of Aesthetics, 47 (2014), 100.

[37] Ibid.

Close-up of the guifi.net map of available nodes online. Image captured 26 September 2020. Image credit: David Rozas.
Affordances of Decentralised Technologies for Commons-based Governance of Shared Technical Infrastructure
Architecture Theory, Blockchain, Commons, Decentralisation, Mereologies, Mereology
David Rozas
Universidad Complutense de Madrid
drozas@ucm.es
Add to Issue
Read Article: 3832 Words

In this article I will illustrate affordances of decentralised technologies in the context of commons governance. My aim is to summarise the conversation around the lecture “When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance”, which I gave at the Mereologies Open Seminar organised by The Bartlett School of Architecture at University College London on 25th April 2019. I will also extend the conversation by providing a concrete example of such affordances in the context of a community network.

What is Blockchain? Three Key Concepts around Decentralised Technologies

In 2008, a paper published under the pseudonym Satoshi Nakamoto presented Bitcoin: the first cryptocurrency based purely on a peer-to-peer system.[1] For the first time, thanks to cryptography, no third parties were necessary to solve problems such as double-spending. The solution was achieved through the introduction of a data structure known as a blockchain. In simple terms, a blockchain can be understood as a distributed ledger. Distributed refers to a technical property of a system in which certain components are located on different computers connected through a network. The blockchain, in this sense, can be thought of as a “decentralised book” in which agreed transactions can be stored in a set of distributed computers. Data, such as the history of monetary exchanges generated by using cryptocurrencies, can be stored in a blockchain. The key aspect resides in the fact that there is no need to trust a third party, such as a bank server, to store that information.
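
As a rough illustration of this “decentralised book”, the following minimal sketch shows how such a ledger can be modelled as a chain of blocks in which each block commits to its predecessor through a cryptographic hash, making retroactive edits detectable. It is written in Python with invented names (`Ledger`, `add_block`); a real blockchain adds to this data structure a peer-to-peer network and a consensus mechanism through which the distributed computers agree on the next block.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """An append-only chain of blocks: a toy stand-in for a distributed ledger."""

    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(),
                   "transactions": [], "previous_hash": "0" * 64}
        self.chain = [genesis]

    def add_block(self, transactions: list) -> dict:
        previous = self.chain[-1]
        block = {"index": previous["index"] + 1,
                 "timestamp": time.time(),
                 "transactions": transactions,
                 "previous_hash": block_hash(previous)}  # commit to predecessor
        self.chain.append(block)
        return block

    def is_valid(self) -> bool:
        """Rewriting any stored block breaks every hash link after it."""
        return all(self.chain[i]["previous_hash"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add_block([{"from": "alice", "to": "bob", "amount": 5}])
ledger.add_block([{"from": "bob", "to": "carol", "amount": 2}])
assert ledger.is_valid()

ledger.chain[1]["transactions"][0]["amount"] = 500  # attempt to rewrite history
assert not ledger.is_valid()  # the tampering is detected
```

In a blockchain, every node of the network holds a copy of this chain, which is why no bank server or other third party is needed to vouch for the recorded history.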

Nakamoto’s article opened what is considered to be the first generation of blockchain technologies.[2] This generation, up to approximately 2013, includes Bitcoin and a number of cryptocurrencies that appeared after it. The second generation, approximately from 2014 onwards, is the extension of these blockchains with capabilities beyond currencies, in the form of automatic agreements or smart contracts.[3] Smart contracts can be understood as distributed applications which encode clauses that are automatically enforced and executed without the need for a central authority. They can be employed, for example, to enable the execution of code to provide certifications, such as obtaining a diploma or a registry of lands, according to previously mutually agreed rules. Again, the novel aspect here is the fact that the execution of such rules, in the form of computer instructions, is distributed across a large number of computers without the need for a central point of control.
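
To ground the idea of clauses that are automatically enforced, the toy sketch below models the diploma example as a small registry whose rules, once fixed, are applied mechanically. The names (`DiplomaRegistry`, `issue`, `verify`) are invented for the illustration; an actual smart contract would be written in a dedicated contract language and replicated across the nodes of a blockchain, so that no single administrator could bend the rules after deployment.

```python
class DiplomaRegistry:
    """Toy model of a smart contract: previously agreed clauses, mechanically enforced."""

    def __init__(self, accredited_issuers: set):
        self.accredited_issuers = accredited_issuers  # agreed before deployment
        self.diplomas = {}  # (student, degree) -> issuing institution

    def issue(self, issuer: str, student: str, degree: str) -> None:
        # Clause 1: only accredited institutions may certify.
        if issuer not in self.accredited_issuers:
            raise PermissionError(f"{issuer} is not an accredited issuer")
        # Clause 2: a certification, once recorded, cannot be overwritten.
        if (student, degree) in self.diplomas:
            raise ValueError("certification already recorded")
        self.diplomas[(student, degree)] = issuer

    def verify(self, student: str, degree: str) -> bool:
        """Anyone can check a certification without asking a central authority."""
        return (student, degree) in self.diplomas

registry = DiplomaRegistry(accredited_issuers={"UCM"})
registry.issue("UCM", "alice", "PhD")
assert registry.verify("alice", "PhD")
```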

Complex sets of smart contracts can be developed to make it possible for multiple parties to interact with each other. This has fostered the emergence of the last of the concepts I will introduce around decentralised technologies: the Decentralised Autonomous Organisation (DAO). A DAO is a self-governed organisation in which interactions between its members are mediated by the rules embedded in the DAO’s code. These rules are sets of smart contracts that encode such interactions. The rules embedded in the code are automatically enforced by the underlying technology, the blockchain, in a decentralised manner. DAOs could, for example, hire people to carry out certain tasks or compensate them for undertaking certain actions. Overall, a DAO can be understood as analogous to a legal organisation, with legal documents – bylaws – which define the rules of interaction among members. The development of DAOs has been, unsurprisingly, significantly popular around financial services.[4] However, DAOs could be used to provide a wide variety of services or to manage resources in a more diverse range of areas. A more artistic example of a DAO is the Plantoid project,[5] a sculpture of a plant which can hire artists to physically modify the sculpture itself according to the rules collectively agreed in the smart contracts encoded in it.
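
A minimal sketch of this logic, assuming a deliberately simplified majority-vote rule and invented names (`ToyDAO`, `propose`, `vote`), might look as follows: a proposal to compensate a contributor executes automatically once more than half of the members approve it, with no manager able to veto the payment afterwards.

```python
class ToyDAO:
    """Toy DAO: member interactions mediated by code rather than by managers."""

    def __init__(self, members: set, treasury: int):
        self.members = members
        self.treasury = treasury
        self.balances = {}    # payee -> compensation received so far
        self.proposals = []   # each: {"payee", "amount", "votes", "done"}

    def propose(self, payee: str, amount: int) -> int:
        self.proposals.append({"payee": payee, "amount": amount,
                               "votes": set(), "done": False})
        return len(self.proposals) - 1  # proposal id

    def vote(self, member: str, proposal_id: int) -> None:
        # Rule 1: only members may vote.
        if member not in self.members:
            raise PermissionError("only members may vote")
        proposal = self.proposals[proposal_id]
        proposal["votes"].add(member)
        # Rule 2: a strict majority triggers the payment automatically.
        if not proposal["done"] and len(proposal["votes"]) > len(self.members) / 2:
            if proposal["amount"] > self.treasury:
                raise ValueError("insufficient treasury")
            self.treasury -= proposal["amount"]
            self.balances[proposal["payee"]] = (
                self.balances.get(proposal["payee"], 0) + proposal["amount"])
            proposal["done"] = True

dao = ToyDAO(members={"ana", "bo", "chen"}, treasury=100)
pid = dao.propose(payee="artist", amount=40)
dao.vote("ana", pid)
dao.vote("bo", pid)  # majority reached: the compensation executes itself
assert dao.balances["artist"] == 40 and dao.treasury == 60
```

On a blockchain, these rules would be enforced by every node simultaneously, which is what makes the organisation “autonomous”: its bylaws run as code.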

All of these potentials of decentralised technologies represent an emerging research field. Together with other colleagues of the EU project P2PModels,[6] we are exploring some of these potentials and limitations in the context of the collaborative economy and, more precisely, in some of the models emerging around Commons-Based Peer Production.

Collaborative Economy and Commons-Based Peer Production

The collaborative economy is a growing socio-economic phenomenon in which individuals produce, exchange and consume services and goods, coordinating through online software platforms. It is an umbrella concept that encompasses different initiatives, and significantly different forms are emerging. There are models where large corporations control the platform, thus ensuring its technologies and the knowledge held therein are proprietary and closed. Uber, a ride-hailing service, and AirBnB, a short-term lettings service, are perhaps the most well-known examples of such initiatives. They differ from models that revolve around Commons-Based Peer Production (CBPP), where individuals produce public goods by dispensing with hierarchical corporate structures and cooperating with their peers.[7] In these models, participants of the community govern the assets, freely sharing and developing technologies.[8] Some of the most well-known examples of initiatives around such commons-based models are Wikipedia and GNU/Linux, a Free/Libre Open Source Software (FLOSS) operating system. Commons-based models of the collaborative economy are, however, extending to areas as broad as open science, urban commons, community networks, peer funding and open design.[9]

Three main characteristics are salient in the literature on CBPP.[10] Firstly, CBPP is marked by decentralisation, since authority resides in individual agents rather than a central organiser. Secondly, it is commons-based since CBPP communities make frequent use of common resources. These resources can be material, such as in the case of 3D printers shared in small-scale workshops known as Fab Labs; or immaterial, such as the wiki pages of Wikipedia or the source code in a FLOSS project. Thirdly, non-monetary motivations are prevalent in the community. These motivations are, however, commonly intertwined with extrinsic motivations resulting in a wide spectrum of forms of value operating in CBPP communities,[11] beyond monetary value.[12]

Guifi.net: An Example of a CBPP Community in Action

In order to extend the discussion of the affordances of decentralised technologies in CBPP, I will employ Guifi.net as an illustrative example. Guifi.net[13] is a community network: a participatory project whose goal is to create a free, open and neutral telecommunications network to provide access to the Internet. If you are reading this article online, you might be accessing it through a commercial Internet Service Provider: one of the companies which control the technical infrastructure you are using to connect to the Internet, and which manage that infrastructure as a private good. The Guifi.net project, instead, manages this infrastructure as a commons. In other words, Guifi.net is organised around a CBPP model,[14] in which the network infrastructure is governed as a common good. Over the past 16 years, participants of Guifi.net have developed communitarian rules, legal licenses, technological tools and protocols, which they constantly negotiate and implement.

I have chosen to discuss the potentialities of blockchain drawing on Guifi.net, a community network, for two main reasons. Firstly, the most relevant type of commons governed in this case is shared infrastructure, such as fibre optic and routers. The governance of rival material goods, in contrast to the commons governance of non-rival goods such as source code or wiki pages, better matches the scope of the conversations which emerged during the symposium around architecture of the commons and the role played by participatory platforms.[15] Secondly, Guifi.net provides a large and complex case of governance of shared infrastructure. The growth experienced by Guifi.net’s infrastructure and community since the first pair of nodes were connected in a rural region of Catalonia in 2004 is significant. In their study of the evolution of governance in Guifi.net, covering the period 2005–2015, Baig et al. reported a network infrastructure consisting of more than 28,500 operational nodes, covering a total length of around 50,000 km of links connected to the global Internet.[16] The latest statistics reported by Guifi.net state that there are more than 35,000 operational nodes and 63,000 km of links.[17] Beyond the infrastructure, the degree of participation in the community is also significant: more than 13,000 registered participants up to 2015, according to the aforementioned study, and more than 50,000 users of this community network connecting on a day-to-day basis, as reported by the community at present.[18] Thus, Guifi.net provides a suitable scenario for the analysis of the affordances of decentralised technologies for commons governance.

Ostrom’s Principles and Affordances of Decentralised Technologies for Commons Governance

How do communities of peers manage to successfully govern common resources? The study of the organisational aspects of how common goods might be governed was traditionally focussed on natural resources. This commons dilemma was explored by Hardin in his influential article “The Tragedy of the Commons”, whose ideas became the dominant view. In this article, Hardin argues that resources shared by individuals acting as homo economicus (out of self-interest, in order to maximise their own benefit) end up depleted. The individuals’ interests enter into conflict with those of the group and, because individuals act independently according to their short-term interests, the result of the collective action is the depletion of the commons.[19] Consequently, to avoid this unsustainable logic – “If I do not use it, someone else will” – it was considered necessary to manage these commons through either private ownership or centralised public administration.

Later, the Nobel laureate Elinor Ostrom questioned and revisited “The Tragedy of the Commons”. In her work, she showed how, under certain conditions, commons can indeed be managed in a sustainable way by local communities of peers. Her approach took into account that individual agents do not operate in isolation, nor are they driven solely by self-interest. Instead, she argued that communities communicate to build processes and rules, with different degrees of explicitation, that ensure their sustainability.[20] This hypothesis was supported by a meta-analysis of a wide range of case studies,[21] and has been confirmed in subsequent research.[22] As part of this work, she identified a set of principles for the successful management of these commons,[23] which has also subsequently been applied to the study of collaborative communities whose work is mediated by digital platforms, such as Wikipedia and FLOSS communities:[24]

1. Clearly defined community boundaries: in order to define who has rights and privileges within the community.

2. Congruence between rules and local conditions: the rules that govern behaviour or commons use in a community should be flexible and based on local conditions that may change over time. These rules should be intimately associated with the commons, rather than relying on a “one-size-fits-all” regulation.

3. Collective choice arrangements: in order to best accomplish congruence (with principle number 2), people who are affected by these rules should be able to participate in their modification, and the costs of alteration should be kept low.

4. Monitoring: some individuals within the community act as monitors of behaviour in accordance with the rules derived from collective choice arrangements, and they should be accountable to the rest of the community.

5. Graduated sanctions: community members actively monitor and sanction one another when behaviour is found to conflict with community rules. Sanctions against members who violate the rules are aligned with the perceived severity of the infraction.

6. Conflict resolution mechanisms: members of the community should have access to low-cost spaces to resolve conflicts.

7. Local enforcement of local rules: local jurisdiction to create and enforce rules should be recognised by higher authorities.

8. Multiple layers of nested enterprises: by forming multiple nested layers of organisation, communities can address issues that affect resource management differently at both broader and local levels.

What kind of affordances do decentralised technologies offer in the context of commons governance and, more concretely, with regards to Ostrom’s principles? Together with other colleagues,[25] we have identified six potential affordances to be further explored. 

Firstly, tokenisation. This refers to the process of transforming the rights to perform an action on an asset into a transferable data element (named token) on the blockchain. For example, tokens can be employed to provide authorisation to access a certain shared resource. Tokens may also be used to represent equity, decision-making power, property ownership or labour certificates.[26]
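A minimal sketch of this idea, assuming a simple in-memory registry rather than a real blockchain, and with all names hypothetical: a token is a transferable data element whose holder gains the right to perform an action on a shared asset.

```python
class AccessToken:
    """A transferable right to use a shared resource."""
    def __init__(self, resource: str, holder: str):
        self.resource, self.holder = resource, holder

    def transfer(self, new_holder: str) -> None:
        # On a blockchain this would be a recorded, verifiable transaction.
        self.holder = new_holder

def may_use(token: AccessToken, agent: str, resource: str) -> bool:
    """Authorisation check: only the current holder may act."""
    return token.holder == agent and token.resource == resource

token = AccessToken(resource="community-router-7", holder="node-operator-a")
assert may_use(token, "node-operator-a", "community-router-7")
token.transfer("node-operator-b")   # the right itself changes hands
assert may_use(token, "node-operator-b", "community-router-7")
```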

Secondly, self-enforcement and formalisation of rules. These affordances refer to the process of embedding organisational rules in the form of smart contracts. As a result, there is an affordance for the self-enforcement of communitarian rules, such as those which regulate monitoring and graduated sanctions, as reflected in Ostrom’s principles 4 and 5. This encoding of rules also implies a formalisation, since blockchain technologies require these rules to be defined in ways that are unambiguously understood by machines. In other words, the inherent process of explicitation of rules related to the use of distributed technologies also provides opportunities to make these rules more available and visible for discussion, as noted in Ostrom’s principle 2.
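For instance, purely as an illustration of the degree of explicitation such encoding demands, graduated sanctions (principle 5) might be formalised as a function mapping a member's record of infractions to an escalating penalty; the scale below is invented for the example:

```python
def graduated_sanction(prior_infractions: int) -> str:
    """Ostrom's principle 5 made unambiguous for a machine:
    sanctions escalate with repeated infractions (illustrative scale)."""
    scale = ["warning", "temporary suspension", "exclusion"]
    return scale[min(prior_infractions, len(scale) - 1)]

assert graduated_sanction(0) == "warning"
assert graduated_sanction(1) == "temporary suspension"
assert graduated_sanction(5) == "exclusion"
```

Writing the rule down this precisely is exactly the formalisation the paragraph above describes: the community must agree on the scale before it can be encoded, which in turn makes the rule visible and discussable.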

Thirdly, autonomous automatisation: the process of defining complex sets of smart contracts which may be set up in such a way as to make it possible for multiple parties to interact with each other without human interaction. This is analogous to software communicating with other software today, but in a decentralised manner. DAOs are an example of autonomous automatisation as they could be self-sufficient to a certain extent. For instance, they could charge users for their services.[27]

Fourthly, decentralised technologies offer an affordance for the decentralisation of power over the infrastructure. In other words, they can facilitate processes of communalising the ownership and control of the technological artefacts employed by the community. They do this through the decentralisation of the infrastructure they rely on, such as collaboration platforms employed for coordination.

Fifthly, transparency: the opening up of organisational processes and their associated data, relying on the persistency and immutability properties of blockchain technologies.

Finally, decentralised technologies can facilitate processes of codification of a certain degree of trust into systems which facilitate agreements between agents without requiring a third party. Figure 1 below provides a summary of the relationships between Elinor Ostrom’s principles and the aforementioned affordances.[28]

Figure 1 – Summary of the relationships between the affordances of blockchain technologies for commons governance and Ostrom’s principles (Ostrom, 1990), as identified by Rozas et al., 2018.

These congruences allow us to describe the impact that blockchain technologies could have on governance processes in these communities. These decentralised technologies could facilitate coordination, help to scale up commons governance, or even help to share agreements and different forms of value amongst various communities in interoperable ways, as shown by Pazaitis et al.[29] An example of how such affordances might be explored in the context of CBPP can be found in community networks such as Guifi.net.

A DAO for Commons Governance of Shared Technical Infrastructure

Would it be possible to build a DAO that might help to coordinate collaboration and scale up cooperative practices, in line with Ostrom’s principles, in a community network such as Guifi.net? First of all, we need to identify the relationship between Ostrom’s principles and Guifi.net. We can find, indeed, a wide exploration of this relationship in the work of Baig et al.,[30] who document in detail how Guifi.net governs the infrastructure as a commons drawing on these principles, and provide a detailed analysis of the different components of the commons governance of the shared infrastructure in Guifi.net. Secondly, we need to define an initial point of analysis, and tentative interventions, in the form of one of the components of this form of commons governance. From all of these components, I will place the focus of analysis on the economic compensation system. The reason for selecting this system is twofold. Firstly, it reflects the complexity behind commons governance and thus allows us to illustrate the aforementioned principles in greater depth. Secondly, it is an illustrative example of the potential of blockchain, as we shall see, to automatise and scale up various cooperative processes.

The economic compensation system of Guifi.net was designed as a mechanism to compensate for imbalances in the use of the shared infrastructure. Professional operators, for example, are requested to declare their expenditures on, and investments in, the network. In alignment with Ostrom’s principle number 4, the use, expenditure and investments of operators are monitored, in this case by the most formal institution which has emerged in Guifi.net: the Guifi.net Foundation. The Foundation is a legal organisation with the goal of protecting the shared infrastructure and monitoring compliance with the rules agreed by the members of the community. The community boundaries, as in Ostrom’s principle number 1, are clearly defined and include several stakeholders.[31] Different degrees of commitment to the commons were defined as collective choice arrangements (principle number 3). These rules are, however, open to discussion through periodic meetings organised regionally, and adapted to the local conditions, in congruence with principle number 2. If any participant, such as an operator, misuses the resources or does not fulfil the principles, the individual is subject to graduated sanctions,[32] in alignment with principle number 5. As part of the compensation system, compensation meetups are organised locally to handle conflict resolution, in congruence with principle 6. Principles 6 and 7 are also clearly reflected in the evolution of the governance of Guifi.net, although they are more closely associated with scalability.[33]

The compensation DAO could be formed by a set of local DAOs, whose rules are defined and modifiable exclusively by participants holding a token which demonstrates that they belong to the respective node. These local DAOs could be deployed from templates, and could be modified at any point as a result of a discussion at the aforementioned periodic meetings held by local nodes, in congruence with the local conditions. Among the rules of the smart contracts composing these DAOs, participants may decide to define the different factors that are considered when discussing the local compensation system arrangements, as well as graduated sanctions in case of misuse of the common goods. These rules might be copied and adapted by other nodes, facilitating the extension of the collaborative practices.

Some of the settings of these local DAOs could be dependent on a federal compensation DAO that defines general aspects. A mapping of the current logic could consist of reaching a certain degree of consensus between the participants in all of the nodes, but having this process approved by the members of the Foundation, who would hold a specific token. Examples of general aspects regulated by the federal DAO are the levels of commitment towards the commons of each operator, which are currently evaluated and monitored manually by the Foundation. General aspects such as this could be automatised in several ways, thereby moving from manual assignations by the Foundation, as is currently the case, to automatically assigned tokens depending on the communitarian activities tracked in the platform. This is an example of a possible intervention to automatise certain collaborative practices while assuming the current structure. Figure 2 below provides an overview of a preliminary design of a DAO for a compensation system mapping the current logics.

Figure 2 – A proposal for a simple compensation DAO. The green arrows represent the extension of practices between local DAOs, including new nodes such as number 5. Black arrows represent the interactions between the local DAOs and the federal DAO, in congruence with Ostrom’s principle 8. Image credit: Rozas et al., 2018.
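To make this mapping more tangible, here is a deliberately simplified sketch, with all names, levels and weightings hypothetical, of local compensation DAOs deployed from a shared template and a federal DAO whose general settings they inherit, echoing the nested layers of Ostrom's principle 8:

```python
from dataclasses import dataclass, field

@dataclass
class FederalDAO:
    """General aspects agreed across all nodes, e.g. commitment levels
    (values here are invented for illustration)."""
    commitment_levels: dict = field(default_factory=lambda: {
        "volunteer": 0, "professional-operator": 2,
    })

@dataclass
class LocalDAO:
    """A node's compensation rules, deployed from a template and
    modifiable only by holders of that node's membership token."""
    federal: FederalDAO
    members: set = field(default_factory=set)
    cost_share: dict = field(default_factory=dict)

    def declare_expenditure(self, member: str, amount: int) -> None:
        if member not in self.members:
            raise PermissionError("no membership token for this node")
        # Weight the declared amount by the federally agreed commitment
        # level (here simply assumed to be a professional operator).
        level = self.federal.commitment_levels["professional-operator"]
        self.cost_share[member] = (
            self.cost_share.get(member, 0) + amount * (1 + level)
        )

federal = FederalDAO()
node_4 = LocalDAO(federal=federal, members={"operator-x"})
node_4.declare_expenditure("operator-x", 100)
print(node_4.cost_share)   # {'operator-x': 300}
```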

More disruptive tentative interventions could consist of implementing more horizontal governance logics which allow modification of the rules at a federal level, or of transforming the rules that regulate monitoring. These interventions, however, should be carefully co-designed together with those who participate in the day-to-day life of these collectives. Our approach holds that the development of decentralised tools which support commons governance should be undertaken as a gradual process of constructing situated technology, with an awareness of the cultural context and with the aim of incorporating particular social practices into the design of these decentralised tools.

This basic example of a DAO illustrates, on the one hand, the relationship with Ostrom’s principles: monitoring mechanisms, local collective choice arrangements, graduated sanctions and clear boundaries. These principles are sustained by the aforementioned affordances of blockchain for commons governance: tokenisation, to define who has the ability to participate in the choices locally and at a federal level and how, as well as to certify the level of commitment to the commons; monitoring of expenditures and reimbursements through the transparency provided by the blockchain; and self-enforcement, formalisation and automatisation of the communitarian rules in the form of smart contracts. Another, more general, example is the increase in the degree of decentralisation of power over the platform, owing to the inherently decentralised properties of the technology itself. This could result in a partial shift of power over the platform from the Foundation towards the different nodes formed by the participants. Furthermore, as discussed, the fact that such rules are encoded in the form of configurations of smart contracts could facilitate the extension of practices and the development of new nodes, or even the deployment of alternative networks capable of operating like the former network, reusing and adapting the encoded rules of the community while still using the shared infrastructure. Overall, further research into the role of decentralised technologies in commons governance offers, in this respect, a promising field of experimentation and exploration of the potential scalability of cooperative dynamics.

Discussion and Concluding Remarks

In this article I provided an overview and discussed an example of the affordances of blockchain technologies for commons governance. Concretely, I described such potentialities drawing on the example of a DAO to automatise some of the collaborative processes surrounding the compensation system of a community network: Guifi.net. Through this example, I aimed to illustrate, in more detail, the affordances of blockchain for commons governance which I presented during the symposium, and to show how blockchain may facilitate the extension and scaling up of the cooperative practices of commons governance. Further work, more closely related to the architectural field, could explore the discussed affordances with discrete design approaches that provide participatory frameworks for collective production.[34] In this respect, decentralised technologies offer opportunities to tackle challenges such as those identified by Sánchez:[35] to define ways of allocating ownership, authorship and distribution of value without falling into extractivist practices.

A better understanding of the capabilities of blockchain technologies for commons governance will require, however, further empirical research. Examples of research questions which need to be addressed are those regarding the boundaries of the discussed affordances. For example, with regards to tokenisation and the formalisation of rules: which aspects should remain on or off the blockchain, or indeed in or out of code altogether?

Overall, CBPP communities provide radically differing values and practices when compared with those in markets. In this respect, the study of the potentialities and limitations of blockchain technologies in the context of the governance of CBPP communities offers an inspiring opportunity to take further steps on a research journey that has only just begun.

References

[1] S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System” (2008).

[2] M. Swan, Blockchain: Blueprint for a New Economy (Sebastopol, CA, USA: O’Reilly, 2015).

[3] N. Szabo, “Formalizing and Securing Relationships on Public Networks,” First Monday, 2, 9 (1997).

[4] See, for example, https://digix.global: a cryptocurrency backed by bars of gold in which the governance is mediated by a DAO, last accessed on 24th July 2019.

[5] See http://www.okhaos.com/plantoids/, last accessed on 24th July 2019.

[6] See https://p2pmodels.eu, last accessed on 2nd July 2019.

[7] Y. Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (2006); M. Bauwens, “The Political Economy of Peer Production,” CTheory, 1, 12 (2005).

[8] M. Fuster-Morell, J. L. Salcedo, and M. Berlinguer, “Debate About the Concept of Value in Commons-Based Peer Production,” Internet Science (2016); M. Bauwens and A. Pantazis, “The Ecosystem of Commons-Based Peer Production and Its Transformative Dynamics,” The Sociological Review, 66, 2 (2018), 302–19.

[9] V. Kostakis and M. Papachristou, “Commons-Based Peer Production and Digital Fabrication: The Case of a RepRap-Based, Lego-Built 3D Printing-Milling Machine” (2013); V. Niaros, V. Kostakis, and W. Drechsler, “Making (in) the Smart City: The Emergence of Makerspaces,” Telematics and Informatics (2017).

[10] A. Arvidsson, A. Caliandro, A. Cossu, M. Deka, A. Gandini, V. Luise, and G. Anselm, “Commons Based Peer Production in the Information Economy,” P2PValue (2016).

[11] C. Cheshire, and J. Antin, “The Social Psychological Effects of Feedback on the Production of Internet Information Pools,” Journal of Computer-Mediated Communication, 13, 1 (2008).

[12] M. Fuster-Morell, J. L. Salcedo, and M. Berlinguer, “Debate About the Concept of Value in Commons-Based Peer Production,” Internet Science (2016).

[13] See https://guifi.net, last accessed on 30th June 2019.

[14] R. Baig, R. Roca, F. Freitag, and L. Navarro, “Guifi.net, a Crowdsourced Network Infrastructure Held in Common,” Computer Networks: The International Journal of Computer and Telecommunications Networking, 90 (2015).

[15] J. Sánchez, “Architecture for the Commons: Participatory Systems in the Age of Platforms,” Architectural Design, 89, 2 (2019).

[16] R. Baig, R. Roca, F. Freitag, and L. Navarro. “Guifi.net, a Crowdsourced Network Infrastructure Held in Common,” Computer Networks: The International Journal of Computer and Telecommunications Networking, 90 (2015).

[17] Guifi.net, “Node Statistics” (2019).

[18] Ibid.

[19] G. Hardin, “The Tragedy of the Commons. The Population Problem Has No Technical Solution; It Requires a Fundamental Extension in Morality,” Science, 162, 3859 (1968), 1243–48.

[20] E. Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge University Press, 1990).

[21] Ibid.

[22] E. Ostrom, “Understanding Institutional Diversity” (2009); M. Cox, G. Arnold, and S. Villamayor Tomás, “A Review of Design Principles for Community-Based Natural Resource Management” (2010).

[23] E. Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge University Press, 1990), 88–102.

[24] F. B. Viégas, M. Wattenberg, and M. M. McKeon, “The Hidden Order of Wikipedia,” OCSC'07: Proceedings of the 2nd International Conference on Online Communities and Social Computing (2007).

[25] D. Rozas, A. Tenorio-Fornés, S. Díaz-Molina, and S. Hassan, “When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance,” SSRN Electronic Journal (2018), 8–20.

[26] S. Huckle and M. White, “Socialism and the Blockchain,” Future Internet, 8, 4 (2016), 49.

[27] P. De Filippi and S. Hassan, “Blockchain Technology as a Regulatory Technology: From Code Is Law to Law Is Code,” First Monday, 21, 12 (2016).

[28] D. Rozas, A. Tenorio-Fornés, S. Díaz-Molina, and S. Hassan, “When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance,” SSRN Electronic Journal (2018), 21–22.

[29] A. Pazaitis, P. De Filippi, and V. Kostakis, “Blockchain and Value Systems in the Sharing Economy: The Illustrative Case of Backfeed,” Technological Forecasting and Social Change, 125 (2017), 105–15.

[30] R. Baig, R. Roca, F. Freitag, and L. Navarro. “Guifi.net, a Crowdsourced Network Infrastructure Held in Common,” Computer Networks: The International Journal of Computer and Telecommunications Networking, 90 (2015).

[31] Ibid.

[32] Ibid.

[33] See Baig et al. (2015) for further details.

[34] J. Sánchez, “Architecture for the Commons: Participatory Systems in the Age of Platforms,” Architectural Design, 89, 2 (2019).

[35] Ibid.

View into a codividual interiority. Physical Model, Comata at the BPro Show 2019. Image: Comata, Anthony Alvidrez, Shivang Bansal, and Hao-Chen Huang, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2019.
Towards a Sympoietic Architecture: Codividual Sympoiesis as an Architectural Model
Architecture, Autopoesis, City Architecture, Computational Design, Mereologies, Mereology, Urban Design
Shivang Bansal
University College London
shivang.bansal.18@alumni.ucl.ac.uk

“…the rigour of the architecture is concealed beneath the cunning arrangement of the disordered violences…”[1] 

This essay investigates the potential of codividual sympoiesis as a mode of thinking overlapping ecological concepts with economics, contemporary philosophy, advanced research in computation and digital architecture. By extending Donna Haraway’s argument of “tentacular thinking” into architecture, it lays emphasis on a self-organising and sympoietic approach to architecture. Shifting focus from an object-oriented thinking to parts, it uses mereology, the study of part-hoods and compositions, as a methodology to understand a building as being composed of parts. 

It interrogates the limits of autopoiesis as a system and conceptualises a new architectural computing system embracing spatial codividuality and sympoiesis as a necessity for an adaptive and networked existence through continued complex interactions among its components. It propagates codividual sympoiesis as a model for continuous discrete computation and automata, relevant in the present times of distributed and shared economies.

A notion of fusing parts is established to scale up the concept and to analyse the assemblages created over a steady sympoietic computational process, guided by mereology and the discrete model. This gives rise to new conceptions of space, with a multitude of situations offered by the system at any given instant. These sympoietic inter-relations between the parts can be used to steadily produce new relations and spatial knottings, going beyond the most limiting aspect of autopoiesis: its tendency merely to reproduce similar patterns of relations.

Tentacular Thinking

This essay extends the conceptual idea of tentacular thinking,[2] propagated by Donna Haraway, into architecture. Tentacular thinking, as Haraway explains, is an ecological concept which is a metaphorical explanation for a nonlinear, multiple, networked existence. It elaborates on the biological idea that “we are not singular beings, but limbs in a complex, multi-species network of entwined ways of existing.” Haraway, being an ecological thinker, leads this notion of tentacular thinking to the idea of poiesis, meaning the process of growth or creation, and brings into discussion several ecological organisational concepts based on self-organisation and collective organisation, namely autopoiesis and sympoiesis. The essay propagates the notion that architecture can evolve and change within itself, be sympoietic rather than autopoietic, and be more connected and intertwined.

With the advent of distributed and participatory technologies, tentacularity offers a completely new formal thinking, one in which there is a shift away from the object and towards the autonomy of parts. This shift towards part-thinking raises the question of how a building can be understood not as a whole, but on the basis of the inter-relationships between its composing parts. It can be understood as a mereological shift from global compositions to part-hoods and fusions triggering compositions.

A departure from the more simplified whole-oriented thinking, tentacular thinking comes about as a new perspective, an alternative to traditional ideologies and thinking processes. In the present economic and societal context, within a decentralised, autonomous and more transparent organisational framework, stakeholders function like multiple players forming a cat’s cradle, a phenomenon which could be understood as sympoietic. With the increase in direct exchange, and especially with the rise of blockchain and of distributed platforms such as Airbnb and Uber, such participatory concepts push architecture towards new typologies and real estate models such as co-living and co-working spaces.

Fusion of Parts: Codividuality

In considering share-abilities and cooperative interactions between parts, the notions of a fusing part and a fused part emerge, giving rise to a multitude of possibilities spatially. Fusing parts fuse together to form a fused part which, at the same stage, behaves as another fusing part to perform more fusions with other fusing parts to form larger fused parts. The overlaps and the various assemblages of these parts gain relevance here, and this is what codividuality is all about.

As Haraway explains, it begins to matter “what relations relate relations.”[3] Codividual comes about as a spatial condition that offers cooperative co-living, co-working and co-existing conditions. In the mereological sense, codividuality is about how fusing parts can combine to form a fused part which, in turn, can combine to form a larger fused part, and so on. Conceptually, codividuality looks into an alternative method for the forming and fusing of spatial parts, thereby evolving a fusion of collectivist and individualist ideologies. It evolves as a form of architecture created from the interactions and fusion of different types of spaces, resulting in a more connected and integrated environment. It offers the opportunity to develop new computing systems within architecture, allowing architectural systems to organise with automaton logic and behave as a sympoietic system. It calls for a rethinking of automata and computation.

Figure 1 – Computational experiments in Tentacular Thinking. Image: Anthony Alvidrez, Shivang Bansal and Haochen Huang, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2019.

Codividual can be perceived as a spatial condition allowing for spatial connectivities and, in the mereological sense, as a part composed of parts; a part and its parts. What is crucial is the nature of the organisation of these parts. An understanding of the meaning and history of the organisational concepts of autopoiesis and sympoiesis brings out this nature.

Autopoiesis: Towards Assemblages of Parts

The concept of autopoiesis stems from biology. A neologism introduced by Humberto Maturana and Francisco Varela in 1980, autopoiesis highlights the self-producing nature of living systems. Maturana and Varela defined an autopoietic system as one that “continuously generates and specifies its own organisation through its operation as a system of production of its own components.”[4] A union of the Greek terms autos, meaning “self”, and poiesis, meaning “creation” or “production”, autopoiesis came about as an answer to questions cropping up in the biological sciences pertaining to the organisation of living organisms. Autopoiesis was an attempt to resolve the confusion between biological processes that depend on history, such as evolution and ontogenesis, and those that are independent of history, like individual organisation. It questioned the organisation of living systems which made them a whole.

Varela et al. pointed to autonomy as the characteristic phenomenon arising from an autopoietic organisation; one that is a product of a recursive operation.[5] They described an autopoietic organisation as a unity; as a system with an inherently invariant organisation. Autopoietic organisation can be understood as a circular organisation; as a system that is self-referential and closed. Jerome McGann, on the basis of his interpretation of Varela et al., described an autopoietic system as a “closed topological space, continuously generating and specifying its own organisation through its operation as a system of production of its own components, doing it in an endless turnover of components”.[6]

What must be noted here is that the computational concept of self-reproducing automata is classically based on an understanding of a cell and its relation to the environment. This is akin to the conceptual premise of autopoiesis, the recursive interaction between the structure and its environment,[7] which forms the system. Both concepts start with a biological understanding of systems and then extend it, and a direct link can be observed between the work of von Neumann and that of Maturana and Varela. Automata, therefore, can be seen as an autopoietic system.

The sociologist Niklas Luhmann took this concept forward into the domain of social systems. The theoretical basis for his social systems theory is that all social events depend on systems of communication. Delving into the history of social or societal differentiation, Luhmann observes that the organisation of societies is based on functional differentiation. A “functionally differentiated society”, as he explains, comprises varying parallel functional systems that co-evolve as autonomous discourses. He discovers that each of these systems, through its own specific medium, evolves over time, following what Luhmann calls “self-descriptions”, bringing out a sense of autonomy in the respective system.[8]

Following Maturana and Varela’s explanation, an autopoietic organisation may be viewed as a composite unity, where the boundary is formed by internal, preferential neighbourhood interactions rather than by external forces. It is this attribute of self-referential closure that Luhmann adopts in his framework. This closure maintains the social systems within and against an environment, culminating in order out of chaos.

The Limits of Autopoietic Thinking

Beth Dempster, in contradiction to Maturana and Varela’s proposition of autopoiesis, proposed a new concept for self-organising systems. She argues that heuristics based on the analogy of living systems are often incongruous and lead to misleading interpretations of complex systems. Moreover, autopoietic systems tend to be homeostatic and development-oriented in nature.[9] Being self-producing autonomous units “with self-defined spatial or temporal boundaries”,[10] autopoietic systems show a centralised control system and are consequently efficient. At the same time, such systems tend to develop patterns and become foreseeable. It is this development-oriented, predictable and bounded nature of autopoietic systems that poses a problem when such systems are scaled up.

Autopoietic systems follow a dynamic process that allows them to continually reproduce a similar pattern of relations between their components. This is also true for the case of automata. Moreover, autopoietic systems produce their own boundaries. This is the most limiting aspect of these concepts.

Autopoietic systems do not instigate the autonomy of parts, as they evolve on a prescribed logic. Instead, a more interesting proposition is one in which the interacting parts instigate a kind of feedback mechanism within the parts, leading to a response that triggers another feedback mechanism, and so on. Mario Carpo’s argument that in the domain of the digital, every consumer can be a producer, and that the state of permanent interactive variability offers endless possibilities for aggregating the judgement of many,[11] becomes relevant at this juncture. What holds true in the context of autopoiesis is Carpo’s argument that fluctuations decrease only at an infinitely large scale, when the relations converge ideally into one design.

In the sympoietic context, however, this state of permanent interactive variability that Carpo describes is an offer of the digital to incorporate endless externalised inputs.[12] This is where the need for sympoiesis comes in. Sympoiesis maintains a form of equilibrium or moderation all along but, at the same time, remains open to change. The permanent interactive variability not only offers a multitude of situations but also remains flexible.

Sympoiesis

The limits of autopoietic thinking form the basis for Dempster’s argument. In contradistinction to autopoiesis, she proposes a new concept that theorises an “interpretation of ecosystems”, which she calls sympoietic systems. Literally, sympoiesis means “collective creation or organisation”. A neologism introduced by Dempster, the term explains the nature of living systems. Etymologically, it stems from the Ancient Greek terms “σύν (sún, “together” or “collective”)” and “ποίησις (poíesis, “creation, production”)”. As Dempster explains, these are “collectively producing systems, boundaryless systems.”[13]

Sympoietic systems are boundary-less systems set apart from the autopoietic by “collective, amorphous qualities”. Sympoietic systems do not follow a linear trajectory and do not have any particular state. They are homeorhetic, i.e., dynamical systems that return to a trajectory rather than to a particular state.[14] Such systems are evolution-oriented in nature and have the potential for surprising change. As a result of the dynamic and complex interactions among components, these systems are capable of self-organisation. Sympoietic systems, as Donna Haraway points out, decentralise “control and information”,[15] which get distributed over the components.

Sympoiesis can be understood simply as an act of “making-with”.[16] The notion of sympoiesis gains importance in the context of ecological thinking. Donna Haraway points out that no system can reproduce or make itself on its own, and that therefore nothing is really absolutely autopoietic or self-organising. Sympoiesis reflects the notion of “complex, dynamic, responsive, situated, historical systems.” As Haraway explains, “sympoiesis enlarges and displaces autopoiesis and all other self-forming and self-sustaining system fantasies.”[17]

Haraway describes sympoietic arrangements as “ecological assemblages”.[18] In the purview of architecture, sympoiesis brings out the notion of an architectural assemblage growing over sympoietic arrangements. Though sympoiesis is an ecological concept, what begins to work in the context of architecture is that the parts do not have to be strictly bounded; they aim to think in multiplicity and develop ethics and synergies among each other. In sympoietic systems, components strive to create synergies through cooperation and feedback mechanisms. It is the linkages between the components that take centre stage in a sympoietic system, not the boundaries. Extrapolating the notion of sympoiesis into the realm of architecture, these assemblages can be conceived, in Haraway’s words, as “poly-spatial knottings”, held together “contingently and dynamically” in “complex patternings”.[19] What become critical are the intersections, overlaps, and areas of contact between the parts.

Sympoietic systems strategically occupy a niche between allopoiesis and autopoiesis, the two concepts proposed by Maturana and Varela. The three systems are differentiated by various degrees of organisational closure. Maturana and Varela elaborate on a binary notion of organisationally open and closed systems. Sympoiesis, as Dempster explains, steps in as a system that depends on external sources but, at the same time, limits these inputs in a “self-determined manner”. It is neither closed nor open; it is “organisationally ajar”.[20] However, these systems must be understood as only idealised sketches of particular scenarios. No system in reality should be expected to adhere strictly to these descriptions; rather, real systems lie on a continuum with the two idealised situations as its extremes.

It is this argument that is critical. In the context of architecture and urban design, what potentially fits is a hybrid model that lies on the continuum between autopoiesis and sympoiesis. While autopoiesis can guide the arrangement or growth of the system at the macro level, sympoiesis must step in to trigger a feedback, or circular, mechanism within the system that responds to externalities. What can be envisaged is therefore a system wherein the autopoietic power constantly attempts to optimise the system towards forming a boundary, while the sympoietic power simultaneously pushes it towards a more networked, decentralised growth and existence, the two powers together driving the system towards an equilibrium.

Towards Poly-Spatial Knottings

In sympoiesis, parts do not precede parts. There is nothing like an initial situation or a final situation. Parts begin to make each other through “semiotic material involution out of the beings of previous such entanglements”[21] or fused situations. In order to define codividuality and to identify differences, an understanding of how precedents may be classified is important. The first move is a simple shift from object-oriented thinking to parts-oriented thinking. Buildings are classified as having a dividual, individual or codividual character from the point of view of structure, navigation and program.

Codividual is a spatial condition that promotes shared spatial connections, internally or externally, essentially portraying parts composed of parts, which behave as one fused part or multiple fused parts. The fused situations fulfil the condition for codividuality as the groupings form a new inseparable part – one that is no longer understood as two parts, but as one part, which is open to fuse with another part.

Fused Compositions

Delving into architectural history, one can see very few past attempts by architects and urban designers at spatial integration by sympoietic means. However, a sympoietic drive can be seen in the work of the urban planner Sir Patrick Geddes. He was against the grid-iron plan for cities and practised an approach of “conservative surgery”, which involved a detailed understanding of the existing physical, social and symbolic landscapes of a site. For instance, in the plan for the city of Tel Aviv (1925–1929), Geddes stitches together the various nodes of the existing town, akin to assemblages, to form urban situations like boulevards, thereby activating those nodes and the connecting paths.

Fumihiko Maki and Masato Ohtaka identify three broad collective forms, namely compositional form, megastructures and group forms. Maki underscores the importance of linkages and emphasises the need for making “comprehensible links” between discrete elements in urban design. He further explains that the urban is made from a combination of discrete forms and articulated large forms and is therefore a collective form; “linking and disclosing linkage (articulation of the large entity)”[22] are consequently of primary importance in the making of the collective form. He classifies these linkages into operational categories on the basis of how they perform between the interacting parts.

Building upon Maki’s and Ohtaka’s theory of “collective form”, it is useful to appreciate that architectural theory has tended to treat the building as a separate entity, with a consequent “inadequacy of spatial language to make meaningful urban environment.”[23] Sympoiesis comes out through this notion of understanding the urban environment as an interactive fabric between the building and the context. Maki and Ohtaka also make the important observation that the evolution of architectural theory has been restricted to the building, and describe collective forms as a concept which goes beyond it. Collective forms can have a sympoietic or an autopoietic nature, determined by their organisational principles. Sympoietic collective forms not only go beyond the building, but also weave a fabric of interaction with the context. Although a number of modern cases exist, most traditional examples of collective forms evolved into collective forms over time, albeit unintentionally.

Figure 2 – Sympoietic urban fusion in the Uffizi corridor by Giorgio Vasari. Image: Shivang Bansal, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2018-19.

The Corridor by Giorgio Vasari

An important case of an early endeavour in designing a collective form at an urban scale is the Corridoio Vasariano by Giorgio Vasari in Florence, built in 1564. It can be understood as a spatial continuum that connects the numerous important buildings, or nodes, within the city through a built corridor, resulting in a collective form. According to Michael Dennis, Vasari’s Corridor, in its absolute sense, is a Renaissance “insert” into the “fundamentally medieval fabric of central Florence”.[24] As Dennis writes in The Uffizi: Museum as Urban Design (1980),

“…Each building has its own identity and internal logic but is also simultaneously a fragment of a larger urban organisation; thus each is both complete and incomplete. And though a given building may be a type, it is always deformed, never a pure type. Neither pure object nor pure texture, it has characteristics of both – an ambiguous building that was, and still is, multifunctional…”[25]

Dennis’s description of the design of Vasari’s Corridor brings out the notion of the spatial fusion of buildings as parts. The Corridor succeeds as an urban insert for two main reasons. Firstly, it maintains the existing conditions and successfully acclimatises to the context in which it is placed. Secondly, it simultaneously functions on several varying scales, from that of the individual using the Corridor to the larger scale of the fabric through which it passes. Vasari’s Corridor is a sympoietic urban fusion – one that is a culmination of the effect of local conditions.

Stan Allen, in contrast to compositions, presents a completely inverted concept for urban agglomerations. His concept of field configurations reflects a bottom-up phenomenon. In his view, the design must necessarily reflect the “complex and dynamic behaviours of architecture’s users”.[26] Through sympoiesis, the internal interactions of parts become decisive: they become the design drivers, while the overall formation remains fluid, a result of the interactions between the internal parts.

Figure 3 – Poly-spatial knottings composed of parts. Image: Anthony Alvidrez, Shivang Bansal and Haochen Huang, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2019.

Towards a Sympoietic Architecture

Another important aspect that forms a basis for the sympoietic argument is the relevance of information in systems. While Maturana and Varela explain that information must be irrelevant to self-producing systems, since it is an extrinsically defined quantity, Dempster lays great emphasis on the relevance of information in sympoietic systems. Her explanation is that information potentially carries a message or a meaning for a recipient. Information, therefore, is dependent on the context and the recipient, and, as Stafford Beer hints, it is also “observer dependent”.[27]

In the architectural domain, this signifies that information or external data input holds no relevance in an autopoietic system. The system grows purely on the basis of the encoded logic and part-to-part organisational relations, unrestricted and free from any possible input. In the sympoietic paradigm, however, information or data gains relevance as a continuous flux that activates the system and guides its organisation. This relates to the concept of reinforcement learning, wherein the system learns by heuristics to evolve by adapting to changing conditions, and by also producing new ones, albeit with an inherent bias.

The Economic Offer of the Codividual

From an economic lens, the concept of sympoiesis does not exist at the moment. However, with the rise of participatory processes within the economy and the advent of blockchain, it shows immense potential in architecture. Elinor Ostrom’s work on the role of commons in decision-making influences the work of David Rozas, who researches a model of blockchain-based commons governance. He envisages a system which is decentralised, autonomous, distributed and transparent; a more democratic system where each individual plays his or her own role.[28] This idea is about bringing a more sympoietic kind of drive to blockchain. Sympoietic systems are based on a model akin to a commons-oriented or blockchain-based economy, functioning like a cat’s cradle with its multiple interdependent stakeholders. And as Jose Sánchez points out, it is the power of the discrete, interdependent system that makes this architecture possible. According to him, it offers a “participatory framework for collective production”.[29]

Figure 4 – Fused parthoods over sympoietic interactions. Physical model Comata, Anthony Alvidrez, Shivang Bansal and Haochen Huang, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2019. Image: Rasa Navasaityte.

The fusion of parts leads to the creation of parts such that the sum of the parts becomes greater than the whole. A codividual sympoietic model can potentially resolve the housing crisis since it flips the economic model to a bottom-up approach. With tokenisation, autonomous automatisation, decentralisation of power and transparency, this blockchain-based codividual model can compete with traditional real estate models, thereby resulting in more equitable and fair-minded forms of housing. As Lohry and Bodell point out, such models can reduce personal risk and also make livelihoods more economical and “community-oriented”.[30] 

Conclusion

The ecological framework of the concept of poiesis, as already outlined, is based on growth from the organisation of elements. In the context of autopoiesis and sympoiesis, it can be observed that “part-to-part” and even “part-to-whole” conditions gain significant relevance. An appreciation of these conditions is therefore necessary to understand such notions. The idea of components, as described by Dempster and Haraway in the purview of sympoiesis, and by Jerome McGann in the autopoietic context, could be extended to architecture in the form of part-thinking.

However, a mereological approach begins with existing entities or “sympoietic interactions” and proceeds further with a description of their clusters, groupings and collectives. Through codividual sympoiesis, the whole gets distributed all over the parts.[31] In this system, the discreteness of parts is never just discrete. It goes beyond the participating entities and the environment. In line with Daniel Koehler’s argument, the autonomy of the part ceases to be defined just as a self-contained object. It goes beyond it and begins to be defined “around a ratio of a reality, a point of view, a filter or a perspective”[32].

Sympoiesis evolves out of competitive or cooperative interactions of parts. As in ecology, these parts act as symbionts to each other, in diverse kinds of relationalities and with varying degrees of openness to attachments and assemblages with other fusing parts, depending on the number of embedded brains and the potential connectors. Traditionally, architecture is parasitic: when the aesthetic or the overall form drives the architecture, architectural elements act as hosts for other architectural elements to attach to, depending on the composition. In sympoiesis, there is no host and no parasite. It inverts the ideology of modernism, beginning not with a composition but evolving one, of “webbed patterns of situated and dynamic dilemmas”, over symbiotic interaction. Furthermore, increasingly complex levels of quasi-individuality of parts come out of this process of codividual sympoiesis. It gives the outlook of a collective while still retaining the identity of the individual. It can simply be called multi-species architecture, or becoming-with architecture.

Figure 5 – Sympoietic Assemblages of Parts. Physical model Comata, Anthony Alvidrez, Shivang Bansal and Haochen Huang, RC17, MArch Urban Design, The Bartlett School of Architecture, UCL, 2019. Image: Rasa Navasaityte.

Talking of transdisciplinary ecologies and architecture, we can foresee string figures tying together human and nonhuman ecologies, architecture, technologies, sustainability, and more. This also gives rise to a notion of ecological fusion of spatial conditions such as daylight and ventilation, in addition to physical fusion of parts. Codividual sympoiesis, thus, even shows potential for a nested codividual situation, in that the parts sympoietically fuse over different spatial functions.

Going over sympoiesis and mereology, it makes sense to look for parts which fuse to evolve fused parts; to look for architecture through which architecture is evolved; to look for a codividuality with which another codividuality is evolved. From a mereological point of view, a system in which an external condition overlaps with an internal part in the search for another component, giving rise to a new spatial condition over the fusion of parts, could be understood as codividual sympoiesis. Codividual sympoiesis is therefore about computing a polyphony, not orchestrating a cacophony.

References

[1] M. Foucault, Madness and Civilization (New York: Random House US, 1980).

[2] D. Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham: Duke University Press,  2016), 30–57.

[3] Ibid, 35.

[4] H. R. Maturana and F. G. Varela, Autopoiesis And Cognition (Dordrecht, Holland: D. Reidel Pub. Co., 1980).

[5] H. R. Maturana, F. G. Varela, and R. Uribe, "Autopoiesis: The Organization Of Living Systems, Its Characterization And A Model," Biosystems, 5, 4, (1974), 187–196.

[6] J. McGann, A New Republic of Letters (Cambridge, Massachusetts: Harvard University Press, 2014).

[7] A. W. Burks, Von Neumann's Self-Reproducing Automata; Technical Report (Ann Arbor: The University of Michigan, 1969).

[8] N. Luhmann, Art as a Social System (Stanford: Stanford University Press, 2000), 232.

[9] B. Dempster, Sympoietic and Autopoietic Systems : A new distinction for self-organizing systems (Waterloo: School of Planning, University of Waterloo, 1998).

[10] Ibid, 9.

[11] M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, Massachusetts: MIT Press, 2017), 131–44.

[12] Ibid, 12.

[13] B. Dempster, Sympoietic and Autopoietic Systems : A new distinction for self-organizing systems (Waterloo: School of Planning, University of Waterloo, 1998).

[14] Ibid.

[15] D. Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham: Duke University Press,  2016), 33.

[16] Ibid, 5.

[17] Ibid, 125.

[18] Ibid, 58.

[19] Ibid, 60.

[20] B. Dempster, Sympoietic and Autopoietic Systems : A new distinction for self-organizing systems (Waterloo: School of Planning, University of Waterloo, 1998).

[21] D. Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham: Duke University Press, 2016), 60.

[22] F. Maki, and M. Ohtaka, Investigations in Collective Form (St. Louis: School of Architecture, Washington University, 1964), 3-17.

[23] Ibid.

[24] M. Dennis, "The Uffizi: Museum As Urban Design", Perspecta, 16, 62 (1980), 72.

[25] Ibid, 63.

[26] S. Allen, "From Object to Field,” Architectural Design, AD 67, 5-6 (1997), 24–31.

[27] S. Beer, “Preface,” Autopoiesis: The Organization of the Living, auth. H. R. Maturana and F. Varela (Dordrecht, Holland: D. Reidel Publishing Company, 1980).

[28] D. Rozas, “When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance” (2019), https://davidrozas.cc/presentations/when-ostrom-meets-blockchain-exploring-potentials-blockchain-commons-governance-1, last accessed 3 May 2019.

[29] J. Sánchez, “Architecture for the Commons: Participatory Systems in the Age of Platforms,” Architectural Design, 89, 2 (2019), 22–29.

[30] M. Lohry and B. Bodell, "Blockchain Enabled Co-Housing" (2015), https://medium.com/@MatthewLohry/blockchain-enabled-co-housing-de48e4f2b441, last accessed 3 May 2019.

[31] D. Koehler, “Mereological Thinking: Figuring Realities within Urban Form,” Architectural Design, 89, 2 (2019), 30–37.

[32] Ibid.

TARSS. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.
The Ultimate Parts: A Mereological Approach of Form Under the Notion of Object-Oriented Ontology
Architecture, Architecture Theory, City Architecture, Form, Mereologies, Mereology, Urban Design
Ziming He
University College London
ucqbzm1@ucl.ac.uk
Add to Issue
Read Article: 3666 Words

Mereology is a formal concept which enters architecture as an additional formal category. Form is a rather ambiguous concept in architecture, so this essay begins by investigating it through two closely related concepts: shape and content.

Hans Tursack criticises the problem of shape for its shallow formalism and historical-theoretical indifference, a defensive strategy that evades the disciplines and difficulties of past and future.[1] The distinction between the terms “form” and “shape”, following Tursack’s argument, is a “matter of generative process”. Both terms point to the production of visual expression. Yet while shape refers to the appearance of an object, form reflects the logic of transformation and operation within historical and theoretical contexts such as political and religious ideology, economics and technological background. Tursack criticised the strategy of shape in architecture, stating that its lack of reference is “plainly, and painfully, evident”,[2] and that it is incapable of moving forward. Form, by contrast, is difficult and disciplinary, requiring historical and theoretical study, and yet it promises a future.

Form has the advantage of being able to deal with complex relations due to its deep and continuously evolving interaction with content. The term form derives from the Latin word forma and is understood as the combination of two Greek words: eidos, the conceptual form, and morphe, the physical form. The complexity of form can be attributed to these differentiated meanings, yet it is this complexity that makes form compatible with agencies and relations, as a brief historical review can show.

Ancient Greek architecture pursued ideality in mathematics and proportion. The efforts made by the architects of the Parthenon provide evidence of this pursuit: their operations tried to approximate the physical shape of the building to the “ideal” form. Form, in this period, reflects the pursuit of ideality and perfection.

Gothic architecture was more concerned with structure, and matter was pushed to its maximum capability to build as tall as possible for religious appeal. Structures were designed to be rigid and lightweight: solid walls were replaced by glass windows, while flying buttresses supported the main structure as it grew ever taller. Consequently, astonishing space and fascinating transparency emerged.

Modernism claims that “form follows function”,[3] rejecting traditional architectural styles. The reality of matter and the logic of technology eschewed decoration, proportion, or any subjective distortion of matter. The emphasis on the term “function” illustrates an ideology of treating architecture as a machine. Each part is nothing more than a component performing a certain function inside this machine, and redundant decorations and details are removed to deliver this idea clearly. Without distractions, space becomes evident.

In the shift to postmodernism, the uniformity and lack of variety of modernist architecture provoked a reaction, and a great variety of approaches emerged to overcome its shortcomings. Parametricism, for instance, has been propelled by thriving digital technologies. Designers are capable of ever more complex formal production, and architectural elements have become variables that can be interdependently manipulated. In this formalism, rigidity, isolation and separation are opposed, while softness, malleability, differentiation and continuity are praised.

From the examples above, form is the embodiment of the relations between architecture and its motives in specific historical scenarios, while with shape only the results are accounted for: relations are ignored, and architecture is treated as a set of isolated physical entities, incapable of producing new relations. Different methodologies of dealing with architectural form also imply variations of ideology in combining form with content.

Mereology – An Approach to Architectural Form

In recent philosophical texts, a third notion of form is brought forward. Contrary to a dialectic of form and content, these investigations deal with the resonance of parts: the description of objects by their ontological entanglement only. The writings of the philosopher Tristan Garcia are a strong example of such mereological considerations. In his treatise Form and Object: A Treatise on Things (2014), Garcia investigates the ontology of objects with two initial questions: “… what is everything composed of? … what do all things compose?”[4] The first question interrogates the internal, the elementary component of everything; the second interrogates the external, the totality of everything. For Garcia, the form of a thing is “the absence of the thing, its opposite, its very condition”.[5] Form has two senses: the “beginning” and the “end”, which never ends. Form begins when a thing ends, and it begins with different forms; in the end, since it has an “endless end”, form ultimately merges into one, which is “the world”. Garcia defines an object as “a thing limited by other things and conditioned by one or several things.”[6] The form of an object depends on what comprehends or limits this object. Every object is “embedded in a membership relation with one or several things”;[7] objects can be divided by defining limits, and a limit is itself a thing distinguishing one thing from another. Garcia’s argument adapts the concept of mereology. Form has two extremes: one toward the fundamental element of matter, the other toward the world, comprehending everything. All things can always be divided into an infinite number of parts, and they can always be parts of another thing. Identifying parts or wholes establishes a limit, a section within which we can operate. This correspondence between form and mereology opens a new opportunity to inspect architectural form from a different point of view.

One of the first discussions of parts and wholes in modern philosophy was posed by Edmund Husserl in the Logical Investigations (1st ed. 1900–1901, 2nd ed. 1913),[8] but the term “mereology” was not put forward until Stanisław Leśniewski coined it in 1927 from the Greek word méros (part).[9] Mereology is considered an alternative to set theory. A crucial distinction between the two lies in the fact that set theory concerns the relations between a class and its elements, while mereology describes the relations between entities. The mathematical axioms of mereology will be used here as the foundation for developing a method of analysing architectural form.

Figure 1 – Diagrams for Mereological Relation in Mathematics, Ziming He, 2019. Image credit: Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2019.

Following Roberto Casati and Achim Varzi, the four fundamental mathematical formularisations of mereology are: “Relations are reflexive, antisymmetric and transitive. (…) First, everything is part of itself. Second, two different objects cannot be part of each other. Third, each part of a part of a whole is also part of that whole. Fourth, an object can be a part of another object, if both exist.”[10] 
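Read alongside Casati and Varzi’s formulation, the four axioms admit a compact first-order rendering. The following is a minimal sketch in standard notation, where P(x, y) reads “x is part of y” and E(x) reads “x exists”; both predicate symbols are notational conveniences introduced here, not the authors’ own.

```latex
% Ground mereology: a first-order sketch of the four axioms quoted above.
% P(x,y): "x is part of y";  E(x): "x exists" (assumed notation).
\begin{align*}
&P(x,x) && \text{reflexivity: everything is part of itself}\\
&P(x,y) \land P(y,x) \rightarrow x = y && \text{antisymmetry: two distinct objects are never parts of each other}\\
&P(x,y) \land P(y,z) \rightarrow P(x,z) && \text{transitivity: a part of a part of a whole is part of that whole}\\
&P(x,y) \rightarrow E(x) \land E(y) && \text{parthood presupposes that both objects exist}
\end{align*}
```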

Mereology can also be a promising approach for the reading of architectural form, as it emphasises relationships without reducing buildings to their appearance or function. However, such philosophical descriptions treat wholes and parts as mostly abstract figures; a supplement must therefore be developed to properly categorise mereological relations in the field of architecture. With the relations between form and mereology addressed, methodologies can be developed for the analysis of architectural form. Mereology as a specific methodology for architecture is quite new. One of the first introductions can be found in Daniel Koehler’s book The Mereological City: A Reading of the Works of Ludwig Hilberseimer (2016). Here, Koehler departs from the modern city, using the work of Ludwig Hilberseimer to illustrate mereological relations in the modernist city. From the room to the house to the city to the region, Hilberseimer canonically drew the city as a hierarchical, nested stack of cellular spaces.[11] Through a close reading of its mereological relations, however, it becomes clear that political, economic and social conditions are entangled in a circular composition between the parts of the city. Recalling Garcia’s discourse, and resonating with Leon Battista Alberti’s thesis, Koehler shows that the cells in Hilberseimer’s modernist city are interlocked. A house becomes the whole for rooms; a city becomes the whole for houses. By considering the city and its individual buildings equally, “the whole is a part for the part as a whole.”[12]

Architectural Relations Between Parts and Wholes

Parts are not only grouped, packed and nested through different scales, but also in different relations. Specific relationships have been developed in different architectural epochs and styles. Mathematically, four general classes of relations can be drawn: whole-to-whole, part-to-part, whole-to-parts and parts-to-whole, while more specific subclasses can be discovered from each. 

According to the mathematical definition, complex relations exist between wholes: a whole can exist on any mereological level, and the complexity of relations across multiple levels must also be accounted for. Whole-to-whole relations become complex when multi-layer interaction is considered, and further relations can be identified: juxtapose, overlap, contain, undercrossing, transitivity, partition, trans-boundary, intact juxtapose, compromised juxtapose.

Figure 2 – Whole-to-whole relations. Image credit: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.

A first glance at New York gives the impression that it is quite heterogeneous, but a city grid underlies the heterogeneity, and the relations displayed in the grid are rather simple: all wholes juxtapose with one another. In comparison, the urban space of Siena, an Italian city, is quite complex: the boundaries of all wholes negotiate with one another, the gaps in between are carefully treated, the nesting relations are extremely rich, and multiple relations from the diagram above can be found.

Figure 3 – New York. Image: Jonathan Riley.
Figure 4 – Siena. Image: Cristina Gottardi.

The whole-to-parts relation studies what the whole does to its parts, namely in terms of top-down rules. The mathematical definition does not address the specific situations in which a whole-part condition holds, and distinctions within individual contexts make a significant difference in clarifying an explicit relation. The situations for the whole can generally be classified into the following types: fuse, fit and combine.

Figure 5 – Whole-to-part relations. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.

One of Zaha Hadid’s projects, the Heydar Aliyev Centre, exemplifies the fusing relation. The architecture is presented as a smooth, fluid volume. The distinction between elements disappears, and this dominating power even extends to the external landscape. In order to maintain a continuous whole, parts are fabricated into particular shapes and assigned unique, unchangeable locations. The continuous whole overwhelms the parts. Yet not all parts are reshaped to fuse into the whole: where parts are small enough in relation to the whole, the control exerted by the whole is weakened, and the parts merely fit into it.

The third type is combining. An example of this relation is Palladio’s Villa Rotonda. In this case, the parts are evident, and the whole is a composition of the parts’ identities. However, the whole also holds a strong framework: a rigorous geometric rule that decides the positions and characters of the parts. The arrangement of parts is the embodiment of this framework.

Figure 6 – Heydar Aliyev Centre, designed by Zaha Hadid Architects. Image: Orxan Musayev.
Figure 7 – Diagram of fitting relation. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.
Figure 8 – Façade of Villa Rotonda. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.

The parts-to-whole relation studies what the parts do to the whole, or the power of bottom-up relationships. The different situations of the parts are also key parameters in validating a given relation. The classification of situations for parts is as follows: frame, intrinsic frame, extrinsic frame, bounded alliance, unbounded alliance.

Figure 9 – Part-to-whole relations. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.

Emil Kaufmann thoroughly investigated the innovative works of Claude-Nicolas Ledoux in Three Revolutionary Architects: Boullée, Ledoux and Lequeu (1952).[13] According to Kaufmann’s study, Ledoux’s works developed new compositional relations of elements out of the Baroque. Baroque architecture has richly characterised parts, but it tends to regulate the identities of all the elementary parts and fuse them together to serve the harmony of the whole, presenting an intrinsic framing. Ledoux’s work presents an extrinsic framing, in which the parts are relatively independent: each element maintains its own properties and, while constituting the whole, can be replaced with other identical components.

One of my projects in discrete aggregation of elements presents an unbounded alliance relation. The aggregation as a whole shows a form that is discretised (Figure 12), and does not pass any top-down instructions to its parts.

Figure 10 – Façade of the Church of the Gesù. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.
Figure 11 – Façade of Château de Mauperthuis. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.

Figure 12 – Discrete aggregation. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.

Part-to-Part Without Whole – The Ultimate Parts

In part-to-part relations, local interactions are emphasised, and interactions occur at multiple levels of composition, where part-to-part relations are in some cases similar to those between wholes. The following classifications apply: juxtapose, interrelate, contain, partition, overlap, trans-juxtapose, over-juxtapose, over-partition, over-overlap.

Figure 13 – Part-to-part relations. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.

Architects have been working on the possibility of removing the whole by studying part-to-part relations, and several approaches have been developed, mainly through computation. Neil Leach considers the city as a “swarm intelligence”,[14] bringing forward the potential of developing urban form with computational methods. Leach favours swarm intelligence for the interactions between agents (parts), which “offers behavioral translations of topology and geometry”,[15] whereas fractals, L-systems and cellular automata are all constrained by some limitation. Yet although swarm intelligence is based on the interaction of individual agents, the swarm is always treated as a whole; likewise, all cells of a cellular automaton are fixed to a background grid, which is also a whole. Fractals and L-systems can be subdivided into an infinite number of parts, but a transcendent whole, from which all the parts grow, still exists. In the mereological sense, none of these cases escapes the shadow of the whole – strictly speaking, they are part-to-whole relations. To discuss the part-to-part relation in more depth, more investigation is needed to clarify the concept of part.
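To make the critique concrete, consider a one-dimensional cellular automaton. The short sketch below (illustrative Python, standard library only; the choice of Rule 30 is arbitrary) shows that however local the interactions, every cell’s neighbourhood is defined by indices into a background grid, so the grid persists as a transcendent whole behind the parts.

```python
# Illustrative sketch: an elementary cellular automaton (Rule 30).
# However local the update rule, each cell's neighbourhood is defined
# by indices into a background grid -- the grid persists as a whole.

def step(grid: list[int]) -> list[int]:
    n = len(grid)
    # Rule 30: new cell = left XOR (centre OR right), wrapping at the edges.
    return [grid[(i - 1) % n] ^ (grid[i] | grid[(i + 1) % n]) for i in range(n)]

state = [0] * 21
state[10] = 1  # a single live cell in the middle
for _ in range(5):
    state = step(state)
print(state)
```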

In The Democracy of Objects (2011), Levi Bryant claims that objects constitute a larger object by establishing relations with others, but that this does not alter the existence of objects: as he says, “all objects equally exist, but not all objects exist equally.” In Bryant’s discourse, this independence suggests the dissolution of the whole. Bryant proposes the concept of “regimes of attraction”, which includes the “endo-relation” and the “exo-relation”. The endo-relation indicates that the proper being of an object consists of its powers, or what an object can do, not the “qualities” emerging within an exo-relation. An object possesses “volcanic powers”; the stabilisation of a regime of attraction actualises them into a specific state.[16] The concept of the whole reduces objects to this state, which displays only a section of their proper being. The concept of regimes of attraction works against this reduction.

The regime of attraction can be linked to Manuel DeLanda’s notion of “assemblage”; however, there is a distinction between the two. Assemblage holds only relations of exteriority, whereas the regime of attraction maintains both relations of interiority and relations of exteriority. In Assemblage Theory (2016), DeLanda reassembled the concept of “assemblage”, which originated from the French agencement. Coined by Gilles Deleuze and Félix Guattari, the original term carries two meanings: the “action of matching or fitting together a set of components” – the process – and the “result of such an action” – the product.

DeLanda emphasised two aspects: heterogeneity and relations. As he indicated, the “contrast between filiations and alliances”[17] can be described in other words as intrinsic and extrinsic relations.

The nature of these relations influences the components differently. The intrinsic relation tends to define the identities of all the parts and fix them into exact locations, while the extrinsic relation connects the parts in exteriority, without interfering with their identities. DeLanda summarised four characteristics of assemblage: 1) individuality – an assemblage is an individual entity, whatever its scale or number of components; 2) heterogeneity – the components of an assemblage are always heterogeneous; 3) composability – assemblages can be composed into further assemblages; 4) bilateral interactivity – an assemblage emerges from the interactions of its parts, but it also passes influences back onto those parts.[18]

DeLanda then moved on to the two parameters of assemblage. The first parameter is directed toward the whole: the “degree of territorialisation and deterritorialisation”, meaning how much the whole “homogenises” its component parts. The second parameter is directed toward the parts: the “degree of coding and decoding”, meaning how much the identities of the parts are fixed by the rules of the whole. The concept of assemblage provides a new lens for investigating these mereological relations. With this model, the heterogeneity and particularity of parts are fully respected. Wholes become immanent, individual entities, existing “alongside the parts in the same ontological plane”,[19] while parts in a whole are included in it but do not belong to it; following Bryant’s discourse, this absence of belonging dispels the existence of the whole.[20]
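As a thought experiment, DeLanda’s two parameters can be held in a small data structure. The sketch below is hypothetical Python, with class and attribute names invented for illustration; it shows only how heterogeneous components, composability and the two scalar degrees might sit together.

```python
# A minimal, hypothetical sketch of DeLanda's assemblage parameters.
from dataclasses import dataclass, field

@dataclass
class Assemblage:
    name: str
    components: list = field(default_factory=list)  # heterogeneous parts; may themselves be Assemblages
    territorialisation: float = 0.5  # 0..1: how much the whole homogenises its parts
    coding: float = 0.5              # 0..1: how much part identities are fixed by the whole's rules

    def compose(self, *parts):
        # Composability: an assemblage can absorb other assemblages as components.
        self.components.extend(parts)
        return self

# Strongly territorialised and coded (cf. the Villa Rotonda's geometric rule) ...
villa = Assemblage("Villa Rotonda", territorialisation=0.9, coding=0.9)
# ... versus weakly territorialised and coded (cf. the discrete aggregation).
aggregation = Assemblage("Discrete aggregation", territorialisation=0.1, coding=0.1)
```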

From the study of the regime of attraction and assemblage, this essay proposes a new concept – “the ultimate parts” – in which a proper “part-to-part without whole” is embedded. A part (P) interacts horizontally with its neighbouring parts (Pn) and with the parts of neighbouring parts (Pnp), as well as downwardly with the parts that compose it (Pp) and upwardly with the wholes it constitutes, which are themselves also parts (Pw). This concept significantly increases the initiative of parts and decreases the limitations and reductions imposed on them. It does not deny the utility of the whole, but considers the whole as another independent entity, another part. It is neither top-down nor bottom-up, but projects all relations from a hierarchical structure onto a comprehensively flattened structure. The ultimate parts concept provides a new perspective for observing relations between objects from a higher dimension.
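Read as a data structure, the flattened field might look like the sketch below: hypothetical Python, with names invented for illustration rather than taken from the project. Every entity, wholes included, is a Part, and the diagram’s Pn, Pnp, Pp and Pw relations become links on a single ontological plane.

```python
# A hypothetical sketch of "the ultimate parts": every entity is a Part,
# wholes included, and all relations live on one flattened plane.
class Part:
    def __init__(self, name: str):
        self.name = name
        self.neighbours = set()  # Pn: horizontal, peer-to-peer links
        self.sub_parts = set()   # Pp: the parts that compose this part
        self.wholes = set()      # Pw: wholes this part constitutes (also Parts)

    def link(self, other: "Part"):
        # Horizontal interaction is symmetric: no host, no parasite.
        self.neighbours.add(other)
        other.neighbours.add(self)

    def compose(self, whole: "Part"):
        # "Upward" relation: the whole is just another Part, not a container class.
        whole.sub_parts.add(self)
        self.wholes.add(whole)

    def neighbour_parts(self):
        # Pnp: parts of neighbouring parts, reached without any whole mediating.
        return {p for n in self.neighbours for p in n.sub_parts}

room_a, room_b, house = Part("room A"), Part("room B"), Part("house")
room_a.link(room_b)    # Pn
room_a.compose(house)  # Pw from the room's side, Pp from the house's side
```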

Figure 14 – Diagram of “The Ultimate Parts”. Image: Ziming He, Living Architecture Lab, RC3, MArch Architectural Design, The Bartlett School of Architecture, UCL, 2018.

One application of this concept is TARSS (Tensegrity Adaptive Robotic Structure System), my research project in MArch Architectural Design in B-Pro at The Bartlett School of Architecture in 2017–2018. This project utilises the features of tensegrity structures: rigidity, flexibility and light weight. The difference is that rather than fixing parts into a static posture and eliminating their movements, the project contrarily tries to increase the fre