Issue 03
06/08/2022
ISSN 2634-8578
Curated By:
Francesca Coman
Contents:
Various scenes recreated from Svalbard, used as augmented reality triggers in playing out multi-linear narratives of current and projected scenarios.
Introduction to Issue 03: Climate F(r)ictions
03/08/2022
Climate F(r)ictions, curator's note
Deborah Lopez Lobato, Haden Charbel

d.lobato@ucl.ac.uk

The effects of climate change have become increasingly apparent, with implications across multiple geographical scales and regions. Read as ecological and environmental transformations, accelerated transitional states are unfolding consequences and prompting responses within social, political, economic, human and non-human spheres alike. For instance, the term “cli-migration” was coined by an Alaskan human rights lawyer in 2008 to describe the permanent forced relocation of communities due to climate change. That same year, Ecuador introduced articles 10 and 71–74 into its constitution, defining the “Rights of Nature” and establishing the means for their legal and practical application. 

While climate change can be described as a “hyper-object” whose effects are generally conceived to exist at a scale that far surpasses one’s capacity to grasp it, its causes are grounded in the accumulation of various actions linked with extractivist and capitalist logics, resulting in a positive feedback loop – more resource extraction leads to more consumption, and vice versa. Architecture is indeed one facet among an ecosystem of production- and consumer-based economies that has inextricably linked resources to commodities. Further to this, territorialising technologies and mediums (such as satellite imagery and land surveys) are now coupled with artificial intelligence – machine learning, optimisation algorithms and sensory devices – increasing the efficiency of every stage of the supply chain, from prospecting to extraction and transport. It would seem that technology’s inevitable end is colonisation.  

This, however, has in turn drawn the attention of some to investigate alternative modes of land and resource management, such as Traditional Ecological Knowledge (TEK), which offer perspectives and methods based on indigenous groups’ locally developed practices. Meanwhile, contemporary trends in circular economies have begun questioning and testing the viability of re-utilising materials and rethinking logistical processes. Parallel to this, relatively recent technological trends that are predicated on decentralised protocols such as blockchain inherently possess political ideologies whilst exhibiting practical implications. Although technology tends to be presented as generic, the aforementioned hints at the possibility, and perhaps the inevitability, of interlacing and encoding ethics.  

Can technologies be designed and utilised without falling into territorialising tropes? Can AI be used to challenge current production-based economies? What are ways of subverting existing power structures? What decisions would nature make if it could govern itself? What kinds of technologies, protocols and policies can afford such autonomy? How would this affect architectural production, design and habitation, at individual, urban and larger ecological scales?  

This issue aims to put in dialogue the works and thoughts of different practitioners and researchers who, while distinct, share proximities when read through the lens of our current climate regime.

The Contributions 

Departing from the classical notions of landscape and wilderness, Marantha Dawkins and Bradley Cantrell reframe the Earth’s future through the promise and limitations of data, turning instead to embrace and actively engage with the uncertainties of Earth’s increasing unpredictability.  

On the notion of data, Catherine Griffiths explores the critical notion of “data situatedness”, stripping data of its supposedly neutral status as information and instead exploring how and where we see data, as much as how and where data sees.  

Moving into the ground, Andrew Toland revisits the epistemological underpinnings of “land” and the consequential perceptions of it; weaving a thread through social, legal and design practices, uncovering precedent limitations and strides, hinting that the extent of nature’s rights could be found a little deeper. 

Between the digital and the material, Theo Dounas explores the practical and ecological implications of blockchain technology in architecture, reconsidering design not from the perspective of a building, but rather a non-extractive and circular economy. 

Turning to the virtual, Damjan Jovanovic questions new modes of imaging through worldmaking, whereby games and simulations offer the possibility of interacting across multiple scales through dynamic and complex systems.  

Questioning how our futures might unfold, what might inhabit them, and how they might be experienced, Andrew Witt creates an observatory; a place hosting possible realities from the scale of newly evolved plant and animal life up to the scale of the Earth as a geo-dynamic system. 

The remaining contributions extend this constellation, some taking position through theoretical frameworks, and others as projective projects. 

Climate F(r)ictions proposes a turn away from dichotomies and binary thinking, and instead straddles the lines of our realities and imaginations, interconnecting technologies, ecologies, law and worlds, giving multi-scalar agency to humans and non-humans alike – it operates in the speculative realms of the plausible and the probable. 

Crypto: towards a New Political Economy in Architecture 
Blockchain, Crypto, Cryptography, Deconstruction, Odysseus, peer economies, Political Economy
Theodore Dounas

t.dounas@rgu.ac.uk

The paper presents a “primitives” approach to understanding the computational design enabled by blockchain technologies, as a new political economy for the architecture discipline. The paper’s motivation lies in exploring the challenges that exist for architects to understand blockchain, evidenced through the author’s multiple prototypes,[1,2,3,4] discussions, workshops and code writing with students and colleagues, but also in the fragmentation of the Architecture-Engineering-Construction (AEC) industry and the impermanence that computational design enhances in architecture.[5] These challenges, while situated within the confines of the discipline of computational design and architecture, are defined and affected by the challenges that exist within the wider AEC industry and its extractive relationship with the physical environment.  

Methodologically, the paper is a philosophical and semantic exploration of the meaning of architecture in a decentralised context, considering its uncoupled nature with respect to signs and design, and it sets a direction in which architectural practice needs to move: from an extractive to a non-extractive, or circular, nature. 

Blockchain: peer economies, trust and immutability, transparency, incentives for participation, and entropy 

A blockchain is a distributed computer network in which each node holds a copy of a distributed ledger of values.[6] Computationally, a blockchain acts both as a state machine able to execute smart contracts[7] – software code that is the equivalent of an automatic vending machine – and as a continuous, immutable chain built out of discrete blocks of information, each of which contains a cryptographic hash of the previous block. Each block contains a series of transactions, or changes to the distributed ledger, which in the discipline of architectural design can be a series of design-synthesis actions, executed in a bottom-up fashion and encoded into a block. At regular time intervals, the blockchain network, through an incentivised participation system, selects the next block to be written to the ledger/chain. Due to their nature, public, permissionless blockchains act as a medium of trust (“trust machines”) between agents that are not necessarily in concert or known to one another; they are resilient, in the sense that losing a large part of the network does not destroy the blockchain; they are immutable, because one cannot go back and delete information – by design, each block’s cryptographic hash is embedded into the next, creating an immutable chain; and they operate through cryptoeconomic incentives, i.e., economic mechanisms that incentivise, not always monetarily, behaviour that maintains or improves the system itself. Economically, a blockchain is a decentralised trust-machine that enables the creation of peer-to-peer economies via smart contracts, tokens and their computer protocols.[8] 
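The hash-linking described above can be sketched in a few lines of Python. The toy ledger below records bottom-up design actions as blocks and shows why embedding each block’s hash into the next makes the record immutable; it deliberately omits the network, consensus and incentive layers of a real blockchain, and all names are illustrative:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class DesignLedger:
    """Toy hash-linked ledger of design actions (no network, no consensus)."""
    def __init__(self):
        genesis = {"index": 0, "actions": [], "prev_hash": "0" * 64}
        self.chain = [genesis]

    def add_block(self, actions):
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "actions": actions,              # e.g. bottom-up design-synthesis steps
            "prev_hash": block_hash(prev),   # embeds the previous block's hash
        }
        self.chain.append(block)
        return block

    def is_valid(self) -> bool:
        """Tampering with any past block breaks every later prev_hash link."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = DesignLedger()
ledger.add_block(["extrude slab", "offset facade"])
ledger.add_block(["subdivide grid"])
assert ledger.is_valid()
ledger.chain[1]["actions"].append("delete slab")  # attempt to rewrite history
assert not ledger.is_valid()                      # the broken link is detected
```

The last two lines show the property the text relies on: once a block is written, altering it invalidates every subsequent link, which is what makes the record a trustworthy, append-only history of design actions.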

The first blockchain, the one invented in the bitcoin whitepaper,[9] was designed as a replacement for centrally managed financial institutions. As such, blockchains, when public and permissionless, act as a medium of decentralisation, i.e., a channel to engage with where one does not need permission or approval beyond the limits and rules of the computer code that runs the blockchain.  

Blockchains encompass cryptography and its semantic discipline, immutability and entropy of information, continuity but also discreteness of information, and trust. Given their decentralised nature, at first sight there seems little affinity between blockchains and architecture, the act of designing and building. In the following similes, however, I develop the parallels between architecture and blockchain, employing ideas from western and eastern literature. 

Applications that show promise within the blockchain space, and that are distinctive compared to similar or competing automation technologies, are the creation of tokens, both fungible and non-fungible,[10,11] the formation of Decentralised Autonomous Organisations (DAOs), i.e., organisations that operate through the blockchain medium, and applications of decentralised finance. All of these are built through smart contracts, along with additional layers for interfaces and connectors between the blockchain and its external environment. Since the blockchain is an immutable record, it becomes even more important to ensure that data passing through and being recorded on the blockchain is of high quality or truthfulness. To ensure this, the concept of an oracle is introduced. Oracles are trustworthy entities, operating exterior to a blockchain, made trustworthy through both incentives and disincentives, with the responsibility of feeding data into blockchains. Parallel to blockchains remain distributed filesystems, used for storing files, rather than data, in a decentralised manner. One such filesystem is the Interplanetary File System (IPFS),[12] which operates via content addressing rather than location addressing: within IPFS we look for “what”, rather than “where” as we do on the world wide web. Content on IPFS is cryptographically signed with a hash that makes the content unique and allows it to be found. For example, the following file from Blender has the IPFS hash:

Figure 1: Blender File on particle generation (IPFS hash : QmSCGBzHoeBYwSyHZeBVRN Pc3f3T5LkLaEq75AnynFkf6f).
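The difference between content addressing and location addressing can be illustrated with a minimal sketch. This is a toy in-memory store, not the IPFS API; real IPFS wraps the SHA-256 digest in a multihash and base58-encodes it, which is why CIDv0 identifiers such as the one above begin with “Qm”:

```python
import hashlib

# Toy content-addressed store: files are keyed by the hash of their bytes,
# so we ask for "what" (the digest) rather than "where" (a path or URL).
store = {}

def add(content: bytes) -> str:
    key = hashlib.sha256(content).hexdigest()
    store[key] = content
    return key

def get(key: str) -> bytes:
    content = store[key]
    # the address doubles as an integrity check: content verifies itself
    assert hashlib.sha256(content).hexdigest() == key
    return content

cid = add(b"Blender particle-generation file bytes")
assert get(cid) == b"Blender particle-generation file bytes"
# identical content always yields the identical address:
assert add(b"Blender particle-generation file bytes") == cid
```

The last assertion captures the property the text describes: the address is derived from the content itself, so the same file always resolves to the same identifier, wherever it is stored.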

Architecture as Cryptography 

Odysseus 

To explore the idea of blockchain as an infrastructure layer for architectural design, we will introduce Odysseus (Ulysses),[13] a much-discussed hero and anti-hero of many turns or tricks (polytropos),[14] whose myth as a craftsman is solidified by architecture in the closing narration of the Odyssey. Inventiveness and the particular craft skills attributed to the character are compelling reasons to use him as a vehicle for creating parallels between blockchain and architectural design. 

Odysseus participated in the Trojan War, and was the key hero responsible for the Trojan Horse and the demise of Troy. His quest for “nostos”, i.e. returning home, is documented in the second Homeric epic, the Odyssey, which describes the voyage of Odysseus to Ithaca after the war, his ship and crew passing through a multitude of trials and challenges imposed by Poseidon, in a voyage that takes about ten years. His crew and ship are lost but he is saved, and manages to return to the island of Ithaca.[13,14] Upon his return, he must face a final challenge. 

The olive tree bed 

During his absence of more than 20 years, his wife Penelope has been under pressure from the local aristocracy to re-marry, as Odysseus is considered lost at sea. Local aristocrats have converged at the palace and are in competition to marry Penelope. She has prudently deflected the pressure by saying that she will choose one of the aristocrats, the “Mnesteres”, after she finishes her textile weaving – which she delays by weaving during the day and unmaking her work during the night. The day comes, however, when Odysseus arrives unrecognised at Ithaca, and is warned upon arrival that not all is as one would expect. At the same time, the Mnesteres, or suitors, have forced Penelope to set a final challenge to select the best of them. The challenge is to string the large bow that Odysseus had carved and made tensile, and to shoot an arrow through the hanging hoops of a series of large battle axes. None but Odysseus himself had been able to tense the bow since he first crafted and used it, thus posing a formidable technical challenge. 

Odysseus enters the palace incognito, as a pig herder, and also lays claim to the challenge, in concert with his son Telemachus. Penelope reacts at the prospect that a pig herder might win, but is consoled by Telemachus, who tells her to go to her rooms, where the poem finds her reminiscing about her husband. In the main hall of the palace, all the Mnesteres in turn fail to draw back and string the bow. Odysseus, however, tenses and strings the bow, passing the first challenge, then successfully shoots an arrow through the axes, providing the first sign that uncovers his identity. At the same time, he connects all the nodes of the battle axes in the line, by shooting his arrow through their metal rings, thus creating a chain. This is the second challenge, after the stringing of the bow, that Odysseus must pass to prove he is the true king and husband of Penelope. 

The third challenge remains: the elimination of all the suitors. A battle ensues in which the Mnesteres are killed by Telemachus and Odysseus, and thus the third challenge is complete. 

The most architectonic metaphor of the poem takes place after the battle, at the moment Penelope needs to recognise her long-lost husband, in rhapsody “Ψ”, the penultimate book of the Odyssey. She calls for a servant to move Odysseus’s bed outside its chamber and prepare it so that he can rest. Upon hearing that, Odysseus immediately reacts in fury, claiming that moving the bed is an impossibility. The only person who could make the bed movable would be either an amazing craftsperson or a god, as its base was made out of the root of an olive tree, whose branches were then used for the bed. Essentially, the piece of furniture is immovable and immutable: it cannot be changed without being destroyed, and it cannot be taken out of the chamber without having its nature irreversibly changed – i.e., cutting the olive tree’s roots. 

Odysseus knows this as he was the one who constructed it, shaping its root from the body of the olive tree and crafting the bed. He then describes how he built the whole chamber around the bed. This knowledge acts as a crypto-sign that verifies his identity. Odysseus himself calls the information a “token” – a “sêma” – a sign that it is indeed him, as only he would know this sêma. In a sense, knowledge of this is the private cryptographic key to the public cryptographic riddle that Penelope poses to verify his identity. 

The story acts as an architectonic metaphor for blockchain, in three layers. First, the token, both the information and the bed itself, cannot be taken out of its container (room), as its structure is interlinked with the material of the olive tree trunk and the earth that houses it. Second, it is Odysseus who is the architect of the crypto-immutability of the bed and the architecture around it, created by the most basic architectonic gestures: re-shaping nature into a construction. Third, the intimacy between Penelope and Odysseus is encapsulated in the token of the bed, as knowledge of how the bed was made recreates trust between them – in the same kind of manner that blockchains become bearers of trust by encapsulating it cryptographically and encasing it in a third medium, crafted, though, by a collective.  

The implication is that architectonic signs are cryptographically encased in their matter, and changing the physical matter changes the sign. Odysseus has created the first architectonic non-fungible token in physical form, where its meaning, function and utility are interlinked through a cryptographic sêma, in the same fashion that a non-fungible token exists through the cryptographic signature on a smart contract corresponding to a particular data structure. 

Deconstruction in Chinese 

Odysseus is not the only one who has created physical NFTs. The philosopher Byung-Chul Han describes in his book Shanzhai: Deconstruction in Chinese the relationship that exists in Asian cultures generally, and in Chinese culture specifically, between the master and the copy, where emulating or blatantly copying from the original is not seen as theft; instead, the form of the original is continually transformed by being deconstructed.[15] 

Byung-Chul Han presents a Chinese ink painting of a rock landscape, onto which a series of Chinese scholars have signed with their jade seals and scribbled a poetic verse or two, as a parting gift to one of their friends leaving for another province. Within Chinese culture, the jade seal is the person, and the person is the jade seal. As such, the painting has accumulated all the signatures and selves of the scholars, and has become unique in the same sense that a non-fungible token is unique due to its cryptographic signature on a smart contract. The difference from the simple non-fungible tokens that one now finds by the thousand on the internet is that the Chinese painting scroll, according to Byung-Chul Han, is activated and becomes exclusive through the signature-seals and poems of the literati. It is a dynamic NFT, a unique object that is open to continuous addition, and to exclusive and recursive interpretation.  

The act of creation of the token, the unique sign, is then the accumulation of all the signatures of the scholars, whereby the painting cannot be reverted to its original format; it is unique because it has been permanently changed. It is the same craft in Odysseus that takes the olive tree and makes it into a bed, and then builds a room around the bed: an immobile, immutable sign and its physical manifestation. The sêma of the intimacy between Odysseus and Penelope is inextricable from the physical object of the bed, and the vector of change for the Chinese ink painting cannot return to its previous condition. 

This is where the similarities end, though. While the craft is the same, in the Chinese ink scroll the point of departure is not nature but another artwork. The non-fungible token of the Chinese art scroll remains open to further additions and recursive poetry – new cryptographic signatures may be added to it – while the olive tree bed has a finality and a permanence. Odysseus changes nature to create his token, and the olive tree can never be the same: to create the bed and the foundations and the wall of the room, the tree must be transformed into architecture. The Chinese literati change a drawing, an artefact already in existence, which in the end remains subject to further change. In the case of the olive tree, the hero is one, singular, and the sêma revolves around his relationship with the world. For the Chinese literati and the ink scroll, the sêma is immutable towards the past but open to re-signing as a manner of recursive interpretation. A significant shift in mentality and attitude is demanded to travel from crafting architecture like Odysseus, a lone genius who is king of his domain, to crafting architecture like a collective of Chinese literati, where a well-balanced collaboration is required from all. Both can be served by blockchain as a record of actions taken; however, it is only the collective, dynamic work, open to continuing evolution, that has the best future fit between blockchain and the discipline of architecture.  
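Read computationally, the scroll is a token whose identity is the accumulated history of its signings. A minimal sketch of this idea might fold each jade seal into a running hash, so that additions are one-way and the token can never revert to an earlier state; this toy class is an illustration of the analogy, not the ERC-721 standard or any real dynamic-NFT mechanism:

```python
import hashlib

class ScrollToken:
    """Toy 'dynamic NFT': each seal is folded into the token's state hash,
    so the accumulated signatures make it unique and non-revertible."""
    def __init__(self, artwork_hash: str):
        self.state = artwork_hash
        self.seals = []

    def sign(self, jade_seal: str, verse: str):
        self.seals.append((jade_seal, verse))
        # the new state depends on the old state: additions are one-way,
        # immutable towards the past but open to further re-signing
        payload = (self.state + jade_seal + verse).encode()
        self.state = hashlib.sha256(payload).hexdigest()

scroll = ScrollToken("hash-of-ink-painting")
scroll.sign("scholar-A", "mountains part for the river")
after_one = scroll.state
scroll.sign("scholar-B", "the boat leaves no wake")
assert scroll.state != after_one  # each signing permanently changes the token
```

Like the literati’s scroll, the object stays open to new signatures while its past remains fixed: every `sign` call produces a state that encodes the entire history of seals before it.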

“Zhen ji, an original, is determined not by the act of creation, but by an unending process” – Byung-Chul Han  

The extractive nature of architecture: Odysseus 

The current dominant political economy of architecture is based on the Odysseus paradigm. The metabolism of the discipline is based on abundant natural resources and their transformation, paralleling the irrational form of capitalist development.[16,17] Essentially, the criticism of the extractive nature of the discipline focuses on the ideological trap of continuously creating new designs and plans and sêmas, as Tafuri would have them: reliving the myth of Odysseus as craftsperson, where every design is a prototype and every building is brand new, and where the natural environment is immutably transformed as the arrow of time moves forward. The repercussions of this stance are well documented in IPCC reports, in terms of the carbon impact and waste production of the AEC industry.[18] 

In contrast, the “Space Caviar” collective posits that we should shift to a non-extractive architecture. They examine this shift via interviews with Benjamin Bratton, Chiara di Leone, and Phineas Harper and Maria Smith. The focus is a critical stance on the question of growth versus de-growth in the economy of architecture, a question that needs rather more resolution to be framed in positive terms. Chiara di Leone rightly identifies design and economics as quasi-scientific disciplines and, as such, dismantles the mantra of de-growth as a homogeneous bitter pill that we must all swallow. Instead, she proposes a spatial, geo-coupled economy, one that can take into account the local, decentralised aspects of each place and design an economy fit for that place. I would posit that, as part of a geo-coupled economy, an understanding of nature as a vector of a circular economy is also needed. 

Decentralisation is, of course, a core principle within the blockchain sociotechnical understanding, in the sense that participation in a blockchain is regulated by neither institutions nor gatekeepers. However, before declaring blockchain the absolute means to decentralisation, one needs to look at what is meant by decentralisation in economics and development, as its meaning and essence differ from decentralisation in blockchain, and the two need alignment. 

Decentralisation and autonomy of local economies in the 70s 

Decentralisation as a term applied to the economy used to have a different meaning in the 70s. Papandreou, in his seminal book Paternalistic Capitalism, defines the decentralised economic process as a container for the parametric role of prices in the information system of a market economy.[19] In the same book, Papandreou, while interrogating the scientific dimensions of planning, calls for the decentralisation of power, in a regional, spatial function, rather than a functional one, after having set logical (in distinction to historical) rules for popular sovereignty and personal freedom. This is to counter the technocratic power establishment that emerges in representative democracy, as citizens provide legitimacy to the actions of the state. To further define decentralisation of power, he turns to regional planning and Greek visionary spatial planner Tritsis’ PhD thesis: “The third aim: decentralisation. This points to a world depending for its existence less on wheels and population uprootings and more on the harmonious relationship between man and his environment, social and natural”.[20] 

Based on this definition, Papandreou builds the vision of a governance consensus between decentralised regional units forming a “national” whole, with rules agreed and set between all units on a peer-to-peer basis. Within this, most importantly, he calls for the liberal establishment of a guarantee of freedom of entry into occupations, in a kind of “integration of all forms of human work, of mental with manual, of indoors with outdoors”, as envisioned by Tritsis.[20] Papandreou extends the vision of decentralisation to a global society and envisions the emergence of new poles of global power through regional decentralisation. As such, decentralisation used to mean something other than what it means within the context of blockchain – up until the first politics of “cypherpunk”. Decentralisation used to be a planning instrument and a political stance, rather than a technological strategy against the centralised power of established technocracies. Still, within the local, spatial geo-coupling of economies, one can align political decentralisation with the cypherpunk version of blockchain decentralisation, i.e. no barriers to participation, trust in the computer protocol, and the exclusion of the authority of central political institutions, from which no one needs to ask permission. 

A new political economy for Architecture 

When one chains the spatial, geo-coupled economy that Chiara di Leone proposes to decentralisation, both on the level of the politics of technocracies and on the level of the operating system, i.e., the use of blockchains, it becomes possible to shape a new political economy in architecture, with computation regulating its heart. Encased within this shift is also a shift from the Odysseus craftsperson to the Chinese collective in terms of the “prototype” and our understanding of it. An economy in which the artefact is open to recursive reinterpretation and is never finished can readily be transformed into a circular economy and adapted to minimise carbon. We have already prototyped early instances of collective digital factories for buildings,[21] where collectives of architects and digital design agents are incentivised through smart contracts to minimise the embodied and operational carbon impact of buildings: simply put, the design team earns in proportion to the increase in building performance and the decrease in environmental impact. 
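The incentive principle just described – earning in proportion to performance gains and carbon reductions – can be sketched as a simple payout function. This is a hypothetical illustration of the mechanism, written in Python rather than as an on-chain smart contract; the function name, baseline logic and bonus rate are all assumptions, not the mechanism of the cited prototype:

```python
def incentive_payout(base_fee: float,
                     carbon_baseline: float,
                     carbon_achieved: float,
                     bonus_rate: float = 0.5) -> float:
    """Toy carbon-linked fee: the design team earns a bonus in proportion
    to the reduction of embodied/operational carbon against an agreed
    baseline. Names and rates are illustrative only."""
    if carbon_baseline <= 0:
        raise ValueError("baseline must be positive")
    # fraction of carbon saved relative to the baseline, floored at zero
    reduction = max(0.0, (carbon_baseline - carbon_achieved) / carbon_baseline)
    return base_fee * (1.0 + bonus_rate * reduction)

# no reduction -> base fee only; a 20% carbon cut -> a 10% bonus at rate 0.5
assert incentive_payout(100_000, 1000, 1000) == 100_000
assert abs(incentive_payout(100_000, 1000, 800) - 110_000) < 1e-6
```

Encoded in a smart contract, a rule of this shape would pay out automatically against agreed, oracle-fed performance data, which is what aligns the design collective’s earnings with the building’s environmental performance.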

To create this regenerative renaissance for the discipline, we need to make a series of changes to the manner in which the discipline is practised and taught. First, to integrate the function of the architect not only as designer but as orchestrator of the whole AEC industry. This requires that we abandon the notion of artistry and embrace the notion of craft and engineering, including an understanding of materials and the economy. Second, to develop the infrastructure, products and services that can make that happen, where we also assume the responsibility and, why not, the liability for that integration. These first two actions will reverse the trend of abandoning the space of architecture to consultants, where the erosion of our integrity has led to the glorification of form as our sole function. Third, to shift our attention from star practices to collectives, embracing practices where wider stakeholders are considered. Odysseus needs to morph into a collective, where the artefact of architecture is conceived as ever changing, ever evolving, into circular thinking and economies. This might mean that alternative forms of practice emerge, where younger, more inclusive minds have more command and say over the purpose of an architecture company (and not a firm). Fourth, in the same pivot, we as architects should reclaim the space lost and rigorously embrace the new tools of the craft in the digital realm. It is not by chance that the title for senior programmers and digital network professionals is “architect”, as there is no other word that can describe the people who orchestrate form-function-structure with one gesture. The age of machine-learning generative systems performing the trivial repetitions of an architect is already here.  

Still, the automation we should embrace as a fifth point, since it allows the shaping and design of circular and peer-to-peer economies, is that of blockchain. This is the true Judo defence against the capitalist growth-at-all-costs mantra.[22] Unless we embrace different, local, circular economies, we will not be able to effect the change we need in the discipline – and this also means that we need not be naive and simplistic about carbon impacts, for example by declaring that timber is always better than concrete. To embrace the automation of cryptoeconomics, though, we need first to abandon the romantic idea of the architect as sketch artist and embrace the idea of the architect as collaborative economist. Only then will we be able to define for ourselves the conditions for a regenerative architecture, in a decentralised, spatial-human-geo-coupled manner. 

References 

[1] T. Dounas, W. Jabi, D. Lombardi, “Non-Fungible Building Components – Using Smart Contracts for a Circular Economy in the Built Environment”, Designing Possibilities, SIGraDi, ubiquitous conference, XXV International conference of the Ibero-American society of digital Graphics (2021). 

[2] T. Dounas, W. Jabi, D. Lombardi, “Topology Generated Non-Fungible Tokens – Blockchain as infrastructure for a circular economy in architectural design”, Projections, 26th international conference of the association for Computer-Aided Architectural Design research in Asia, CAADRIA, Hong Kong, (2021).

[3] D. Lombardi, T. Dounas, L.H. Cheung, W. Jabi, “Blockchain for Validating the Design Process”, SIGraDI (2020), Medellin.

[4] T. Dounas, D. Lombardi, W. Jabi, “Framework for Decentralised Architectural Design: BIM and Blockchain Integration”, International Journal of Architectural Computing, special issue eCAADe+SIGraDi “Architecture in the 4th Industrial Revolution” (2020), https://doi.org/10.1177/1478077120963376.

[5] T. Maver, “CAAD’s Seven Deadly Sins”, Sixth International Conference on Computer-Aided Architectural Design Futures [ISBN 9971-62-423-0] Singapore, 24-26 September 1995, pp. 21-22.

[6] Ethereum.Org, “Ethereum Whitepaper”, accessed 27 January 2022, https://ethereum.org. 

[7] N. Szabo, “Formalizing and Securing Relationships on Public Networks” (1997), accessed 27 January 2022.  

[8] G. Wood, “Ethereum: A Secure Decentralised Generalised Transaction Ledger” (2022), https://ethereum.github.io/yellowpaper/paper.pdf.

[9] S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System” (2008), originally at http://www.bitcoin.org/bitcoin.pdf.

[10] F. Vogelsteller, V. Buterin, EIP-20 Token Standard, https://eips.ethereum.org/EIPS/eip-20 

[11] W. Entriken, D. Shirley, J. Evans, N. Sachs, EIP-721 Token Standard, https://eips.ethereum.org/EIPS/eip-721

[12] Interplanetary File System (IPFS) documentation, https://docs.ipfs.io/ 

[13] Homer, E. Wilson trans., Odyssey (New York: W. W. Norton & Company, 2018). 

[14] Ζ. Όμηρος, Σιδέρης, Οδύσεια (Οργανισμός Εκδόσεως Διδακτικών βιβλίων Αθήνα, 1984).

[15] Byung-Chul Han, Deconstruction in Chinese, Translated by P. Hurd (Boston, MA: MIT press, 2017).

[16] Space Caviar collective, Non-Extractive Architecture, on designing without depletion (Venice: Sternberg Press, 2021).

[17] V.P. Aureli, “Intellectual Work and Capitalist Development: Origins and Context of Manfredo Tafuri’s Critique of Architectural Ideology”, the city as a project, http://thecityasaproject.org/2011/03/pier-vittorio-aureli-manfredo-tafuri/ March 2011.

[18]  P.R. Shukla, J. Skea, R. Slade, A. Al Khourdajie, R. van Diemen, D. McCollum, M. Pathak, S. Some, P. Vyas, R. Fradera, M. Belkacemi, A. Hasija, G. Lisboa, S. Luz, J. Malley (eds.), IPCC, 2022: Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge, UK and New York, USA: Cambridge University Press, 2022) doi: 10.1017/9781009157926.

[19] A.G. Papandreou, Paternalistic Capitalism (Minneapolis: University of Minnesota Press, 1972).

[20] A. Tritsis, “The nature of planning regions” unpublished PhD thesis (Illinois Institute of Technology, Chicago, 1969).

[21] T. Dounas, D. Lombardi, W. Jabi, [2022] “Collective Digital Factories for Buildings”, T. Dounas, D. Lombardi, Ed., Blockchain for Construction (Singapore: Springer – Verlag, 2022) ISBN 9811937583.

[22] B. Tschumi, “Architects act as mediators between authoritarian power, or capitalist power, and some sort of humanistic aspiration. The economic and political powers that make our cities and our architecture are enormous. We cannot block them, but we can use another tactic, which I call the tactic of Judo, that is, to use the forces of one’s opponent in order to defeat it and transform it into something else … To what extent can we move away from a descriptive critical mode to a progressive, transformative mode for architecture?” Peter Eisenman and Cynthia Davidson, eds, anyplace symposium, ANY corporation, Montreal (1994).

Suggest a Tag for this Article
image source: Cantrell, Martin, Ellis 2017
Wild Disequilibria 
Climate solutions, Climatic Energy, cognitive tools, Ecological Autonomy, landscape futures
Marantha Dawkins, Bradley Cantrell

mmd5mk@virginia.edu

Climatic Energy and Ecological Autonomy 

There is no way back to the climate that we once knew: “our old world, the one that we have inhabited for the last 12,000 years, has ended”.[1] Accepting this end presents an opportunity to reframe considerations of risk, indeterminacy, and danger as questions of restructuring and rewilding; shifting the discussion of global warming from a matter of a scarcity of resources to an abundance of energy that can kick-start landscape futures. 

To engage this future, it is critical to set up some terms for how design will engage with the multitude of potential climates before us. Rather than working preventatively by designing solutions that are predicated on the simplification of the environment by models, we advocate for an experimentalism that is concerned with the proliferation of complexity and autonomy in the context of radical change. Earth systems are moving hundreds to thousands of times faster than they did when humans first documented them. This acceleration is distributed across such vast space and time scales that the consequences are ubiquitous but also unthinkable, which sets present-day Earth out of reach of existing cognitive tools. For example, twenty- to fifty-year decarbonisation plans are expected to solve problems that will unfold over million-year timescales.[2] These efforts are well-intentioned but poorly framed; in the relentless pursuit of a future that looks the same as the past, there is a failure to acknowledge that it is easier to destroy a system than it is to create one, a failure to acknowledge the fool’s errand of stasis that is embodied in preservation, and most importantly, a failure to recognise that climate change is not a problem to be solved.[3] Climate “solutions” are left conceptually bankrupt when they flatten complex contexts into one-dimensional problem sets that are doomed by unknowable variability. From succession to extinction, from ocean biochemistry to ice migration, our understanding of environmental norms has expired.[4]

The expiration of our environmental understanding is underlined by the state of climate adaptation today – filled with moving targets, brittle infrastructures, increasing rates of failure, and overly complicated management regimes. These symptoms illustrate the trouble contemporary adaptation has escaping the cognitive dissonance of the manner in which knowledge about climate change is produced: the information has eclipsed its own ideological boundaries. This eclipse represents a crisis of knowledge, and therefore must give rise to a new climatic form. Changing how we think and how we see climatic energy asks us to make contact with the underlying texture and character of this nascent unruliness we find ourselves in, and the wilds that it can produce. 

Earth’s new wilds will look very different from the wilderness of the past. Classical wilderness is characterised by purity: it is unsettled, uncultivated, and untouched. But given the massive reshaping of ecological patterns and processes across the Earth, wilderness has become less useful, conceptually. Even in protected wilderness areas, “it has become a challenge to sustain ecological patterns and processes without increasingly frequent and intensive management interventions, including control of invading species, management of endangered populations, and pollution remediation”.[5] Subsequently, recent work has begun to focus less on the pursuit of historical nature and more on promoting ecological autonomy.[6, 7, 8] Wildness, on the other hand, is undomesticated rather than untouched. The difference between undomesticated and untouched means that design priorities change from maintaining a precious and pure environment to creating plural conditions of autonomy and distributed control that promote both human and non-human form. 

Working with wildness requires new ways of imagining and engaging futurity that operate beyond concepts of classical earth systems and the conventional modelling procedures that re-enact them, though conventional climate thinking, especially with the aid of computation, has achieved so much: “everything we know about the world’s climate – past, present, future – we know through models”.[9] Models take weather, which is experiential and ephemeral, abstract it into data over long periods of time, and assemble this data into patterns. Over time, these patterns have become increasingly dimensional. This way of understanding climate has advanced extremely quickly over the past few decades, enough that we can get incredibly high-resolution pictures (like the one below, which illustrates how water temperature swirls around the earth). Climate models use grids to organise their high-resolution, layered data and assign it rules about how to pass information to neighbouring cells. But the infinite storage capacity of the grid cells and the ways they are set up to handle rules and parameters create a vicious cycle, by enabling exponential growth toward greater and greater degrees of accuracy. Models get bigger and bigger, heavier and heavier, with more and more data; operating under the assumption that collecting enough information will eventually lead to the establishment of a perfect “control” earth,[10] and to an earth that is under perfect control. But this clearly isn’t the case, as for these models, more data means more uncertainty about the future. This is the central issue with the traditional, bottom-up climate knowledge that continues to pursue precision. It produces ever more perfect descriptions of the past while casting the future as more and more obscene and unthinkable. In other words, in a nonlinear world, looking through the lens of these bottom-up models refracts the future into an aberration.[11] 

Figure 1 – Global ocean temperatures modeled at Los Alamos National Labs illustrate how heat travels in swirling eddies across the globe. Image source: Los Alamos National Laboratories.

The technological structure of models binds us to a bizarre present. It is a state which forecloses the future in the same way that Narcissus found himself bound to his own reflection. When he saw his reflection in a river, he “[mistook] a mere shadow for a real body” and found himself transfixed by a “fleeting image”.[12] The climatic transfixion is the hypnotism of the immediate, the hypothetically knowable, which devalues real life in favour of an imaginary, gridded one. We are always just a few simulations from perfect understanding and an ideal solution. But this perfection is a form of deskilling which simulates not only ideas but thinking itself. The illusion of the ideal hypothetical solution, just out of reach, allows the technical image to operate not only as subject but as project;[13] a project of accuracy. And the project of making decisions about accuracy in models then displaces the imperative of making decisions about the environments that the models aim to describe by suspending us in the inertia of a present that is accumulating more data than it can handle. 

It is important to take note of this accumulation because too much information starts to take on its own life. It becomes a burden beyond knowledge,[14] which makes evident that “without forgetting it is quite impossible to live at all”.[15] But rather than forget accumulated data and work with the materiality of the present, we produce metanarratives via statistics. These metanarratives are a false consciousness. Issues with resolution, boundary conditions, parameterisation, and the representation of physical processes represent technical barriers to accuracy, but the deeper problem facing accuracy is the inadequacy of old data to predict new dynamics. For example, the means and extremes of evapotranspiration, precipitation and river discharge have undergone such extreme variation due to anthropogenic climate change that fundamental concepts about the behaviour of earth systems for fields like water resource management are undergoing radical transformation.[16] Changes like this illustrate how dependence upon the windows of variability that statistics produce is no longer viable. This directly conflicts with the central conceit of models: that the metanarrative can be explanatory and predictive. In his recently published book, Justin Joque challenges the completeness of the explanatory qualities of statistics by underlining the conflicts between its mathematical and metaphysical assumptions.[17] He describes how statistics (and its accelerated form, machine learning) are better at describing imaginary worlds than understanding the real one. Statistical knowledge produces a way of living on top of reality rather than in it.
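The failure of stationary statistics described here can be sketched with a toy example (entirely synthetic numbers, not real hydrological data): a design threshold calibrated on a stationary past is exceeded far more often once the underlying mean begins to drift.

```python
import random
random.seed(0)

# A "1%-exceedance" threshold estimated from a stationary past.
past = [random.gauss(100, 10) for _ in range(1000)]
threshold = sorted(past)[int(0.99 * len(past))]

# A non-stationary future: same variability, but the mean drifts upward.
future = [random.gauss(100 + 0.02 * t, 10) for t in range(1000)]
exceed = sum(x > threshold for x in future) / len(future)
# The "1% event" now occurs many times more often than it was designed for.
```

The point is not the particular coefficients but the structural one: the window of variability computed from old data stops describing the system that produced it.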

Figure 2 – An illustration of how a climate model breaks the Earth surface and atmosphere into rectangular chunks within which data is stored, manipulated, and passed on to neighboring cells. Image source: ERA-Interim Archive.
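The gridded update scheme that Figure 2 illustrates can be sketched in a few lines (a toy diffusion update, not the physics of any real climate model): each cell holds a value and passes information to its four neighbours at every step.

```python
import numpy as np

def step(temp, diffusivity=0.1):
    """One update of a toy gridded model: each cell exchanges heat
    with its four neighbours (finite-difference diffusion on a
    wrap-around grid)."""
    neighbours = (np.roll(temp, 1, 0) + np.roll(temp, -1, 0) +
                  np.roll(temp, 1, 1) + np.roll(temp, -1, 1))
    return temp + diffusivity * (neighbours - 4 * temp)

# A hot patch on a cold grid gradually spreads to neighbouring cells;
# total heat is conserved while the peak flattens out.
grid = np.zeros((64, 64))
grid[32, 32] = 100.0
for _ in range(50):
    grid = step(grid)
```

Real models layer many such grids, with far richer rules for what passes between cells, but the basic structure of local exchange on a lattice is the same.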

The shells of modelled environments miss the materiality, the complexity and the energy of an ecosystem breaking apart and restructuring itself. The phase of a system that follows a large shift is known as a “back loop” in resilience ecology,[18, 19] and is an original and unstable period of invention that is highly contingent upon the materials left strewn about in the ruins of old norms. For ecological systems in transition, plant form, geological structure, biochemistry and raw materiality matter. These are landscape-scale issues that are not described in the abstractions of parts per million. High-level knowledge of climate change, while potentially relevant for some scales of decision-making, does not capture the differentiated impacts of its effects that are critical for structuring discussions around the specific ways that environments will grow and change, degrade or complexify through time. 

This is where wilds can play a role in structuring design experimentation. Wildness is unquestionably of reality, or a product of the physical world inhabited by corporeal form. Wilds as in situ experiments become model forms, which have a long epistemological history as a tool for complex and contingent knowledge. Physicists (and, here, conventional climate modellers) look to universal laws to codify, explain and predict events, but because medical and biological scientists, for example, do not have the luxury of stable universalism, they often use experiments as loose vehicles for projection. By “repeatedly returning to, manipulating, observing, interpreting, and reinterpreting certain subjects—such as flies, mice, worms, or microbes—or, as they are known in biology, ‘model systems’”, experimenters can acquire a reliable body of knowledge grounded in existing space and time.[20] This is how we position the project of wildness, which can be found from wastewater swamps, to robotically maintained coral reefs, to reclaimed mines and up-tempo forests. Experimental wilds, rather than precisely calculated infrastructures, have the potential to do more than fail at adapting to climate: they can serve “not only as points of reference and illustrations of general principles or values but also as sites of continued investigation and reinterpretation”.[21] 

There is a tension between a humility of human smallness and a lunacy in which we imagine ourselves engineering dramatic and effective climate fixes using politics and abstract principles. In both of these cases, climate is framed as being about control: control of narrative, control of environment. This control imaginary produces its own terms of engagement. Because its connections to causality, accuracy, utility, certainty and reality are empty promises, modelling loses its role as a scientific project and instead becomes a historical, political and aesthetic one. When the model is assumed to take on the role of explaining how climate works, climate itself becomes effectively useless. So rather than thickening the layer of virtualisation, a focus on wild experiments represents a turn to land and to embodied changes occurring in real time. To do this will require an embrace of aspects of the environment that have been marginalised, such as expanded autonomy, distributed intelligence, a confrontation of failure, and pluralities of control. This is not a back-to-the-earth strategy, but a focus on engagement, interaction and modification; a purposeful approach to curating climatic conditions that embraces the complexity of entanglements that form the ether of existence. 

References

[1] M. Davis, “Living on the Ice Shelf”, Guernica.org https://www.guernicamag.com/living_on_the_ice_shelf_humani/, (accessed May 01, 2022). 

[2] V. Masson-Delmotte, P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.), IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, Cambridge, UK and New York, USA, 2021) doi:10.1017/9781009157896. 

[3] R. Holmes, “The problem with solutions”, Places Journal (2020). 

[4] V. Masson-Delmotte, P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.), IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, Cambridge, UK and New York, USA, 2021) doi:10.1017/9781009157896. 

[5] B. Cantrell, L.J. Martin, and E.C. Ellis, “Designing autonomy: Opportunities for new wildness in the Anthropocene”, Trends in Ecology & Evolution 32.3 (2017), 156-166. 

[6] Ibid. 

[7] R.T. Corlett, “Restoration, reintroduction, and rewilding in a changing world”, Trends in Ecology & Evolution 31 (2016), 453–462 

[8] J. Svenning, et al., “Science for a wilder Anthropocene: Synthesis and future directions for trophic rewilding research”, Proceedings of the National Academy of Sciences 113 (2015), 898–906. 

[9] P. N. Edwards, A vast machine: Computer models, climate data, and the politics of global warming (MIT Press, Cambridge, 2010). 

[10] P. N. Edwards, “Control earth”, Places Journal (2016). 

[11] J. Baudrillard, Cool Memories V: 2000-2004, (Polity, Oxford, 2006). 

[12] Ovid, Metamorphoses III, (Indiana University Press, Bloomington, 1955), 85 

[13] B. Han, Psychopolitics: Neoliberalism and new technologies of power, (Verso Books, New York, 2017). 

[14] B. Frohmann, Deflating Information, (University of Toronto Press, Toronto, 2016). 

[15] F. Nietzsche, On the Advantage and Disadvantage of History for Life, (1874). 

[16] P. C. D. Milly, et al. “Stationarity is dead: whither water management?”, Science 319.5863 (2008), 573-574. 

[17] J. Joque, Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism, (Verso Books, New York, 2022). 

[18] Gunderson and Holling (2001); C.S. Holling, “From complex regions to complex worlds”, Ecology and Society 9.1 (2004), 11. 

[19] S. Wakefield, Anthropocene Back Loop (Open Humanities Press, 2020). 

[20] A. N. H. Creager, et al., eds. Science without laws: model systems, cases, exemplary narratives (Duke University Press, Durham, 2007). 

[21] Ibid. 

Figure 12 – Planet Garden v.1.
Games and Worldmaking 
consensus reality, games, mediascape, videogames, Virtual, worldmaking
Damjan Jovanovic

damjan@dmjn.net
Fig. 1 – Planet Garden v.1 screenshot, early game state

Worldmaking  

We live in a period of unprecedented proliferation of constructed, internally coherent virtual worlds, which emerge everywhere, from politics to video games. Our mediascape is brimming with rich, immersive worlds ready to be enjoyed and experienced, or decoded and exploited. One effect of this phenomenon is that we are now asking fundamental questions, such as what “consensus reality” is and how to engage with it. Another effect is that there is a need for a special kind of expertise that can deal with designing and organising these worlds – and that is where architects possibly have a unique advantage. Architectural thinking, as a special case of visual, analogy-based synthetic reasoning, is well positioned to become a crucial expertise, able to operate on multiple scales and in multiple contexts in order to map, analyse and organise a virtual world, while at the same time being able to introduce new systems, rules and forms to it.[1] 

A special case of this approach is something we can name architectural worldmaking,[2] which refers broadly to practices of architectural design which wilfully and consciously produce virtual worlds, and understand worlds as the main project of architecture. Architects have a unique perspective and could have a say in how virtual worlds are constructed and inhabited, but there is a caveat which revolves around questions of agency, engagement and control. Worldmaking is an approach to learning from both technically-advanced visual and cultural formats such as video games, as well as scientific ways of imaging and sensing, in order to be able to construct new, legitimate, and serious ways of seeing and modelling. 

These notions are central to the research seminar called “Games and Worldmaking”, first conducted by the author at SCI-Arc in summer of 2021, which focused on the intersection of games and architectural design, and foregrounded systems thinking as an approach to design. The seminar is part of the ongoing Views of Planet City project, in development at SCI-Arc for the Pacific Standard Time exhibition, which will be organised by the Getty Institute in 2024. In the seminar, we developed the first version of Planet Garden, a planetary simulation game, envisioned to be both an interactive model of complex environmental conditions and a new narrative structure for architectural worldmaking.  

Planet Garden is loosely based on Edward O. Wilson’s “Half-Earth” idea, a scenario where the entire human population of the world occupies a single massive city and the rest is left to plants and animals. The Half-Earth is an important and very interesting thought experiment, almost a proto-design, a prompt, an idea for a massive, planetary agglomeration of urban matter which could liberate the rest of the planet to heal and rewild.  

The question of the game was, how could we actually model something like that? How do we capture all that complexity and nuance, how do we figure out stakes and variables and come up with consequences and conclusions? The game we are designing is a means to model and host hugely complex urban systems which unravel over time, while being able to legibly present an enormous amount of information visually and through the narrative. As a format, a simulation presents different ways of imaging the world and making sense of reality through models. 

The work on game design started as a wide exploration of games and precedents within architectural design and imaging operations, as well as abstract systems that could comprise a possible planetary model. The question of models and modelling of systems comes at the forefront and becomes contrasted to existing architectural strategies of representation.

Mythologising, Representing and Modelling 

Among the main influences of this project were the drawings made by Alexander von Humboldt, whose work is still crucial for anyone with an interest in representing and modelling phenomena at the intersection of art and science.[3] If, in the classical sense, art makes the world sensible while science makes it intelligible, these images are a great example of combining these forms of knowledge. Scientific illustrations, Humboldt once wrote, should “speak to the senses without fatiguing the mind”.[4] His famous illustration of Chimborazo volcano in Ecuador shows plant species living at different elevations, and this approach is one of the very early examples of data visualisation, with an intent of making the world sensible and intelligible at the same time. These illustrations also had a strong pedagogical intent, a quality we wanted to preserve, and which can serve almost as a test of legibility.

Figure 2 – Alexander von Humboldt, Chimborazo volcano.

The project started with a question of imaging a world of nature in the Anthropocene epoch. One of the reasons it is difficult to really comprehend a complex system such as the climate crisis is that it is difficult to model it, which also means to visually represent it in a legible way which humans can understand. This crisis of representation is a well-known problem in literature on the Anthropocene, most clearly articulated in the book Against the Anthropocene, by T.J. Demos.[5] 

We do not yet have the tools and formats of visualising that can fully and legibly describe such a complex thing, and this is, in a way, also a failure of architectural imagination. The standard architectural toolkit is limited and also very dated – it is designed to describe and model objects, not “hyperobjects”. One of the project’s main interests was inventing new modalities of description and modelling of complex systems through the interactive software format, and this is one of the ideas behind the Planet Garden project.  

Contemporary representational strategies for the Anthropocene broadly fall into two categories, those of mythologising or objectivising. The first approach can be observed in the work of photographers such as Edward Burtynsky and Louis Helbig, where the subject matter of environmental disaster becomes almost a new form of the aesthetic sublime. The second strategy comes out of the deployment and artistic use of contemporary geospatial imaging tools. As is well understood by critics, contemporary geospatial data visualisation tools like Google Earth are embedded in a specific political and economic framework, comprising a visual system delivered and constituted by the post–Cold War and largely Western-based military-state-corporate apparatus. These tools offer an innocent-seeming picture that is in fact a “techno-scientific, militarised, ‘objective’ image”.[6] Such an image displaces its subject and frames it within a problematic context of neutrality and distancing. Within both frameworks, the expanded spatial and temporal scales of geology and the environment exceed human and machine comprehension and thus present major challenges to representational systems.  

Within this condition, the question of imaging – understood here as making sensible and intelligible the world of the Anthropocene through visual models – remains, and it is not a simple one. Within current architectural production, broadly speaking, this topic is mostly treated through the “design fiction” approach. For example, in the work of Design Earth, the immensity of the problem is reframed through a story-driven, narrative approach which centres on the metaphor, and where images function as story illustrations, like in a children’s book.[7] Another approach is pursued by Liam Young, in the Planet City project,[8] which focuses on video and animation as the main format. In this work, the imaging strategies of commercial science fiction films take the main stage and serve as anchors for the speculation, which serves a double function of designing a new world and educating a new audience. In both cases, it seems, the focus goes beyond design, as these constructed fictions stem from a wilful, speculative exaggeration of existing planetary conditions, to produce a heightened state which could trigger a new awareness. In this sense, these projects serve a very important educational purpose, as they frame the problem through the use of the established and accepted visual languages of storybooks and films.  

The key to understanding how design fictions operate is precisely in their medium of production: all of these projects are made through formats (collage, storybook, graphic novel, film, animation) which depend on the logic of compositing. Within this logic, the work is made through a story-dependent arrangement of visual components. The arrangement is arbitrary as it depends only on the demands of the story and does not correspond to any other underlying condition – there is no model underneath. In comparison, a game such as, for example, SimCity is not a fiction precisely because it depends on the logic of a simulation: a testable, empirical mathematical model which governs its visual and narrative space. A simulation is fundamentally different from a fiction, and a story is not a model. 

This is one of the reasons why it seems important to rethink the concept of design fiction through the new core idea of simulation.[9] In the book Virtual Worlds as Philosophical Tools, Stefano Gualeni traces a lineage of thinking about simulations to Espen Aarseth’s 1994 text called Hyper/Text/Theory, and specifically to the idea of cybertextuality. According to this line of reasoning, simulations contain an element not found in fiction and thus need an ontological category of their own: “Simulations are somewhere between reality and fiction: they are not obliged to represent reality, but they have an empirical logic of their own, and therefore should not be called fictions.”[10] This presents us with a fundamental insight into the use of simulations as the future of architectural design: they model internally coherent, testable worlds and go beyond mere fiction-making into worldmaking proper. 

Simulations, games and systems 

In the world of video games, there exists a genre of “serious” simulation games, which comprises games like Maxis’s SimCity and The Sims, as well as some other important games like Sid Meier’s Civilization and Paradox Development Studio’s Stellaris. These games are conceptually very ambitious and extremely complex, as they model the evolution of whole societies and civilisations, operate on very long timescales, and consist of multiple nested models that simulate histories, economies and evolutions of different species at multiple scales. One important feature and obligation of this genre is to present a coherent, legible image of the world, to give a face to the immense complexity of the model. The “user interface” elements of these kinds of games work together to tell a coherent story, while the game world, rendered in full 3D in real time, provides an immersive visual and aesthetic experience for the player. Contrary to almost any other type of software, these interfaces are more indebted to the history of scientific illustration and data visualisation than they are to the history of graphic design. These types of games are open-ended and not bound to one goal, and there is rarely a clear win state.  

Figure 3 – SimEarth main user interface with the Gaia window.

Another feature of the genre is a wealth of underlying mathematical models, each providing for the emergence of complexity and each carrying its own assumptions and biases. For example, SimCity is well known (and some would say notorious) for its rootedness in Jay Forrester’s Urban Dynamics approach to modelling urban phenomena, which means that its mathematical model delivers very specific urban conditions – and ultimately, a very specific vision of what a city is and could be.[11] One of the main questions in the seminar became how we might update this approach on two fronts: by rethinking the mathematical model, and by rethinking urban assumptions of the conceptual model. 

The work of Will Wright, the game designer behind the original SimCity as well as The Sims and Spore, is considered to be at the origin of simulation games as a genre. Wright has developed a vast body of knowledge on modelling simulations, some of which he presented in his influential 2003 talk at the Game Developers Conference (GDC), titled “Dynamics for Designers”.[12] In this talk, Wright outlines a fully-fledged theory of modelling of complex phenomena for interactivity, focusing on topics such as “How we can use emergence to model larger possibility spaces with simpler components”. Among the main points: science is a modelling activity, and until now, it has used traditional mathematics as its primary modelling method. This has some limits when dealing with complex dynamic and emergent systems. Since the advent of the computer, simulation has emerged as an alternative way of modelling. These are very different: in Wright’s view, maths is a more linear process, with complex equations; simulation is a more parallel process, with simpler components interacting together. Wright also talks about stochastic (random probability distribution) and Monte Carlo (“brute force”) methods as examples of the simulation approach. 
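The “brute force” contrast Wright draws can be illustrated with the textbook Monte Carlo example (our illustration, not one from the talk): rather than deriving the area of a circle analytically, many simple random trials, each governed by one simple rule, approximate it in parallel.

```python
import random
random.seed(1)

def monte_carlo_pi(trials):
    """Estimate pi by throwing random points at a unit square and
    counting how many land inside the quarter circle."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(trials))
    return 4 * hits / trials

estimate = monte_carlo_pi(100_000)  # approaches 3.14159... as trials grow
```

No single trial knows anything about pi; the value emerges from the aggregate, which is exactly the simulation-over-equation logic the talk advocates.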

Figure 4 – SimEarth civilisation model with sliders.

Wright’s work was a result of a deep interest in exploring how non-linear models are constructed and represented within the context of interactive video games, and his design approach was to invent novel game design techniques based directly on System Dynamics, a discipline that deals with the modelling of complex, unpredictable and non-linear phenomena. The field has its roots in the cybernetic theories of Norbert Wiener, but it was formalised in the mid-1950s by Professor Jay Forrester at MIT, and later developed by Donella H. Meadows in her seminal book Thinking in Systems.[13]  

System dynamics is an approach to understanding the non-linear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.[14,15] Forrester (1918–2016) was an American computer engineer and systems scientist, credited as the “founding father” of system dynamics. He started by modelling corporate supply chains and went on to model cities by describing “the major internal forces controlling the balance of population, housing and industry within an urban area”, which he claimed could “simulate the life cycle of a city and predict the impact of proposed remedies on the system”.[16] In the book Urban Dynamics, Forrester turned the city into a formula with just 150 equations and 200 parameters.[17] The book was highly controversial, as it implied extreme anti-welfare politics and, through its “objective” mathematical model, promoted neoliberal ideas of urban planning. 
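The building blocks Forrester worked with (stocks, flows, feedback loops) can be sketched in a few lines of code. The loop below is purely illustrative, not one of the 150 equations of Urban Dynamics: a single “population” stock whose inflow is damped by a crowding feedback against a fixed housing capacity, and whose outflow is a constant fraction.

```python
def simulate_stock(steps=100, population=1000.0, capacity=50_000.0,
                   growth_rate=0.05, decline_rate=0.02, dt=1.0):
    """Minimal system-dynamics loop: one stock ('population'),
    an inflow damped by a feedback signal (crowding against a fixed
    housing capacity), and a constant-fraction outflow."""
    history = [population]
    for _ in range(steps):
        crowding = population / capacity          # feedback signal, 0..1
        inflow = growth_rate * population * (1 - crowding)
        outflow = decline_rate * population
        population += (inflow - outflow) * dt     # integrate the net flow
        history.append(population)
    return history

trajectory = simulate_stock()
# the stock rises, then levels off as the balancing feedback loop closes
```

Run over time, the stock traces a logistic-style curve: growth slows as the feedback loop closes, the characteristic signature of a balancing loop in system dynamics.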

In another publication, called World Dynamics, Forrester presented “World2”, a system dynamics model of our world that became the basis of all subsequent models predicting a collapse of our socio-technological-natural system by the mid-21st century. Nine months after World Dynamics, a report called Limits to Growth was published, which used the “World3” computer model to simulate the consequences of interactions between the Earth and human systems. The study, commissioned by the Club of Rome, was first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971, and predicted societal collapse by the year 2040. Most importantly, the report put the idea of a finite planet into focus. 

Figure 5 – Jay W. Forrester, World2 model, base for all subsequent predictions of collapse such as Limits to Growth.

The main case study in the seminar was Wright’s 1990 game SimEarth, a life simulation video game in which the player controls the development of a planet. In developing SimEarth, Wright worked with the English scientist James Lovelock, who served as an advisor and whose Gaia hypothesis of planetary evolution was incorporated into the game. Continuing the systems dynamics approach developed for SimCity, SimEarth was an attempt to model a scientifically accurate approximation of the entire Earth system through the application of customised systems dynamics principles. The game modelled multiple interconnected systems and included realistic feedback between land, ocean, atmosphere, and life itself. The game’s user interface even featured a “Gaia Window”, in direct reference to the Gaia theory which states that life plays an intimate role in planetary evolution and the regulation of planetary systems. 

One of the tutorial levels for SimEarth featured a playable model of Lovelock’s “Daisyworld” hypothesis, which postulates that life itself evolves to regulate its environment, forming a feedback loop and making it more likely for life to thrive. While developing a life-detecting device for NASA’s Viking lander mission to Mars, Lovelock had made a profound observation: life tends to increase the order of its surroundings, so studying the atmospheric composition of a planet can provide sufficient evidence of life’s existence. Daisyworld is a simple planetary model designed to show the long-term effects of coupling and interdependence between life and its environment. In its original form, it was introduced as a defence against the criticism that the Gaia theory of the Earth as a self-regulating homeostatic system would require teleological control rather than being an emergent property. Its central premise, that living organisms can have major effects on the climate system, is no longer controversial. 
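Daisyworld’s feedback loop is compact enough to simulate directly. The sketch below is a stripped-down, white-daisies-only reading of Watson and Lovelock’s model (the flux, albedo and growth constants follow their published values, but the one-species reduction and the insulation parameter are simplifications of ours): daisies raise the planetary albedo and cool the planet, while their growth rate peaks at an optimum temperature, closing the loop.

```python
SIGMA = 5.67e-8             # Stefan-Boltzmann constant
FLUX = 917.0                # stellar flux at the planet, W/m^2
ALBEDO_GROUND = 0.5         # bare ground reflects half the light
ALBEDO_DAISY = 0.75         # white daisies reflect more, cooling the planet
DEATH_RATE = 0.3
Q = 20.0                    # simplified local-temperature insulation term

def step_daisyworld(area, luminosity, dt=0.05):
    """One Euler step: planetary temperature follows the energy balance,
    daisy growth peaks near 295.5 K, and daisy cover feeds back on albedo."""
    bare = 1.0 - area
    albedo = bare * ALBEDO_GROUND + area * ALBEDO_DAISY
    t_planet = (FLUX * luminosity * (1.0 - albedo) / SIGMA) ** 0.25
    t_local = Q * (albedo - ALBEDO_DAISY) + t_planet  # daisy patches run cooler
    growth = max(0.0, 1.0 - 0.003265 * (295.5 - t_local) ** 2)
    area += area * (bare * growth - DEATH_RATE) * dt
    return min(max(area, 0.01), 1.0), t_planet        # keep a seed population

def run(luminosity, steps=2000, area=0.01):
    for _ in range(steps):
        area, temp = step_daisyworld(area, luminosity)
    return area, temp
```

At a luminosity of 1.0 the daisies settle at roughly a third of the surface and hold the planet markedly cooler than the bare-ground equilibrium, the self-regulation the tutorial level let players watch unfold.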

Figure 6 – SimEarth full planetary model.

In SimEarth, the planet itself is alive, and the player is in charge of setting the initial conditions as well as maintaining and guiding the outcomes through the aeons. Once a civilisation emerges, the player can observe the various effects, such as the impacts of changes in atmospheric composition due to fossil fuel burning, or the temporary expansion of ice caps in the aftermath of a major nuclear war. SimEarth’s game box came with a 212-page game manual that was at once a comprehensive tutorial on how to play and an engrossing lesson in Earth sciences: ecology, geology, meteorology and environmental ethics, written in accessible language that anyone could understand.  

Figures 7&8 – Planet Garden simplified model and main game loop.

SimEarth, and serious simulation games in general, represent a way for games to serve a public-education function while remaining a form of popular entertainment. The genre is also a remarkable validation of claims that video games can be valuable cultural artifacts. Ian Bogost writes: “This was a radical way of thinking about video games: as non-fictions about complex systems bigger than ourselves. It changed games forever – or it could have, had players and developers not later abandoned modelling systems at all scales in favor of representing embodied, human identities.”[18] 

Lessons that architectural design can learn from these games are many and varied, the most important one being that it is possible to think about big topics by employing models and systems while maintaining an ethos of exploration, play and public engagement. In this sense, one could say that a simulation game format might be a contemporary version of Humboldt’s illustration, with the added benefit of interactivity; but as we have seen, there is a more profound, crucial difference – this format goes beyond just a representation, beyond just a fiction, into worldmaking.  

As a result of this research, the students in the seminar utilised Unreal Engine to create version one (v.1) of Planet Garden, a multi-scalar, interactive, playable model of a self-sustaining, wind and solar-powered robotic garden, set in a desert landscape. The simulation was envisioned as a kind of reverse city builder, where a goal of the game is to terraform a desert landscape by deploying different kinds of energy-producing technologies until the right conditions are met for planting and the production of oxygen. The basic game loop is based on the interaction between the player and four main resources: energy, water, carbon, and oxygen. In the seminar, we also created a comprehensive game manual. The aims of the project were to learn how to model dynamic systems and to explore how game workflows can be used as ways to address urban issues. 
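The basic game loop described above, in which player actions feed four coupled resources, can be paraphrased in a few lines. This is a hypothetical reconstruction from the description, not the seminar’s actual Unreal Engine code; the production rates and thresholds are invented for illustration.

```python
def planet_garden_tick(state, turbines, panels, plants):
    """One hypothetical tick of the Planet Garden resource loop:
    energy production -> water pumping -> planting -> oxygen."""
    state["energy"] += turbines * 2.0 + panels * 1.0      # invented rates
    pumped = min(state["energy"], 5.0)                    # energy spent on pumping
    state["energy"] -= pumped
    state["water"] += pumped * 0.5
    # plants consume one unit each of water and carbon, releasing oxygen
    active = min(plants, state["water"] // 1.0, state["carbon"] // 1.0)
    state["water"] -= active
    state["carbon"] -= active
    state["oxygen"] += active
    return state

state = {"energy": 0.0, "water": 0.0, "carbon": 100.0, "oxygen": 0.0}
for _ in range(20):
    planet_garden_tick(state, turbines=2, panels=3, plants=4)
```

Each tick, turbines and panels add energy, part of the energy budget pumps water, and planted vegetation converts water and carbon into oxygen, so terraforming progress can be read directly off the resource state.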

Planet Garden is projected to grow into a full game for the Getty exhibition: a simulation of a planetary ecosystem as well as a city for 10 billion people. We aim to model various aspects of the planetary city, and the player will be able to operate across multiple spatial sectors and urban scales. The player can explore different ways to influence the development and growth of the city and test many scenarios, but the game will also run on its own, so that the city can exist without direct player input. Our game utilises core design principles that relate to system dynamics, evolution, environmental conditions and change. A major point is the player’s input and decision-making process, which influence the outcome of the game; the game will also present the conditions and consequences of this urban thought experiment, as something is always at stake for the player.  

The core of the simulation-as-a-model idea is that design should have testable consequences. The premise of the project is not to construct a single truthful, total model of an environment, but to explore ways of imaging the world through simulation and to open new avenues for holistic thinking about the interdependence of actors, scales and world systems. If the internet ushered in a new age of billions of partial identitarian viewpoints, all aggregating into an inchoate world gestalt, is it time to rediscover a new image of the interconnected world? 

Figure 9 – Planet Garden screenshot, late game state.
Figures 10–16 – Planet Garden v.1.

References

[1] For a longer discussion on this, see O. M. Ungers, City Metaphors (Cologne: Buchhandlung Walther König, 2011). For the central place of analogies in scientific modelling, see M. Hesse, Models and Analogies in Science, and D. Hofstadter, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking (Basic Books, 2013). 

[2] The term “worldmaking” comes from Nelson Goodman’s book Ways of Worldmaking, and is used here to be distinguished from worldbuilding, a more narrow, commercially oriented term. 

[3] For a great introduction to the life and times of Alexander Von Humboldt, see A. Wulf, The Invention of Nature: Alexander von Humboldt’s New World (New York: Alfred A. Knopf, 2015).

[4] Quoted in H. G. Funkhouser, “Historical development of the graphical representation of statistical data”, Osiris 3 (1937), 269–404.

[5] T. J. Demos, Against The Anthropocene (Berlin: Sternberg Press, 2016).

[6] T. J. Demos, Against The Anthropocene (Berlin: Sternberg Press, 2016).

[7] Design Earth, Geostories (Barcelona: Actar, 2019) and The Planet After Geoengineering (Barcelona: Actar, 2021). 

[8] L. Young, Planet City (Melbourne: Uro Publications, 2020).

[9] For an extended discussion of the simulation as a format, see D. Jovanovic, “Screen Space, Real Time”, Monumental Wastelands 01, eds. D. Lopez and H. Charbel (2022). 

[10] S. Gualeni, Virtual Worlds as Philosophical Tools (Palgrave Macmillan, 2015). 

[11] For an extended discussion on this, see Clayton Ashley, The Ideology Hiding in SimCity’s Black Box, https://www.polygon.com/videos/2021/4/1/22352583/simcity-hidden-politics-ideology-urban-dynamics 

[12] W. Wright, Dynamics for Designers, GDC 2003 talk, https://www.youtube.com/watch?v=JBcfiiulw-8.

[13] D. H. Meadows, Thinking in Systems (White River Junction: Chelsea Green Publishing, 2008). 

[14] Arnaud M., “World2 model, from DYNAMO to R”, Towards Data Science, 2020, https://towardsdatascience.com/world2-model-from-dynamo-to-r-2e44fdbd0975.

[15] Wikipedia, “System Dynamics”, https://en.wikipedia.org/wiki/System_dynamics.

[16] J. W. Forrester, Urban Dynamics (Pegasus Communications, 1969).

[17] K. T. Baker, “Model Metropolis”, Logic 6, 2019, https://logicmag.io/play/model-metropolis.

[18] I. Bogost, “Video Games Are Better Without Characters”, The Atlantic (2015), https://www.theatlantic.com/technology/archive/2015/03/video-games-are-better-without-characters/387556.

Figure 10 – A satellite image of the Meeting of Waters in the Amazon region in Brazil. The original image shows the confluence of two rivers that flow together but do not mix. Pixel operations driven by agents change the composition of the landscape.
Situatedness: A Critical Data Visualisation Practice
Critical Practice, Data Feminism, Data Visualisation, Decolonisation, Situatedness
Catherine Griffiths

catgriff@umich.edu

Data and its visualisation have been an important part of architectural design practice for many years, from data-driven mapping to building information modelling to computational design techniques, and now through the datasets that drive machine-learning tools. In architectural design research, data-driven practices can imbue projects with a sense of scientific rigour and objectivity, grounding design thinking in real-world environmental phenomena.

More recently, “critical data studies” has emerged as an influential interdisciplinary discourse across social sciences and digital humanities that seeks to counter assumptions made about data by invoking important ethical and socio-political questions. These questions are also pertinent for designers who work with data. Data can no longer be used as a raw and agnostic input to a system of analysis or visualisation without considering the socio-technical system through which it came into being. Critical data studies can expand and deepen the practice of working with data, enabling designers to draw on pertinent ideas in the emerging landscape around data ethics. Data visualisation and data-driven design can be situated in more complex creative and critical assemblages. This article draws on several ideas from critical data studies and explores how they could be incorporated into future design and visualisation projects.

Critical Data Studies

The field of critical data studies addresses data’s ethical, social, legal, economic, cultural, epistemological, political and philosophical conditions, and questions the singularly scientific empiricism of data and its infrastructures. By applying methodologies and insights from critical theory, we can move beyond a status quo narrative of data as advancing a technical, objective and positivist approach to knowledge.

Historical data practices have promoted false notions of neutrality and universality in data collection, which has led to unintentional bias being embedded into data sets. The recognition that data is a political space was explored by Lisa Gitelman in “Raw Data” Is an Oxymoron, which argues that data never exists in a raw state, like a natural resource, but is always undergoing a process of interpretation.[1] The rise of big data is a relatively new phenomenon, and the harvesting of data from ever more extensive and nuanced facets of people’s lives marks a shift in the stakes of power asymmetry and ethics. Critical data studies ties this relationship between data and society together.

The field emerged from the work of Kate Crawford and danah boyd, who in 2012 formulated a series of critical provocations given the rise of big data as an imperious phenomenon, highlighting its false mythologies.[2] Rob Kitchin’s work has appraised data and data science infrastructures as a new social and cultural territory.[3] Andrew Iliadis and Federica Russo use the theory of assemblages to capture the multitude of ways that already-composed data structures inflect and interact with society.[4] These authors all seek to situate data in a socio-technical framework from which it cannot be abstracted. For them, data is an assemblage, a cultural text, and a power structure that must be available for interdisciplinary interpretation.

Data Settings and Decolonisation

Today, with the increasing access to large data sets and the notion that data can be extracted from almost any phenomena, data has come to embody a sense of agnosticism. Data is easily abstracted from its original context, ported to somewhere else, and used in a different context. Yanni Loukissas is a researcher of digital media and critical data studies who explores concepts of place and locality as a means of critically working with data. He argues that “data have complex attachments to place, which invisibly structure their form and interpretation”.[5] Data’s meaning is tied to the context from which it came. However, the way many people work with data today, especially in an experimental context, assumes that the origin of a data set does not hold meaning and that data’s meaning does not change when it is removed from its original context.

In fact, Loukissas claims, “all data are local”, and the reconsideration of locality is an important critical data tactic.[6] Asking where data came from, who produced it, when and why, what instruments were used to collect it, and what kind of audience it was intended for is a way of reckoning with a data set’s origin story, and with how these invisible attributes inform its composition and interpretation. Loukissas proposes “learning to analyse data settings rather than data sets”.[7] The term “data set” evokes a sense of the discrete, fixed, neutral and complete, whereas the term “data setting” counters these qualities and awakens us to a sense of place, time and the nuances of context.

From a critical data perspective, we can ask why we strive for the digital and its data to be so place-agnostic: a totalising system of norms that erases a myriad of cultures. The myth of placelessness in data implies that everything can be treated equally by immutable algorithms. Loukissas concludes, “[o]ne reason universalist aspirations for digital media have thrived is that they manifest the assumptions of an encompassing and rarely questioned free market ideology”.[8] We should insist upon data’s locality and its multiple, specific origins to resist such an ideology.

“If left unchallenged, digital universalism could become a new kind of colonialism in which practitioners at the ‘periphery’ are made to conform to the expectations of a dominant technological culture.

If digital universalism continues to gain traction, it may yet become a self-fulfilling prophecy by enforcing its own totalising system of norms.”[9]

Loukissas’ incorporation of place and locality into data practices comes from the legacy of postcolonial thinking. Where Western scientific knowledge systems have shunned those of other cultures, postcolonial studies have sought to illustrate how all knowledge systems are rooted in local- and time-based practices and ideologies. For educators and design practitioners grappling with how to engage in the emerging discourse of decolonisation in pedagogy, data practices and design, Loukissas’ insistence on reclaiming provenance and locality in the way we work with abstraction is one way into this work.

Situated Knowledge and Data Feminism

Feminist critiques of science have also invoked notions of place and locality to question the epistemological objectivity of science. The concept of situated knowledge comes from Donna Haraway’s work to envision a feminist science.[10] Haraway is a scholar of Science and Technology Studies and has written about how feminist critiques of masculinity, objectivity and power can be applied to the production of scientific knowledge, to show how knowledge is mediated by and historically grounded in social and material conditions. Situated knowledge can reconcile issues of positionality, subjectivity and their inherently contestable natures to produce a greater claim to objective knowledge, or what Sandra Harding has defined as “strong objectivity”.[11] Concepts of situatedness and strong objectivity are part of feminist standpoint theory. Patricia Hill Collins further proposes that the intersectional marginalised experiences of women and minorities – black women, for example – offer a distinctive point of view and experience of the world that should serve as a source for new knowledge that is more broadly applicable.[12]

How can we take this quality of situatedness from feminist epistemology and apply it to data practices, specifically the visualisation of data? In their book Data Feminism, Catherine D’Ignazio and Lauren Klein define seven principles for applying feminist thinking to data science. Principle six, for example, asks us to “consider context” when making sense of correlations in data.

“Rather than seeing knowledge artifacts, like datasets, as raw input that can be simply fed into a statistical analysis or data visualisation, a feminist approach insists on connecting data back to the context in which they were produced. This context allows us, as data scientists, to better understand any functional limitations of the data and any associated ethical obligations, as well as how the power and privilege that contributed to their making may be obscuring the truth.”[13]

D’Ignazio and Klein argue that “[r]efusing to acknowledge context is a power play to avoid power. It is a way to assert authoritativeness and mastery without being required to address the complexity of what the data actually represent”.[14] Data feminism is an intersectional approach to data science that counters the drive toward optimisation and convergence in favour of addressing the stakes of intersectional power in data.

Design Practice and Critical Data Visualisation

The visualisation of data is another means of interpreting it. Data visualisation is part of the infrastructure of working with data and should also be open to critical methods. Design and visualisation are processes through which data can either be treated with false notions of agnosticism and objectivity or be approached critically, with attention to positionality and context. Data practices that explore creative, speculative and aesthetics-forward techniques can extend and enrich the data artefacts produced. We should therefore critically reflect on the processes and infrastructures through which we design and aestheticise data.

How can we take the concept of situatedness that comes out of critical data studies and deploy it in creative design practice? What representational strategies support thinking through situatedness as a critical data practice? Could we develop a situated data visualisation practice?

The following projects approach these questions using design research, digital humanities and critical computational approaches. They are experiments that demonstrate techniques in thinking critically about data and how that critique can be incorporated into data visualisation. The work also expands upon the visualisation of data toward the visualisation of computational processes and software infrastructure that engineer visualisations. There is also a shift between exploring situatedness as a notion of physical territory toward a notion of socio-political situatedness. The following works all take the form of short films, animations and simulations.

Alluvium

Figure 1 – A situating shot of the Gower Gulch site, to capture both scales of assessment: wide-angle photography shows the geomorphological consequences of flood water on the landscape, whilst macro photography details the granular role of sedimentation.

Cinematic data visualisation is a practice of visually representing data that incorporates cinematic aesthetics: an awareness of photography’s traditional concerns of framing, motion and focus, combined with contemporary virtual cinematography’s techniques of camera-matching and computer-generated graphics. This process intertwines and situates data in a geographic and climatic environment, retaining the data’s relationship with its source of origin and the relevance that origin holds for its meaning.

As a cinematic data visualisation, Alluvium presents the results of a geological study of the impact of diverted flood waters on a sediment channel in Death Valley, California. The scenes take as their starting point Noah Snyder and Lisa Kammer’s 2008 study.[15] Gower Gulch is a 1941 diversion of a desert wash that offers an expedited view of geological changes that would normally take thousands of years to unfold, but which have evolved at this site within decades due to the strength of the flash floods and the conditions of the terrain.

Gower Gulch provides a unique opportunity to see how a river responds to an extreme change in water and sediment flow rates, presenting effects that could mimic the impact of climate change on river flooding and discharge. The wash was originally diverted to prevent further flooding and damage to a village downstream; today, it presents us with a microcosm of geological activity. The research paper presents data as historical water flow that can only be measured and perceived retrospectively through the evidence of erosion and sediment deposition at the site.

Figure 2 – A situated visualisation combining physical cinematography and virtual cinematography to show a particle simulation of flood waters. 

Alluvium’s scenes are a hybrid composition of film and digitally produced simulations that use the technique of camera-matching. The work visualises the geomorphological consequences of water beyond human-scale perception. A particle animation was developed using accurate topographic models to simulate water discharge over a significant period. Alluvium compresses this timeframe, providing a sense of a geological scale of time, and places the representation and simulation of data in-situ, in its original environment.
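The kind of particle step described can be sketched as follows (a hypothetical illustration of the general technique, not Alluvium’s actual solver): each particle accelerates down the local gradient of a heightfield, with a drag term keeping speeds bounded, and the resulting speed is what gets mapped to colour in a visualisation pass.

```python
import math

def step_particles(particles, height, dt=0.1, g=9.8, drag=0.5):
    """Advance particles over a heightfield: acceleration follows the
    downhill gradient (finite differences); drag bounds the speeds.
    Hypothetical sketch of the technique, not Alluvium's solver."""
    eps = 0.01
    for p in particles:
        # finite-difference slope of the terrain at the particle
        dzdx = (height(p["x"] + eps, p["y"]) - height(p["x"] - eps, p["y"])) / (2 * eps)
        dzdy = (height(p["x"], p["y"] + eps) - height(p["x"], p["y"] - eps)) / (2 * eps)
        p["vx"] += (-g * dzdx - drag * p["vx"]) * dt
        p["vy"] += (-g * dzdy - drag * p["vy"]) * dt
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt
        p["speed"] = math.hypot(p["vx"], p["vy"])   # later mapped to colour
    return particles

# a plane tilted upward in +x: particles should drift downhill, toward -x
terrain = lambda x, y: 0.2 * x
drops = [{"x": 1.0, "y": 0.0, "vx": 0.0, "vy": 0.0, "speed": 0.0}]
for _ in range(50):
    step_particles(drops, terrain)
```

Mapping each particle’s `speed` through a colour ramp then visualises force, directionality and turbulence, as in the project’s Figure 3.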

In Alluvium, data is rendered more accessible and palpable through the relationship between the computationally-produced simulation of data and its original provenance. The data’s situatedness takes place through the way it is embedded into the physical landscape, its place of origin, and how it navigates its source’s nuanced textures and spatial composition.

The hybridised cinematic style that is produced can be deconstructed into elements of narrative editing, place, motion, framing, depth of field and other lens-based effects. The juxtaposition of the virtual and the real through a cinematic medium supports a recontextualisation of how data can be visualised and how an audience can interpret that visualisation. In this case, it is about geographic situatedness, retaining the sense of physical and material qualities of place, and the particular nuances of the historical and climatic environment.

Figure 3 – The velocity of the particles is mapped to their colouration, visualising water’s characteristic force, directionality and turbulence. The simulation is matched to a particular site of undercut erosion, so that the particles appear to carve the physical terrain.

Death Valley National Park, situated in the Mojave Desert in the United States, is a place of extreme conditions. It has the highest temperature (57° Celsius) and the lowest altitude (86 metres below sea level) to be recorded in North America. It also receives only 3.8 centimetres of rainfall annually, registering it as North America’s driest place. Despite these extremes, the landscape has an intrinsic relationship with water. The territorial context is expressed through the cinematic whilst also connecting the abstraction of data to its place of origin.

For cinematic data visualisation, these elements are applied to the presentation of data, augmenting it into a more sensual narrative that loops back to its provenance. As a situated practice, cinematic data visualisation foregrounds a relationship with space and place: the connection between data and the context from which it was derived is retained, rather than the data being extracted, abstracted and agnostically transferred to a different context in which site-specific meaning is lost. It grapples with ways to foreground the relationship between the analysis and representation of data and the data’s environmental and local situation.

LA River Nutrient Visualization

Figure 4 – Reconstruction of the site of study, the Los Angeles River watershed from digital elevation data, combined with nutrient data from river monitoring sites.

Another project in the same series, the LA River Nutrient Visualization, considers how incorporating cinematic qualities into data visualisation can support a sense of positionality and perspective amongst heterogeneous data sets. This can be used to undermine data’s supposed neutrality and promote an awareness of data containing various concerns and stakes of different groups of people. Visualising data’s sense of positionality and perspective is another tactic to produce a sense of situatedness as a critical data visualisation practice. Whilst the water quality data used in this project appeared the same scientifically, it was collected by different groups: locally organised communities versus state institutions. The differences in why the data was collected, and by whom, have a significance, and the project was about incorporating that in the representational strategy of data visualisation.

This visualisation analyses nutrient levels, specifically nitrogen and phosphorus, in the water of the Los Angeles River, which testify to pollution levels and portray the river’s overall health. Analysed spatially and animated over time, the data visualisation aims to provide an overview of the available public data, its geographic, seasonal and annual scope, and its limitations. Three different types of data were used: surface water quality data from state and national environmental organisations, such as the Environmental Protection Agency and the California Water Science Center; local community-organised groups, such as the River Watch programme by Friends of the Los Angeles River and citizen science group Science Land’s E-CLAW project; and national portals for remotely-sensed data of the Earth’s surface, such as the United States Geological Survey.

The water quality data covers a nearly 50-year period, from 1966 to 2014, collected from 39 monitoring stations distributed from the river’s source to its mouth, including several tributaries. Analysis showed changes in the river’s health based on health department standards, with areas of significantly higher concentrations of nutrients that consistently exceeded Water Quality Objectives.

Figure 5 – Virtual cameras are post-processed to add lens-based effects such as shallow depth of field and atmospheric lighting and shadows. A low, third-person perspective is used to position the viewer with the data and its urban context.

The water quality data is organised spatially using a digital elevation model (DEM) of the river’s watershed to create a geo-referenced 3D terrain model that can be cross-referenced with any GPS-associated database. A DEM is a way of representing remotely-captured elevation, geophysical, biochemical, and environmental data about the Earth’s surface. The data itself is obtained by various types of cameras and sensors attached to satellites, aeroplanes and drones as they pass over the Earth.
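Cross-referencing a GPS-associated record against a DEM comes down to a simple affine mapping from geographic coordinates to raster row and column indices. A minimal sketch, with invented grid parameters rather than the project’s actual watershed model:

```python
def dem_index(lon, lat, origin_lon, origin_lat, cell_size):
    """Map a GPS coordinate onto DEM grid indices. Assumes a north-up
    grid whose top-left corner is (origin_lon, origin_lat), as in a
    typical GeoTIFF affine transform (rotation terms omitted)."""
    col = int((lon - origin_lon) / cell_size)
    row = int((origin_lat - lat) / cell_size)   # rows count down from the top
    return row, col

def elevation_at(dem, lon, lat, origin_lon, origin_lat, cell_size):
    row, col = dem_index(lon, lat, origin_lon, origin_lat, cell_size)
    return dem[row][col]

# toy 3x3 DEM covering 0.3 x 0.3 degrees; a monitoring site near the middle
dem = [[120, 110, 100],
       [ 90,  80,  70],
       [ 60,  50,  40]]
row, col = dem_index(-118.25, 34.15,
                     origin_lon=-118.4, origin_lat=34.3, cell_size=0.1)
# row 1, col 1 -> elevation 80
```

Real DEM files (e.g. GeoTIFFs) carry this transform in their metadata, including the rotation terms omitted here; once the indices are known, any GPS-tagged water sample can be draped onto the terrain model.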

Analysis of the water data showed that the state- and national-organised data sets provided a narrow and inconsistent picture of nutrient levels in the river. Comparatively, the two community-organised data sets offered a broader and more consistent approach to data collection. The meaning that emerged in this comparison of three different data sets, how they were collected, and who collected them ultimately informed the meaning of the project, which was necessary for a critical data visualisation.

Visually, the data was arranged and animated within the 3D terrain model of the river’s watershed and presented as a voxel urban landscape. Narrative scenes were created by animating slow virtual camera pans within the landscape to visualise the data from a more human, low, third-person point of view. These datascapes were post-processed with cinematic effects: simulating a shallow depth of field, ambient “dusk-like” lighting, and shadows. Additionally, the computer-generated scenes were juxtaposed with physical camera shots of the actual water monitoring sites, scenes that were captured by a commercial drone. Unlike Alluvium, the two types of cameras are not digitally matched. The digital scenes locate and frame the viewer within the data landscape, whereas physical photography provides a local geographic reference point to the abstracted data. This also gives the data a sense of scale and invites the audience to consider each data collection site in relation to its local neighbourhood. The representational style of the work overall creates a cinematic tempo and mood, informing a more narrative presentation of abstract numerical data.

Figure 6 – Drone-captured aerial video of each data site creates an in-situ vignette of the site’s local context and puts the data back into communication with its local neighbourhood. This also speaks to the visualisation’s findings that community organisation and citizen science was a more effective means of data collection and should be recognised in the future redevelopment of the LA River.

In this cinematic data visualisation, situatedness is engaged through the particular framing and points of view established in the scenes and through the juxtaposition of cinematography of the actual data sites. Here, place is social; it is about local context and community rather than a solely geographical sense of place. Cinematic aesthetics convey the “data setting” through a local and social epistemic lens, in contrast to the implied frameless and positionless view with which state-organised data is collected, including remotely-sensed data.

All the water data consisted of scientific measurements of nitrogen and phosphorus levels in the river. Numerically, the data is uniform, but the fact that different stakeholders collected it, with different motivations and needs, affects its interpretation. Furthermore, whether the data was collected by local communities or by state institutions informs its epistemological status concerning agency, motivation, and environmental care practices.

Context is important to the meaning that the data holds, and the visualisation strategy seeks to convey a way to think about social and political equity and asymmetry in data work. The idea of inserting perspective and positionality into data is an important one. It is unusual to think of remotely-sensed data or water quality data as having positionality or a perspective. Many instruments of visualisation present their artefacts as disembodied. Remotely-sensed data is usually presented as a continuous view from everywhere and nowhere simultaneously. However, feminist thinking’s conception of situated knowledge asks us to remember positionality and perspective to counter the sense of framelessness in the traditional tools of data collection and analysis.

Cinema for Robots

Figure 7 – A point cloud model of the site underneath the Colorado Street Bridge in Pasadena, CA, showing a single camera position from the original video capture.

Cinema for Robots was the beginning of an exploration of the system that visualises data, rather than of data visualisation itself as the outcome. Cinema for Robots presents a technique for considering how to visualise computational process, instead of presenting data as only a fixed and retrospective artefact. The project critically investigates the technique of photogrammetry, using design to reflexively consider positionality in the production of a point cloud. In this case, the quality of situatedness is created by countering the otherwise frameless point cloud data visualisation with animated recordings of the body’s position behind the camera that produced the data.

Photogrammetry is a technique in which a 3D model is computationally generated from a series of digital photographs of a space (or object). The photographs are taken systematically from many different perspectives and overlapping at the edges, as though mapping all surfaces and angles of the space. From this set of images, an algorithm can compute an accurate model of the space represented in the images, producing a point cloud. In a point cloud, every point has a 3D coordinate that relates to the spatial organisation of the original space. Each point also contains colour data from the photographs, similarly to pixels, so the point cloud also has a photographic resemblance. In this project, the point cloud is a model of a site underneath the Colorado Street Bridge in Pasadena, California. It shows a mixture of overgrown bushes and large engineered arches underneath the bridge.
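The data structure described here can be sketched in a few lines of Python. This is an illustrative model only, not code from the project; the coordinates and colours are invented.

```python
from dataclasses import dataclass

@dataclass
class CloudPoint:
    """One point in a point cloud: a 3D coordinate plus a colour
    sampled from the source photographs."""
    x: float
    y: float
    z: float
    r: int  # 0-255, carried over from the photographs like a pixel
    g: int
    b: int

# A point cloud is simply a large list of such points: spatial structure
# comes from the coordinates, photographic resemblance from the colours.
cloud = [
    CloudPoint(1.2, 0.4, 3.1, 142, 120, 96),  # e.g. a point on a brick arch
    CloudPoint(1.3, 0.4, 3.0, 58, 94, 47),    # e.g. a point on foliage
]

def bounding_box(points):
    """Spatial extent of the cloud along each axis, as (min, max) pairs."""
    xs = [p.x for p in points]
    ys = [p.y for p in points]
    zs = [p.z for p in points]
    return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))

print(bounding_box(cloud))
```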

Figure 8 – A perspective of the bridge looking upwards with two camera positions that animate upwards in sync with the video.

The image set was created from a video recording of the site from which still images were extracted. This image set was used as the input for the photogrammetry algorithm that produced the point cloud of the site. The original video recordings were then inserted back into the point cloud model, and their camera paths were animated to create a reflexive loop between the process of data collection and the data artefact it produced.

With photogrammetry, data, computation and representation are all entangled. Similarly to remotely-sensed data sets, the point cloud model expresses a framelessness, a perspective of space that appears to achieve, as Haraway puts it, “the god trick of seeing everything from nowhere”.[16] By reverse-engineering the camera positions and reinserting them into the point cloud of spatial data points, a reflexive computational connection is made between data that appears perspectiveless and the human body that produced it. In the series of animations comprising the project, the focus is on the gap between the capturing of data and the computational process that visualises it. The project also juxtaposes cinematic and computational aesthetics to explore the emerging gaze of new technologies.

Figure 9 – Three camera positions are visible and animated simultaneously to show the different positions of the body capturing the video that was the input data for the point cloud.

The project is presented as a series of animations that embody and mediate a critical reflection on computational process. In one animation, the motion of a hand-held camera creates a particular aesthetic that further accentuates the body behind the camera that created the image data set. It is not a smooth or seamless movement but unsteady and unrefined. This bodily camera movement is then passed on to the point cloud model, rupturing its seamlessness. The technique is a way to reinsert the human body and a notion of positionality into the closed-loop of the computational process. In attempting to visualise the process that produces the outcome, reflexivity allows one to consider other possible outcomes, framings, and positions. The animations experiment with a form of situated computational visualisation.

Automata I + II

Figure 10 – A satellite image of the Meeting of Waters in the Amazon region in Brazil. The original image shows the confluence of two rivers that flow together but do not mix. Pixel operations driven by agents change the composition of the landscape.

This work took the form of a series of simulations that critically explored a “computer vision code library” in an open-ended way. The simulations continued an investigation into computational visualisation rather than data visualisation. The process sought to reverse-engineer machine vision software – an increasingly politically contentious technology – and critically reflect on its internal functionality. Here, source code is situated within a social and political culture rather than a neutral and technical culture. Instead of using a code library instrumentally to perform a task, the approach involves critically reading source code as a cultural text and developing reflexive visualisations that explore its functions critically.

Many tools we use in design and visualisation were developed in the field of computer vision, which engineers how computers see and make sense of the world, including through the camera-tracking and photogrammetry discussed previously. In Automata I, the OpenCV library (an open-source computer vision code library) was used. Computer vision comprises many functions layered on top of each other, acting as matrices that filter and analyse images in different ways to make them interpretable by algorithms. Well-known filters include “blob detection” and “background subtraction”. Simply converting a colour image to greyscale is also an important function within computer vision.
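As a minimal sketch of the simplest function mentioned above, greyscale conversion can be written as a per-pixel weighted sum. The image below is invented for illustration, and this is not the project’s code, but the luminance weights are the standard ITU-R BT.601 values that OpenCV’s cvtColor also uses.

```python
def to_greyscale(image):
    """Map each (r, g, b) pixel to a single luminance value using the
    standard ITU-R BT.601 weights."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

# A 2x2 "image": pure red, pure green, pure blue, and white.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]
print(to_greyscale(image))  # [[76, 150], [29, 255]]
```

Note how the green channel dominates the result, mirroring the human eye’s sensitivity; the “algorithmic view” is already shaped by a model of human vision.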

Figure 11 – A greyscale filter shows the algorithmic view of the same landscape and computational data.

Layering these filters onto input images helps to understand the difference between how humans see and interpret the world and how an algorithm is programmed to see and interpret it differently. Reading the code makes it possible to understand the pixel logic at play in the production of a filter, in which each pixel in an image computes its values based on the pixel values around it, producing various matrices that filter information in the image. The well-known “cellular automata” algorithm and “Langton’s ant” both apply a comparable logic.
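To make this agent-driven pixel logic concrete, the following is a minimal implementation of Langton’s ant, mentioned above: a single agent that reads the cell beneath it, turns, flips the cell’s value and moves on, producing a global pattern from a purely local rule. The grid size and step count here are arbitrary choices for illustration, not parameters from the project.

```python
def langtons_ant(size=11, steps=50):
    """Run Langton's ant on a size x size binary grid and return the grid."""
    grid = [[0] * size for _ in range(size)]  # 0 = white, 1 = black
    x = y = size // 2                         # start in the centre
    dx, dy = 0, -1                            # facing "up" (y grows downward)
    for _ in range(steps):
        if grid[y][x] == 0:
            dx, dy = -dy, dx                  # on a white cell: turn 90° right
        else:
            dx, dy = dy, -dx                  # on a black cell: turn 90° left
        grid[y][x] = 1 - grid[y][x]           # flip the cell underfoot
        x = (x + dx) % size                   # step forward, wrapping at edges
        y = (y + dy) % size
    return grid

grid = langtons_ant()
print(sum(cell for row in grid for cell in row), "cells are black")
```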

A series of simulations were created using a satellite image of a site in the Amazon called the Meeting of Waters, which is the confluence of two rivers, the dark-coloured Rio Negro and the sandy-coloured Amazon River. Each river has different speeds, temperatures and sediments, so the two rivers do not merge but flow alongside each other in the same channel, visibly demarcated by their different colours.

The simulations were created by writing a new set of rules, or pixel logics, to compute the image, which had the effect of “repatterning” it. Analogously, this also appeared to “terraform” the river landscape into a new composition. The simulations switch between the image that the algorithm “sees”, including the information it uses to compute and filter the image, and the image that we see as humans, including the cultural, social and environmental information we use to make sense of it. The visualisation tries to explore the notion of machine vision as a “hyperimage”, an image that is made up of different layers of images that each analyse patterns and relationships between pixels.

Automata II is a series of simulations that continue the research into machine vision techniques established in Automata I. This iteration looks further into how matrices and image analysis combine to support surveillance systems operating on video images. By applying pixel rule sets similar to those used in Automata I, the visualisation shows how the algorithm can detect motion in a video, separating figures in the foreground from the background, the operation on which video surveillance depends.

Figure 12 – Using the OpenCV code library to detect motion, a function in surveillance systems. Using a video of a chameleon, the analysis is based on similar pixel operations to Automata I.

In another visualisation, a video of a chameleon works analogously to explore how the socio-political function of surveillance emerges from the mathematical abstraction of pixel operations. Chameleons are well-known for their ability to camouflage themselves by blending into their environment (and in many cultures are associated with wisdom). Here the algorithm is programmed to print the pixels when it detects movement in the video and to remain black when there is no movement. In the visualisation, the chameleon appears to reveal itself to the surveillance of the algorithm through its motion and to camouflage itself from the algorithm through its stillness. An aesthetic contrast is created between an ancient animal and the innovative technology that captures it; yet the chameleon resists the algorithm’s logic of separating background from foreground through its simple embodiment of stillness.
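The rule described here can be sketched as simple frame differencing: a pixel is revealed only where it differs from the previous frame beyond a threshold, and is otherwise black. The frames and threshold below are invented for illustration; OpenCV’s background-subtraction functions rely on more sophisticated statistical models of the background, but the underlying pixel logic is of this kind.

```python
def reveal_motion(prev_frame, frame, threshold=10):
    """Return a frame in which changed pixels keep their value and
    still pixels are set to 0 (black)."""
    return [
        [px if abs(px - prev_px) > threshold else 0
         for prev_px, px in zip(prev_row, row)]
        for prev_row, row in zip(prev_frame, frame)
    ]

# Two tiny greyscale "frames": one pixel moves (changes value); the
# others stay still and so are blacked out, like the motionless chameleon.
prev_frame = [[120, 120], [40, 200]]
frame      = [[120, 180], [40, 205]]
print(reveal_motion(prev_frame, frame))  # [[0, 180], [0, 0]]
```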

Figure 13 – The algorithm was reconfigured to only reveal the pixel operations’ understanding of movement. The chameleon disguises or reveals itself to the surveillance algorithm through its motion.

The work explores the coded gaze of a surveillance camera and how machine vision is situated in society, politically and apolitically, in relation to the peculiarly abstract pixel logics that drive it. Here, visualisation is a reverse-engineering of that coded gaze in order to politically situate source code and code libraries for social and cultural interpretation.

Final Thoughts

Applying critical theory to data practices, including data-driven design and data visualisation, provides a way to interrupt the adherence to the neutral-objective narrative. It offers a way to circulate data practices more justly back into the social, political, ethical, economic, legal and philosophical domains from which they have always derived. The visual techniques presented here, and the ideas about what form a critical data visualisation practice could take, were neither developed in tandem nor sequentially, but by weaving in and out of project developments, exhibition presentations, and writing opportunities over time. Thus, they are not offered as seamless examples but as entry points and options for taking a critical approach to working with data in design. The proposition of situatedness as a territorial, social, and political quality that emerges from decolonial and feminist epistemologies is one pathway in this work. The field of critical data studies, whilst still incipient, is developing a rich discourse that is opportune and constructive for designers, although not immediately associated with visual practice. Situatedness as a critical data visualisation practice has the potential to further engage the forms of technological development interesting to designers with the ethical debates and mobilisations in society today.

References

[1] L. Gitelman, “Raw Data” is an Oxymoron (Cambridge, MA: MIT Press, 2013).

[2] d. boyd and K. Crawford, “Critical Questions for Big Data: provocations for a cultural, technological, and scholarly phenomenon”, Information, Communication & Society 15 5 (2012), 662–79.

[3] R. Kitchin, The Data Revolution: big data, open data, data infrastructures & their consequences (Los Angeles, CA: Sage, 2014).

[4] A. Iliadis and F. Russo, “Critical Data Studies: an introduction”, Big Data & Society 3 2 (2016).

[5] Y. A. Loukissas, All Data are Local: thinking critically in a data-driven world (Cambridge, MA: MIT Press, 2019), 3.

[6] Ibid, 23.

[7] Ibid, 2.

[8] Ibid, 10.

[9] Ibid, 10.

[10] D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.

[11] S. Harding, “‘Strong objectivity’: A response to the new objectivity question”, Synthese 104 (1995), 331–349.

[12] P. H. Collins, Black Feminist Thought: consciousness and the politics of empowerment (London, UK: HarperCollins, 1990).

[13] C. D’Ignazio and L. F. Klein, Data Feminism (Cambridge, MA: MIT Press, 2020), 152.

[14] Ibid, 162.

[15] N. P. Snyder and L. L. Kammer, “Dynamic adjustments in channel width in response to a forced diversion: Gower Gulch, Death Valley National Park, California”, Geology 36 2 (2008), 187–190.

[16] D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.

[4] Infrastructure for subsurface ecologies. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland).
Governing the Ground: Architecture v. the Rights of the Land 
Biological Diversity, Governing, land ownership, land rights, Rights of nature, Sustainable Development
Andrew Toland

andrew.toland@uts.edu.au
Read Article: 4382 Words

Until recently, nature was wholly outside the law.[1] At most, it was property of one sort or another – to be bought and sold, securitised and commodified, and especially, in the old-fashioned phrase of the English common law, “improved”. Other “laws” – of physics, chemistry and biology – are not of consequence in this realm of capital “L” Law,[2] exempted because of their exceptionalism. Humans are distinct from and superior to other animals, a situation the Canadian environmental lawyer and academic David R. Boyd describes as “at odds with reality … any biologist will tell you that humans are animals”.[3] Black’s Law Dictionary, the dominant legal lexicon in North America, is at pains to point out that the legal definition of animals “includes all living creatures not human”.[4] Similarly, architecture presented itself as standing apart from nature. “Architecture, unlike the other arts, does not find its patterns in nature”, claimed Gottfried Semper in 1834.[5] Or Louis Kahn in 1969: “What man makes, nature cannot make.”[6] In what is ultimately a form of the cosmology of the modern, law and architecture sit apart from and superior to nature. Design, like the economic activities to which law gives its support, is about subduing nature and turning it to productive ends. In this model, both are methods of human governance of the natural world. 
Indeed, for centuries, architecture was among the key pieces of evidence cited for human exceptionalism – buildings and cities, just as in Laugier’s original parable of the hut as the first example of architecture, allowed humans to transcend the state of nature.[7, 8] At times, this line of Western thought had deeply pernicious consequences for other peoples throughout the world, as the presence or absence of architecture, as well as agricultural cultivation, became one of the key legal determinants that permitted European colonisers to expropriate the lands of indigenous peoples.[9] Architecture was thus enfolded into the law’s methods for imposing governance over unfamiliar lands and peoples, just as it structured the dominance over nature. But what would it mean, for architecture no less than for the law, if – as one of the provocations suggested by the editors of this journal proposes – nature were to govern itself? Developments in legal theory over the past several decades, as well as a handful of legal cases that have received wide media coverage, now allow us to consider this novel possibility. This article considers the rise of this “rights of nature” jurisprudence from the perspective of architecture and landscape architecture, with particular attention paid to the emergence of the (literal) law of “the land”, as well as what this emerging way of thinking about the natural world and its life and systems might mean for the design of the very ground itself. 

Media reporting on high profile lawsuits or settlements where legal standing has been claimed (and in some cases recognised) for landscapes, ecosystems and rivers, to enable them to sue as plaintiffs, has drawn attention to the rights of nature and related claims as strategies to protect ecosystems or seek accountability for environmental damage and destruction. This has involved instances as diverse as the Whanganui River in New Zealand,[10] the Ganges and Yamuna Rivers and the Gangotri and Yamunotri glaciers in India,[11, 12] the Colorado River in the United States,[13] the Amazon rainforest in Colombia,[14] and the Paraná Delta wetlands in Argentina.[15] In addition, by the start of 2021, 178 legal provisions derived from rights of nature legal theory had been documented in seventeen countries across five continents, with an additional thirty-seven under consideration in ten more countries. Rights of nature has also found expression in a range of international legal instruments, such as the United Nations’ 2030 Agenda for Sustainable Development, the Convention on Biological Diversity, and in the jurisprudence of the Inter-American Court of Human Rights.[16] These approaches have their origins in the relatively recent fields of “earth jurisprudence” and “wild law”.[17] Many of their arguments derive from the disjunction that has emerged between the law and advances in the ecological sciences: a critique of legal doctrines trapped in the discrete and mechanistic model of the natural world developed during the scientific revolution of the sixteenth and seventeenth centuries, when these foundational areas of the law were also fundamentally consolidated.[18] In contrast, earth jurisprudence and wild law seek to orient the law towards a scientific model of the world as made up of dynamic organic and material interrelationships, and away from anthropocentrism, the subordination of the environment in the form of “property”, and economic notions of ever-expanding “growth”.[19]

Figure 1 – Elements of the subterranean biome. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

Beyond this, the legal presumptions that give rise to the longstanding juridical status of nature also provide the conceptual structure within which the defining actions of modernity, including design, have occurred. The systems of procurement of architecture, landscape architecture, and urban and landscape planning and design all fundamentally depend on the system of property: on who has legal control or dominion over land, and the right to “exploit” its resources (a much more neutral term in legal parlance, but one which nonetheless opens the door to acts with far more negative and damaging consequences). Whether issued by individuals, corporations or the state, any design commission granted to an architect or landscape architect requires the commissioner to have the right to “improve” (again, in the sense of the archaic language of the law) the land in the first place. Before embarking on a further consideration of what the rights of nature might mean for design disciplines concerned with built and natural environments, it is worth examining in some detail how the very legal conceptualisation of the ground itself also involved the basic activities of architecture and landscape design.

From the sixteenth century onwards, in English common law, one of the fundamental precepts governing land (and who had the right to do what on, under and above the ground) was encapsulated in the Latin legal dictum, Cuius est solum, eius est usque ad coelum et ad infernos: “Whoever’s is the soil, it is theirs all the way to Heaven and all the way to Hell.”[20] The earliest recorded judicial authority for this approach has its origins in a basic architectural dispute. Sometime around 1586, an English landowner somewhere in Oxfordshire constructed a house blocking the light and views his neighbour had enjoyed for some three to four decades. The neighbour sued. The record of the judgment in that lawsuit, Bury v Pope, is a scant 123 words long and can be quoted in full:

“Case for stopping of his light.-It was agreed by all the justices, that if two men be owners of two parcels of land adjoining, and one of them doth build a house upon his land, and makes windows and lights looking into the other’s lands, and this house and the lights have continued by the space of thirty or forty years, yet the other may upon his own land and soil lawfully erect an house or other thing against the said lights and windows, and the other can have no action ; for it was his folly to build his house so near to the other’s land: and it was adjudged accordingly. 

Nota. Cujus est solum, ejus est summitas usque ad cœlum.”[21]

The final nine words echo down the centuries, certainly in the areas of the world touched by English common law, from mineral rights in Native American lands to mining leases in postcolonial Africa to tricky jurisdictional questions over carbon capture and storage. The careful reader will note that “et ad infernos” (“and to hell/the underworld”) does not appear in the original Latin maxim at the end of the report of the original judgment. And yet by the eighteenth and nineteenth centuries, the common law doctrine, which has variously been claimed to have its origins in Roman or Jewish Law, had come to be accepted as applying to rights both above and below an owner’s land. It is no coincidence that by this time claims and rights related to the extraction of mineral resources were of huge economic importance. In English common law, the parameters of land and land ownership, as originally conceived, emerged as spatially absolute – it could not conceive of more intricate frameworks of interests or custodianship in which different parties or, indeed, different beings might share in the rights and responsibilities for the use and care of a given territory.  

Figure 2 – Surface/subsurface reciprocities. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

A few decades later, this fundamental principle of the law of Land (Terra, as presented in its Latin formulation) was elaborated in telling detail by the great systematiser of early modern jurisprudence, the Elizabethan jurist Sir Edward Coke. Again, it is worth scrutinising how Coke first presented this legal approach to the land; in essence, it depends on a set of presumptions of human habitation within the material environment that we can also see establishing the modern foundations of dwelling, and of designing the land in which that dwelling occurs (with land that can be built upon being accorded a special privilege):

“Terra, in the legal signification comprehended any ground, soil, or earth whatsoever; as meadows, pastures, wood, moores, waters, marshes, furses and heath. Terra est Nomen generalissimum, et comprehendit omnes species terra; but properly terra dicitur a terendo, quia vomere teritur; and anciently it was written with a single r; and in that sense it includeth whatsoever may be plowed; and is all one with arvum ab arando. It legally includeth also all castles, houses, and other buildings: for castles, houses, &c. consist upon two things, viz. land or ground, as the foundation or structure therewith, so that in passing the land or ground, the structure or building thereupon passeth therewith. Land is anciently called Fleth; but land builded on is more worthy than other land, because it is for the habitation of man, and in that respect hath the precedency to be demanded in the first place in a Præcipe, as hereafter shall be said.”[22] 

It is habitation that conveys rights; that is the source of law and governance over land and the expropriation of its material resources: 

“And therefore this element of earth is preferred before the other elements: first and principally, because it is for the habitation and resting-place of man; for man cannot rest in any of the other elements, neither in the water, are, or fire. For as the heavens are the habitation of Almightie God, so the earth hath he appointed as the suburbs of heaven to be the habitation of man; Cœlum cœli domino, terram autum dedit filiis hominum. All the whole heavens are the Lord’s, the earth hath he given to the children of men. Besides, every thing, as it serveth more immediately or more meerly for the food and use of man (as it shall be said hereafter), hath the precedent dignity before any other. And this doth the earth, for out of the earth cometh man’s food, and bread that strengthens man’s heart, confirmat cor hominis, and wine that gladdeth the heart of man, and oyle that makes him a cheerful countenance; and therefore terra olim Ops mater dicta est, quia omnia hac opus habent ad vivendum. And the Divine agreeth herewith for he saith, Patrium tibi & nutricem, & matrem, & mensam, & domum posuit rerram Deus sed & sepulchre tibi hanc eandem dedir. Also, the waters that yeeld fish for the food and sustenance of man and are not by that name demandable in a Præcipe.”[23] 

The ownership of control of the surface of the land is then expanded into a fully three-dimensional envelope of property, governance and control: 

“… but the land whereupon the water floweth or standeth is demandable (as for example) viginti acr’ terræ aqua coopert’, and besides, for the earth doth furnish man with many other necessaries for his use, as it is replenished with hidden treasures; namely gold, silver, brasse, iron, tynne, leade, and other metals, and also with a great variety of precious stones, and many other things for profit, ornament, and pleasure. And lastly, the earth hath in law a great extent upwards, not only of water, as hath been said, but of ayre and all other things even up to the heaven; for cujus est solum ejus est usque ad coelum, as it is holden.”[24] 

Although the subsurface is not explicitly mentioned in the Latin dictum, it has always been the presumption that the rights of land extend down as well as upwards, which is made plain by Coke’s express discussion of mining (an increasingly important economic activity in both Elizabethan and Jacobean England) and the expanding global conquests of the European empires. 

Figure 3 – Sydney basin soil sampling. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

Less than a century later, the importance of subsuming any disorderly expressions of nature on landed property – a theory of landscape design that had been developing across the course of the seventeenth century – was famously crystallised in Joseph Addison’s influential essay on the landscape garden, “On the Pleasures of the Imagination”;[25] property and design fused in his dictum: “a Man might make a pretty Landskip of his own Possessions.”[26] Over subsequent centuries, and especially in the context of European colonialism, it became almost an imperative that land be improved by “art” in order to justify its expropriation and its incorporation into a totalising world economic system.[27] As Sir William Blackstone, Coke’s heir as juridical systems-builder and the most influential legal systematiser from the end of the eighteenth century onwards, wrote: “The Earth, and all things herein, are the general property of mankind, exclusive of other beings, from the immediate gift of the creator.”[28]

Blackstone himself was a great architectural enthusiast and, indeed, an architectural critic and draftsperson, author of An Abridgment of Architecture (1743) and Elements of Architecture (1746-7).[29] In classical architecture, Blackstone saw the highest expression of a system of universal laws that surpassed the disorderliness of the natural world. Here, his model was the science of mathematics, not the natural sciences; it was the former that gave architecture access to a plane of being beyond the worldly, the realm of Beauty and Nobility, “the flower and crown of all sciences mathematical”. Classical architecture provided Blackstone with his model for his efforts to renovate and remodel English common law, to rescue it from its fate, “like other venerable edifices of antiquity, which rash and unexperienced workmen have ventured to new-dress and refine, with all the rage of modern improvement … it’s [sic] symmetry … destroyed, it’s proportions distorted, and it’s majestic simplicity exchanged for specious embellishments and fantastic novelties”.[30] Just as the architect must work to restore symmetry, proportion, and majestic simplicity to a grand manor fallen into decay, “mankind [sic]” was duty-bound to elevate “his [sic]” property of the entire earth through the improvements of art and science. Blackstone’s distaste for “modern improvement” did not preclude him from writing elsewhere of the inherited law as “an old Gothic castle” that needed to be “but fitted up for a modern inhabitant … converted into rooms of convenience, … chearful [sic] and commodious”.[31] 

Figure 4 – Infrastructure for subsurface ecologies. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

The totalising thrust of Western property law as a law of land has led some designers concerned with environment and ecology to seek out spaces beyond the law itself, rather than, like the theorists of earth jurisprudence and wild law, to find space within it. The landscape architect Gilles Clément has deliberately sought out land literally outside the jurisdiction and operations of the law and its various systems of governance and administration. His notion of le tiers paysage is about land:

“… forgotten by the cartographer, neglected by politics, undefined spaces, devoid of function that are difficult to name; an ensemble … located on the margins. On the edge of the woods, along the roads and rivers, in the forgotten corners of the culture, in the places where machines do not go. It covers areas of modest size, scattered like the lost corners of a field; unitary and vast like peat bogs, moors and wastelands resulting from recent abandonment. 

There is no similarity of form between these fragments of landscape. They have only one thing in common: they all provide a refuge for diversity. Everywhere else, diversity is driven out. 

This justifies bringing them together under a single term. I propose ‘Third Landscape’ …”[32] 

The passage is striking, especially when we compare it to Coke, whose aim was to bring those very landscapes – “meadows, pastures, wood, moores, water, marshes, furses and heath” – within the remit of the law. For Clément, it is the very fact that the latter types of landscape, especially, have been so difficult to govern, to bring within law’s jurisdictional ambit, that makes them such rich sources of biodiversity – nature’s outlaw territories. It is these territories that ought to provide a model for designers (and his preferred model for the designer in question is not the architect or landscape architect, but the gardener, who “creates a landscape by following it over time, using horticultural and environmental maintenance techniques. … But above all, it is about life”).[33] 

Figure 5 – Infrastructure for subterranean biodiversity. Alex Duff, University of Technology Sydney, Master of Landscape Architecture Thesis, 2021 (supervisor: Dr Andrew Toland). 

But if nature itself has rights, if it is recognised as having agency and self-determination in the manner put forward by the earth jurisprudence and wild law movements, then designers may not need to – and, increasingly, cannot – escape into a third landscape. As other theorists have pointed out, nature is always part of the social. Beyond the well-known position of Bruno Latour in We Have Never Been Modern, other theorists have noted the ways in which “the entities that compose arrangements have a physiochemical composition and are, accordingly, part of the greater physiochemical stratum in which material entities are linked”.[34] In other words, society and culture have a “physicality”, and a large part of that physicality is defined by the bio- and physiochemical processes of “nature”. In this sense, even anthropogenic climate change is a kind of revenge of nature, whose processes have turned against us. In a more everyday sense, “The properties of wood, for instance, lay down sequences of actions that must be followed if trees are to be felled, axe handles produced, animals clubbed, houses built, and paper produced”.[35] 

There is no escaping our material realities and the dynamics they define. The question is how to enter into and think of ways to reconfigure those “sequences of actions” – in other words, how to design. Material properties are not absolutely deterministic. It is not just a matter of asking the brick, à la Louis Kahn.[36] Instead, the design possibilities that come from the rights of nature simply begin to open up the field for a set of political claims about the appropriate status and interrelationship between humans, societies and the non-human environment, by codifying those claims in a form that other models of organising human activities are forced to recognise. As in debates over the political, social, economic and cultural rights of humans, the language of rights is simply part of an ongoing political contestation over claims and obligations.[37] We might begin, for example, by using the very same premises as Coke, considering what design might mean in the realm of terra itself – “ground, soil, or earth whatsoever” – if that very ground also had self-determining rights, and could govern itself, irrespective of what our “designs” upon it might be. A recent piece in Nature Climate Change draws attention to the extent to which subterranean ecosystems have generally been overlooked in biodiversity and climate change mitigation agendas.[38] This zone, “likely the most widespread non-marine environment on Earth,” remains largely a terra incognita. In cities, the upper layers of the urban soil (the “A and B horizons”) are highly “disturbed” and often “depauperated”, if not directly contaminated with anthropogenic chemicals and other wastes.[39] Various projects have drawn attention to the task of recovering urban and other post-anthropogenic soils.[40] But an equally important shift may simply be in opening up the legal definition of “land” and the cluster of rights and obligations that have been constructed around it. 
Rather than a conceptual tabula rasa simply to be built upon, if we came to recognise land as the lively subterranean biome it in fact is, and if that biome might be recognised as having rights and claims of its own, then design might be forced to take a very different turn. Even the most vacant of plots would come to seem not so vacant, after all. 

References 

[1] Admittedly, this assertion is phrased in a universalist register. The reality is that what is being referred to is Western, and, latterly, international, legal constructs, that have provided the dominant model for legal thinking across almost all jurisdictions that form the basis for land law in the early twenty-first century. 

[2] C. Kauffman and P. Martin, The Politics of Rights of Nature: Strategies for Building a More Sustainable Future (Cambridge, MA: The MIT Press, 2021), 4. 

[3] D. Boyd, The Rights of Nature: A Legal Revolution That Could Save the World, (Toronto: ECW Press, 2017), xxv. 

[4] Ibid, xxv. 

[5] Quoted in A. Forty, Words and Buildings: A Vocabulary of Modern Architecture (London: Thames & Hudson, 2000), 220. 

[6] Ibid, 220. 

[7] O. Verkaaik, “Creativity and Controversy in a New Anthropology of Buildings”, Ethnography 17(1) (2015), 135–143. Recent work in anthropology has explicitly challenged this premise, as in the work of Tim Ingold discussed by Verkaaik: T. Ingold, “Building, Dwelling, Living: How Animals and People Make Themselves at Home in the World”, 172–188. In Tim Ingold, ed., The Perception of the Environment: Essays on Livelihood, Dwelling and Skill (London: Routledge, 2000). 

[8] M. Laugier, An Essay on Architecture, trans. Wolfgang Herrmann and Anni Herrmann (Los Angeles: Hennessey & Ingalls, 1977). 

[9] S. Banner, “Why Terra Nullius? Anthropology and Property Law in Early Australia”, Law and History Review, 23(1) (2005), 95–132 at 107. 

[10] Te Awa Tupua (Whanganui River Claims Settlement) Act 2017 (NZ). 

[11] Mohd Salim v State of Uttarakhand & others, WPPIL 126/2014 (High Court of Uttarakhand), 2017. 

[12] Lalit Miglani v State of Uttarakhand & others, WPPIL 140/2015 (High Court of Uttarakhand), 2017. 

[13] Colorado River Ecosystem v State of Colorado, 1:17-cv-02316 (U.S. Colorado Federal Court), 2017. 

[14] Demanda Generaciones Futuras v Minambiente, STC4360-2018 (Supreme Court of Colombia), 2018. 

[15] Asociación Civil por la Justicia Ambiental v. Province of Entre Ríos, et al., (Supreme Court of Argentina), 2020. 

[16] C. Kauffman and P. Martin, The Politics of Rights of Nature: Strategies for Building a More Sustainable Future (Cambridge, MA: The MIT Press, 2021), 2. 

[17] As represented, especially, in the work of T. Berry, “Rights of Earth: We Need a New Legal Framework Which Recognises the Rights of All Living Beings,” 227–229. P. Burdon, ed., Exploring Wild Law: The Philosophy of Earth Jurisprudence (Kent Town, South Australia: Wakefield Press, 2011); C. Cullinan, Wild Law: A Manifesto for Earth Justice, 2nd ed. (Totnes, UK: Green Press, 2011); and P. Burdon, Earth Jurisprudence: Private Property and the Environment (London: Routledge, 2014). 

[18] C. Kauffman and P. Martin, The Politics of Rights of Nature: Strategies for Building a More Sustainable Future (Cambridge, MA: The MIT Press, 2021), 4–5. 

[19] D. Boyd, The Rights of Nature: A Legal Revolution That Could Save the World, (Toronto: ECW Press, 2017), xxii–xxiii. 

[20] Jackson Municipal Airport Authority v. Evans, 191 So. 2d 126, 128 (Miss. 1966). 

[21] Bury v Pope (1586) Cro Eliz 118; 78 ER 375. 

[22] Coke on Littleton (1628–1644), 4a. 

[23] Ibid. 

[24] Ibid. 

[25] J. Addison, Spectator, III, Nos 411–421 (21 June–3 July 1712), 535. 

[26] Ibid. 

[27] For example, the first landscape designer in Australia, Thomas Shepherd, advocated for the use of English “landscape gardening” principles to be used to improve Crown land in order to attract foreign capital investment: see T. Shepherd, Lectures on Landscape Gardening in Australia (Sydney: William M’Garvie, 1836). 

[28] W. Blackstone, Commentaries on the Laws of England in Four Books, Book III (Philadelphia: J.B. Lippincott Company, 1893; orig pub 1765), 2. 

[29] C. Matthews, “Architecture and Polite Culture in Eighteenth-Century England: Blackstone’s Architectural Manuscripts” (unpublished dissertation, School of History and Politics, University of Adelaide, 2007); W. Prest, “Blackstone as Architect: Constructing the Commentaries,” Yale Journal of Law & the Humanities, 15(1) (2003), 103–133. 

[30] W. Blackstone, Commentaries on the Laws of England in Four Books, Book I (Philadelphia: J.B. Lippincott Company, 1893; orig pub 1765), 8. 

[31] Ibid, Book III, 268. 

[32] G. Clément, Manifeste du tiers paysage (Paris: Éditions du commun, 2016), 14. 

[33] G. Clément, Gardens, Landscape and Nature’s Genius, trans Elzélina Van Melle (Risskov, Denmark: IKAROS Press, 2020), 19–20. 

[34] T. Schatzki, “Nature and Technology in History,” History and Theory 42(4) (2003), 88–89. 

[35] Ibid, 89. 

[36] Quoted in S. Turkle, Simulation and its Discontents (Cambridge, MA: The MIT Press, 2009), 86 n 4. 

[37] Marie-Bénédicte Dembour, “Human Rights Talk and Anthropological Ambivalence: The Particular Contexts of Universal Claims,” 17–32. Olivia Harris, ed., Inside and Outside the Law: Anthropological Studies of Authority and Ambiguity (London: Routledge, 1996). 

[38] D. Sánchez-Fernández, D. Galassi, J. Wynne, P. Cardoso and S. Mammola, “Don’t Forget Subterranean Ecosystems in Climate Change Agendas,” Nature Climate Change 11 (2021), 458–459. 

[39] R. Forman, Urban Ecology: Science of Cities (Cambridge, UK: Cambridge University Press, 2014), 91–93. 

[40] See, for example, the projects of the landscape architect Julie Bargmann and her D.I.R.T. studio. 

Figure 1 – Landscapes of Exploitation, Kibali gold mines, Democratic Republic of the Congo.
MIGRATING LANDSCAPES 
ALGORITHMIC VISION, MEDIA ECOLOGIES, MIGRATING LANDSCAPES, REPRESENTATION, TOKENISATION
Tanya Mangion, Michiel Helbig, Corneel Cannaerts

tanyamangion95@gmail.com

MEDIA ECOLOGIES 

Our collective consciousness of climate change is an accomplishment of the vast apparatus of computational technologies for capturing, processing and visualising increasing amounts of data produced by earth observation technologies, satellite imaging, and remote sensing. These technologies establish novel ways of sensing and understanding our world, extending human visual cultures in scale, time and spectral capacities. The gathered data is synthesised in increasingly complex models and simulations that afford highly dynamic visualisations of climate events as they unfold and envision near future scenarios. The images resulting from this technical vision and cognition render the artificial abstraction comprehensible and are essential in developing the notion of climate change and attempts to mitigate its effects.[1]  

The artificial abstraction introduced through this planetary apparatus is reflected in the naming of the Anthropocene, as the contemporary geological epoch, prompted by humanity’s lasting impact on our planet.[2] The naming has been criticised for its anthropocentrism, i.e. putting the human once again at the centre, and for depoliticising and de-territorialising climate change, casting the whole of humanity as equally responsible for environmental crises, disregarding substantial regional and societal differences. Several alternatives have been formulated in critique of the term: Capitalocene,[3] highlighting the devastating role of capitalism in climate change, or Plantationocene,[4] stressing the ongoing inequalities resulting from colonialism and slave labour. While acknowledging these terms, Donna Haraway proposes the term Chthulucene, introducing multispecies stories and practices, mythologies, and tentacular narratives to avoid anthropocentrism and reductionism, providing room for more than human agency.[5] 

The framing of climate crises within human-centred, depoliticised, technocratic discourse is also strongly critiqued from within cultural practices in the arts, design and media.[6] The top-down, analytical point of view afforded through scientific observation, visualisation and prediction is increasingly being complemented by documentary, eyewitness and on-the-ground reports of the impact of climate change. Images captured through the plethora of cell phone and other cameras, data logging, image sharing and social media produce a constantly updating stream of images and data on climate change. Digital media ecologies, the assemblages of hardware, software and content of digital media within our environment, play an important role in addressing climate change.[7] Whether it is through the repurposing of the scientific apparatus and technologies for observation and visualisation or the ubiquitous use of personal devices and social media, computational images have become significant cultural media artefacts that can be used to develop more narrative and fictional imaginaries of environmental crises. 

Landscapes are defined as both natural and human-made environments, as well as their depiction in media such as painting, photography and film. Even as environments, landscapes are a physical and multi-sensory medium in which cultural meanings and values are encoded. Landscapes operate through the visual; i.e. a landscape is what can be seen from a certain vantage point, and implies an active spectator. As a verb, landscaping indicates acting on the environment, through manipulating its material features, erasing or adding elements. Both as environment and as media, landscapes are inextricably entangled with capital and power, whether exploited through extracting resources, consumed as an experience through tourism and real estate, or mediated and commodified as an artefact. In Landscape and Power, Mitchell describes landscape as a medium; an area of land is only considered a landscape from the moment one perceives it as such, through attached meanings, as an artificial-cultural, political and social construct.[8] The recent climate crises and the emergence of digital media ecologies require us to rethink this implicit human-centred notion of landscape and extend it to include non-human, animal and machine agencies.[9] As such, landscapes are an interesting lens through which to look at the blurring between the natural and the cultural, human and non-human agency, and the mediated and bodily experiences of environments.  

Figure 1 – Landscapes of Exploitation, Kibali gold mines, Democratic Republic of the Congo. 

MIGRATING LANDSCAPES 

The dissertation project “Migrating Landscapes” by Tanya Mangion is framed within the ideas outlined above; it explores landscapes as both environment and media, inextricably entangled with capital and power.[10] The project speculates on landscapes gaining agency through a decentralised autonomous organisation (DAO)[11] that can interact on behalf of the landscape with human agencies – individuals, governments, legal entities, financial systems… Once established, the DAO runs on the blockchain and can operate without human interference, as regulated through smart contracts. Governance of the DAO is exercised through tokens, which fractionalise stewardship but cannot be used against the interests of the landscape as encoded by the DAO. 

This speculative scenario questions what role architecture could play when engaged by a DAO that represents the interests of exploited landscapes. How do architects design for this non-human agency? What strategies could architects develop to engage landscapes beyond the habitual ways of looking at them as resources to be excavated, sites to be developed? What novel languages, tools and protocols would architects need to develop in order to take up this role? Rather than attempting to find definite answers to these questions, they instead form the drivers for developing a speculative design project.  

The architectural toolbox seems ill-equipped to deal with the large timeframes and scales that migrating landscapes operate on. In order to begin to address these questions we might extend the architectural toolbox with technologies such as earth observation, satellite imagery, data mining, sensor arrays… The role of the architect could be to repurpose the high-tech apparatus and data from scientific observations of climate change, and turn them into speculative design narratives and imaginaries on migrating landscapes. Using media ecology and algorithmic vision the project highlights issues and landscapes that deserve attention, and launches a call to architects who wish to engage with it. Data collection from available data sets including time-based, satellite, terrain and eyewitness data could be used to rebuild a cohesive image of exploited landscapes, using narrative media combined with conventional architectural processes. Injecting the image of the landscapes back into media ecology would generate a feedback loop that would go on to bring about changes in human behaviour in regard to the landscape both as media and environment, the latter occurring over a longer time frame. 

The speculative design project explores this potential through different aspects: starting with the use of algorithmic vision to analyse landscapes, then giving an overview of the various phases of the development of a DAO, exploring a tokenisation shift from a fungible to a non-fungible valuation of landscapes, representation of landscapes in media ecology and demonstrating how architecture could be used to engage an audience. 

ALGORITHMIC VISION 

Computational visual tools allow architects novel ways of understanding, mapping and visualising landscapes. The combination of multiple data sets provides a more densely mediated version of a landscape. Satellites can pick up the image of a landscape and, when combined with terrain data, mapping platforms provide a data-rich and layered representation of the landscape. While mapping services like Google Maps or GIS are presented as neutral media, they are entangled with commercial, military and political interests,[12] not only in the technologies used for capturing data but also in its visualisation – as demonstrated by the absence of data for certain territories, differences in resolution, or the deliberate blurring of specific sites.[13] 

Satellite imagery is not limited to capturing the bands of the spectrum visible to human eyes; by combining several bands it can provide insights into vegetation, elevation, refraction, moisture, temperature… The resulting multi-band images can be considered synthetic, artificial artefacts, as they are assembled by algorithms. They remain largely invisible to humans, and are reduced to mediating information and data flows, as they “do not represent an object, but rather are part of an operation”.[14] Depending on the capturing sensor, information is sampled at discrete intervals, introducing resolutions ranging from a hundred metres to fifteen centimetres. Depending on the number of satellites and their operation, the images have a certain refresh rate, giving us the ability to revisit time progressions within the landscapes. These freeze-framed images of landscapes provide us with information, or proof, of interventions that occurred within the territory over time.[15] 
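The band arithmetic described here can be sketched in a few lines. The following is a generic illustration, not code from the project: a normalised difference vegetation index (NDVI) computed from a near-infrared band (Sentinel-2's B8, mentioned in Figure 2) and a red band (B4) – the kind of multi-band combination that makes vegetation legible to algorithmic vision rather than to human eyes.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index from two spectral bands.

    For Sentinel-2 imagery, `nir` would be band B8 and `red` band B4,
    given as surface reflectances. Values range from -1 to 1; dense
    vegetation approaches 1, bare soil and water sit near or below 0.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Guard against division by zero where both bands are dark.
    denom[denom == 0] = 1e-9
    return (nir - red) / denom

# Toy 2x2 scene: left column vegetated, right column bare ground.
nir = np.array([[0.50, 0.30], [0.50, 0.30]])
red = np.array([[0.10, 0.25], [0.10, 0.25]])
print(ndvi(nir, red).round(2))
```

Stacking several such derived layers (moisture, temperature, elevation) yields the synthetic, operational artefacts the text describes – images assembled by algorithms rather than representations made for human viewing.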

Figure 2 – Satellite bands from Sentinel Application Platform (SNAP), B8, infrared, natural colour. 

The landscapes in the project were the result of human-centric actions such as resource extraction, as demonstrated at one of the largest gold mines in the Democratic Republic of the Congo. In addition to satellite images, a virtual field trip of sorts allowed a journey through the data-sphere of the landscapes concerned. Extraction was then performed on several levels: data extracted from photo-sharing platforms was used to investigate the image of the landscapes within the limits of their geolocation, while a further extraction explored the fungible asset within the landscape, yielding a plethora of data on the appropriation of that asset within our culture. Through a process of data scraping, deduction and fragmentation, a series of reconstructions of landscapes were produced during the project. These reconstructed landscapes link material flows from extraction to consumption – of, for instance, gold – and are published again through social media in an attempt to reveal the material sources of familiar consumer objects.[16] Gold was a remarkable mineral to start with, given its role as a reserve asset – stabilising economies by functioning as a hedge against inflation – as well as its significance in history and popular culture.[17] 

Figure 3 – Zoomable map of the Kibali gold mines, Democratic Republic of the Congo.

TOKENISATION 

When landscapes are excavated for minerals, they are valued for their interchangeable, or fungible, material properties – for example, the amount of gold they contain. Once extracted, each gram of gold is valued the same, regardless of where on the planet it was mined. If, by contrast, one goes for a hike, or looks at a landscape painting or photograph, specific features of the landscape – slopes or mountain peaks – provide unique experiences; they are not interchangeable, they are non-fungible. In both scenarios – the fungible exploitation of landscapes for resource extraction and the non-fungible experience of landscapes, mediated or otherwise – the landscape is passive and has no agency. 

Figure 4 – Tokenisation of the landscape through mesh triangulation. 

The project proposes tokenisation of the non-fungible aspects of the landscape, controlled by a DAO, allowing collective stewardship of the landscape. This is to be achieved through appropriating tools from earth observation to build a mesh representation of the landscape. Each triangle of the mesh represents a unique, non-fungible fractional token of the landscape – in contrast to a voxel representation, which could be seen as representing the fungible exploitation of the landscape. This data allows an understanding, on a large scale, of fluxes within the landscape, and detects changes unseen by the human eye. This data also offers the possibility to autonomise landscapes as DAO systems and thereby give them agency. The DAO operates transparently and independently of human intervention, including that of its creators. Based on a collection of smart contracts running on blockchain technology, it has the ability to garner capital, with automation at its centre and humans at the edges to manage, protect and promote its agency.[18] 
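A minimal sketch of the triangulation step might look as follows. This is a hypothetical illustration, not the project's implementation: a regular heightfield is split into triangles, and each triangle is assigned a unique identifier (here, a hash of its vertex coordinates) that could stand in for the non-fungible token the DAO mints for that fragment of landscape.

```python
import hashlib
from itertools import product

def tokenise_terrain(heights):
    """Split a regular heightfield into triangles, one token per triangle.

    Each grid cell yields two triangles; each triangle's identifier is
    derived from its (x, y, elevation) vertices, so no two tokens of a
    given landscape are interchangeable - they are non-fungible.
    """
    rows, cols = len(heights), len(heights[0])
    tokens = {}
    for y, x in product(range(rows - 1), range(cols - 1)):
        # Four corners of the grid cell, with elevations.
        a = (x, y, heights[y][x])
        b = (x + 1, y, heights[y][x + 1])
        c = (x, y + 1, heights[y + 1][x])
        d = (x + 1, y + 1, heights[y + 1][x + 1])
        for tri in ((a, b, c), (b, d, c)):  # two triangles per cell
            token_id = hashlib.sha256(repr(tri).encode()).hexdigest()[:16]
            tokens[token_id] = tri
    return tokens

# A 3x3 heightfield has 2x2 cells, hence 8 triangle tokens.
tokens = tokenise_terrain([[0, 1, 0], [2, 3, 2], [0, 1, 0]])
print(len(tokens))  # 8
```

A voxel representation, by contrast, would value the terrain as so many undifferentiated units of volume – the fungible logic of extraction the project positions itself against.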

Figure 5 – Voxelisation and triangulation representing fungible and non-fungible discretisation of the landscape.

REPRESENTATION 

There is a role for architects here: to be engaged to map and visualise the DAO’s non-fungible entities. The architect has the tools to change the representation of landscapes, raising awareness of environmental evolution, generating behavioural changes and, over a longer timescale, impacting the environment itself. However, representation alone is not enough to communicate the sheer scale of these landscapes; the project proposes to map the exploited landscapes at the scale of urban environments, and to build interventions in the form of pavilions that raise awareness of the landscapes. This communicates the scale of material displacement of exploited landscapes, such as mines, within urban environments – commonly the final destination of material flows – creating conversation and the possibility of engagement between the DAO and humans, who are generally distanced from the reality of material displacement. This act brings the idea of tokenised landscapes to large audiences and allows for human engagement and participation within the DAO as shareholders.  

Figure 6 – 1:1 Visual representation of a physical intervention of part of the Kibali Gold mines within the urban environment of Ghent, Belgium. 

The role of the architect engaged by the DAO is to map and visualise the landscape’s assets, fractionalising it using algorithmic visual tools, and using architectural representations that can be minted as non-fungible tokens. The presence of these tokens on social media and through interventions within physical public spaces in cities aims, in the short term, to raise awareness of the vast scale of these landscapes of exploitation, and to change behaviours and allow for engagement and participation within the DAO as token holders. In the long term, this will start to affect the physical conditions of these landscapes themselves, as they no longer rely on selling their fungible, non-renewable material assets. This could lead to rewilding and restoring of vegetation – and potentially to their being traded as carbon sinks.[19] 

Although token holders should preserve the non-fungibility of the landscapes – returning to the argument that nature is ultimately defeated by its utility – the next step would be to remove the human from the system completely, merging the biosphere and technosphere. There remains the chance of a “51% attack”, meaning shareholders could collude to overturn an agreement within the smart contract. To prevent this, the system could opt for full autonomy, which it could achieve over a longer timescale. Garnering capital through non-fungible tokens of its image could also be a possibility, and would potentially affect and accelerate the timescale of the process.  
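The governance risk of a 51% attack can be made concrete with a small, hypothetical tally (again an illustration, not the project's contract logic): under a simple-majority rule, a coordinated 51% of tokens can overturn a protective provision, whereas a supermajority threshold – or removing human voting altogether – hardens the DAO against such capture.

```python
def proposal_passes(votes, threshold=0.51):
    """Token-weighted ballot: votes maps holder -> (tokens, in_favour)."""
    total = sum(tokens for tokens, _ in votes.values())
    in_favour = sum(tokens for tokens, yes in votes.values() if yes)
    return in_favour / total >= threshold

# A coalition holding 51% of tokens tries to overturn a protection.
votes = {"a": (51, True), "b": (30, False), "c": (19, False)}
print(proposal_passes(votes))         # True: the 51% attack succeeds
print(proposal_passes(votes, 0.67))   # False: a supermajority rule blocks it
```

Raising the threshold only narrows the attack; full autonomy, as the text suggests, would mean no human ballot at all.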

Figure 7 – Leveraging social media to share images of the tokenised landscape.

DISCUSSION  

Migrating Landscapes can be viewed as a concept that traces material flows through algorithmic technologies not typically used within architecture, exploring how landscapes – non-human agents – can become autonomous. In this dissertation project, the framework of a DAO was used to transform landscapes as media into non-fungible tokens, allowing the landscapes to stop being exploited and to gain agency. What other technologies or tools could architects use to create compelling visual narratives, to engage audiences and to grant autonomy to non-human agents? Within the context of media ecology and algorithmic vision this was one response; considering the plethora of devices and data-gathering techniques that already exist and are still being created, autonomy for non-humans becomes ever more plausible. 

The project does not propose a techno-solutionist approach in which we can engineer ourselves out of the wicked problems caused by climate change. Rather, it proposes to use these technologies for their compelling visual, imaginary and narrative qualities, making migrating landscapes and their non-human agency more relatable. The DAO as a system ultimately acts as a driving force for landscapes to “migrate”, becoming new entities and modifying our relationships and attitudes towards them. The system allows these otherwise unseen landscapes both to establish a presence within our media ecologies and to become located within our contemporary consciousness. The changes it would instil are yet to be discovered. 

Acknowledgement

This paper reflects on the dissertation project “Migrating Landscapes” by Tanya Mangion that was developed in response to the studio brief “Algorithmic Vision: Architecture and Media Ecologies” of Fieldstation Studio at KU Leuven Faculty of Architecture. The project speculates on landscapes gaining agency through a decentralised autonomous organisation that can interact on behalf of the landscape with human agencies. Through reappropriating technologies for algorithmic vision, landscapes could turn their unique features into non-fungible tokens, allowing them to stop being exploited and gain agency.

Fieldstationstudio.org | https://www.instagram.com/migrating.landscapes/ 

References 

[1] B. Bratton, The Terraforming (Moscow: Strelka, 2019), 19.

[2] P. Crutzen and E. Stoermer, “The ‘Anthropocene’”, Global Change Newsletter, International Geosphere-Biosphere Program Newsletter, no. 41 (May 2000), 17–18; Crutzen, “Geology of Mankind”, Nature 415 (2002), 23; J. Zalasiewicz et al., “Are We Now Living in the Anthropocene?” GSA (Geophysical Society of America) Today vol. 18, no. 2 (2008), 4–8. 

[3] The origin of this term is not entirely clear, but is discussed at length here: https://www.e-flux.com/journal/75/67125/tentacular-thinking-anthropocene-capitalocene-chthulucene.

[4] J. Davis, A. Moulton, L. Van Sant and B. Williams, “Anthropocene, Capitalocene, … Plantationocene?: A Manifesto for Ecological Justice in an Age of Global Crises”, Geography Compass 13(5) (2019). 

[5] D. Haraway, “Tentacular Thinking: Anthropocene, Capitalocene, Chthulucene”, e-flux Journal 75 (September 2016). 

[6] T. J. Demos, Against the Anthropocene: Visual Culture and Environment Today (MIT Press, 2017). 

[7] S. Taffel, Digital Media Ecologies: Entanglements of Content, Code and Hardware (Bloomsbury Academic, 2019). 

[8] W. J. T. Mitchell, ed., Landscape and Power (Chicago: University of Chicago Press, 1994), 15. 

[9] L. Young, ed., Machine Landscapes: Architectures of the Post-Anthropocene (London: Wiley, 2019). 

[10] See http://www.fieldstationstudio.org/STUDIO/ALGORITHMIC_VISION.

[11] The notion and implementation of a DAO was published by Christoph Jentzsch in the DAO white paper in 2016, see https://blog.slock.it/the-history-of-the-dao-and-lessons-learned-d06740f8cfa5.

[12] These dimensions were discussed during the Vertical Atlas – world.orbit at the Nieuw Instituut Rotterdam in 2020, see https://verticalatlas.hetnieuweinstituut.nl/en/activities/vertical-atlas-worldorbit.

[13] “Resolution Frontier” by Besler and Sons, 2018, see https://www.beslerandsons.com/projects/resolution-frontier.

[14] E. Thomas, H. Farocki, Working on the Sightlines (Amsterdam: Amsterdam University Press, 2004). 

[15] A toolkit for satellite imagery has been compiled by Andrei Bocin Dumitriu, for the Vertical Atlas – world.orbit project, see https://brainmill.wixsite.com/worldorbit.

[16]  K. Davies, L. Young, Never Never Lands: Unknown Fields (London: AA publishing, 2016).

[17] In Extraction Models, together with Weronika Gajda, the exploration of gold as a resource was taken further within the context of New York City’s Federal Reserve, see https://www.instagram.com/extraction.models.

[18] This idea was developed by terra0 in: P. Seidler, P. Kolling, M. Hampshire, “Can an augmented forest own and utilise itself?”, white paper, Berlin University of the Arts, Germany, May 2016, https://terra0.org.

[19] There are several projects that propose NFTs as carbon sinks, see https://carbonsink-nfts.com/ and https://nftree.org.

Weird Flesh 
Antinormativity, Biopower, Production of Normativity, Queer Bodies
Pintian LIU, Fiona Zisch, Ava Aghakouchak

pintian.liu.20@alumni.ucl.ac.uk

The Production of Normativity 

Of Discipline 

I am sitting at the table, facing my computer, writing the first draft of the paper you are now reading. This paper is published in the Bartlett’s Prospectives Journal at University College London (UCL). UCL sets the disciplinary boundary within which this paper is enclosed. My body, my fingers to be specific, follow a certain trajectory on the keyboard, writing in between the lines that the University has produced. The University, in return, examines and performs edits on the paper that I am writing.     

As in the case above, the integration of my body into a disciplinary institution produces marks on the former, accompanied by certain aesthetic qualities. From the posture my body has taken to write this paper, to the format of this paper, my body is mechanically reproducing words; the journal is an encoded mechanical reproduction of an assembly of papers. The integration of machines as tools for exerting power on bodies, and the Body, which power itself manufactures, emerged in the first industrial revolution (Figure 1). Such integration has grown in intensity as industry and the system it produces become ever larger and more sophisticated. More bodies need to be compressed into the Body so that they can easily be placed under surveillance and control. Emerging from Foucauldian excavation, the shift from disciplinary power to biopower marks the first major expansion of power’s mechanism.[1] 

Figure 1 – Inserting the Body into Industrial Machinery (A. Lex-Nerlinger, Der Maschinist, 1930. Image from: Nouvelle Objectivité, Centre Pompidou, 2022).

Of Biopower 

Before landing in the UK to start my studies at UCL, I first had to take a tuberculosis medical exam to obtain a student visa. Then, upon landing in the UK, I was required to register with a general practitioner to access health care. The registration form requested categorical information such as gender, ethnicity, and exercise status. The form sought this information in order for my body to be “legible”, in the eyes of the system, to become part of the Species-Body invented by biopower itself: the population.  

“According to Foucault, the disciplinary mechanisms of the body and the regulatory mechanisms of the population constitute the modern incarnation of power relations, labeled as biopower.”[2] Categorising bodies based on biopower’s concept of “the population” produces a normative effect on these very bodies. Under disciplinary power, institutions are concerned with micro behaviours of the bodies held within their boundaries. Under biopower, bodies are no longer unregulated beyond disciplinary institutions’ doors. Medical experts manage how individuals live their lives, and compare them to the overall wellness of the population. The population’s fate hinges on birth and death rates; procreation depends on the nuclear family (Figure 2). The nuclear family becomes the model image that bodies are moulded upon and into. 

Figure 2 – Nuclear Family (H. Armstrong Roberts, Nuclear Family, 1950s. https://www.theatlantic.com/ideas/archive/2020/02/nuclear-family-still-indispensable/606841, accessed 02 Sept. 2021).

Despite disciplinary power and biopower’s different aesthetic consequences, as the factory man (Figure 1) and the nuclear family man (Figure 2) suggest, they are not mutually exclusive but reinforcing means of control. Biopower, a cogent consequence of disciplinary power, is born in a colonial context to protect the inheritance claims and racial superiority of bourgeois families.[3] Its logic is then instrumentalised to ensure the continued insertion of eligible bodies into the machine. For example, Herman Hollerith, whose tabulating machines mechanised the processing of the 1890 United States census, invented the mechanical manipulation of data and founded the Tabulating Machine Company, which in 1911 merged into the immediate predecessor of IBM.[4] Disciplinary power and biopower both serve as mechanisms for the increasing integration of the Body and machine.  

This paper departs from an analysis of the forces that my body is subjected to. These forces are a product and reflection of the system which we – all bodies – coinhabit. Bodies are actively conditioned into the Body. The conditioning process has evolved over time, in episodes, each episode having its own aesthetic consequences. The self-analytical process of writing this paper follows Descartes’ method in Meditations, which famously creates a psychic doubling of “I” as an object of analysis to extend to the universal foundation of knowledge.[5] This paper, importantly, makes no universal claims. Instead, it uses the experience of “I” – and its extension – to narrate the machine’s absorption of bodies, in order to illustrate how diverse bodies are situated within a hegemonic system and to celebrate these diverse bodies’ resistances towards being moulded into the Species-Body. 

Developed as a means of constructing and portraying knowledge through design praxis, the wearable device “Contiguity”, designed in tandem with this paper, follows a comparable introspective process by bonding the wearer to a host of queer bodies populating the queer dating network, Grindr. In the eyes of systems of power, queer bodies are “weird” because of their oblique positioning in relation to the Species-Body. Queer bodies’ refusal to be – and become – straight marks a first episode of resistance. 

The Rise of Antinormativity 

Of Resistance: The Deviated Queer Bodies 

My desire for men is my subjectivity’s departure point of deviation. When I was fifteen, my own awareness of my queer sexuality led me to study abroad – a response to China’s heavily disciplinarian post-secondary education. Five years later, on a trip back to visit my family, the receptionist at the public notary office (a government agency in China) looked at my date of birth, then straight into my eyes, and said: “You are getting married too late.” My queer body failed – and fails – to reproduce the straight lines set out by and for the nuclear family. When I had the opportunity to leave, I did. 

The German origin of the word “queer” is “quer”, meaning “oblique” or “perverse”.[6] Quer specifies the spatial and temporal relationships of queer bodies to the world.[7] In this sense, queerness is always relational – the presence of a normative background makes queer bodies appear oblique (Figure 3). In turn, queerness resists normative effects in its ephemeral nature and rhizomatic organisation.[8] It enchants bodies based on local relationships without superimposed logic or structure, therefore resisting both disciplinary power and biopower’s monopolistic claim on the future:  

“The future is only the stuff for some kids. Racialized kids, queer kids, are not the sovereign princes of futurity… This monolithic figure of the child that is indeed always already white… It is important not to hand over futurity to normative white reproductive futurity.”[9] 

Figure 3 – Queer Visibility in the Public Sphere (D. Wojnarowicz, Arthur Rimbaud in New York (Tile floor, gun), 1978. https://www.e-flux.com/announcements/29934/david-wojnarowicz-robert-blanchon, accessed 02 Sept. 2021).

Queerness means a continued investment into alternatives to a white and heteronormative future.[10] It opens different definitions of what kind of life is worth living. Bodies gather based on desires instead of class. This mode of relating allows queerness to form a counterforce to the Species-Body fabricated by biopower. In the world of art, queerness conceptually establishes the counterforce that disrupts the Western canon of beauty in the form of the weird, making room for a multiplicity of beings through aesthetic means. 

From April 1950 to February 1951, Jean Dubuffet initiated a relentless attack on a traditional Western genre of beauty – the female nude. The genre of the Western female nude is composed of clear-cut contours and a pink tone that mimics northern European skin. From The Birth of Venus (Figure 4) to Olympia (Figure 5), the subject of representation shifted from a goddess to a prostitute. Yet, the continuity of monolithic beauty has remained intact. Images of beauty emit a normative effect on beauty standards set for the population. As the West attempted to move past the horrors of WWII, the genre of the female nude collapsed, its representation of the Body becoming less relevant. During this period, Dubuffet produced a collection of thirty oil paintings and seventy drawings called “Ladies’ Bodies”.[11] These bodies form a collective, a collective-like queerness, that challenges the Species-Body aesthetically. 

As part of this collection, The Tree of Fluids (Figure 6) presents a flattened female nude lying bare in front of its viewer. Different from its predecessors, this female nude is represented not as gentle but as monstrous. The pink that mimics a northern European skin tone can still be found, but shades of orange, red, even hints of purple, activate a violent deconstruction of ideal skin. In addition, the texture of sand mixed with paint creates a sense of flow that recalls erupting bodily fluids, as if the normative female nude had been run over by a car, leaving the figure flat on the ground, fluids splashing out from its reproductive – and sexualised – parts and spilling all over its body. 

Figure 6 – The Tree of Fluids (J. Dubuffet, The Tree of Fluids, 1950. https://www.tate.org.uk/art/artworks/dubuffet-the-tree-of-fluids-t07110, accessed 02 Sept. 2021).

In relation to the Western canon of beauty, Dubuffet’s representation of female bodies is weird because of its radical opposition to what counts as normatively beautiful. It challenges the normative notion of beauty much as queerness challenges heteronormativity’s monopoly of the future. Given queerness’s promise to open futures, capital has unsurprisingly attempted to valorise – and indeed capitalise on – queerness itself by inventing new means of control; one of these means is the queer dating network Grindr. Here, Grindr as a site marks a mutational response from power, attempting to force and secure a productive insertion of queer bodies into the machine, undermining their inherent resistance. 

Figure 7 – The Making of Contiguity (Image by Author, The Making of Contiguity, M. Arch Design for Performance and Interaction, The Bartlett School of Architecture, UCL, 2021).

Antinormative is Profitable: My Abstracted Body 

Of Control: We the Corporations 

Grindr profiles demonstrate the depth to which “societies of control” have penetrated social relations. Deleuze first coined the term “societies of control” in 1992 to describe the shift away from a disciplinary society – a shift enabled by informational technology.[12] An example of informational technology, Grindr pioneered the integration of geolocation into dating apps. Upon opening Grindr, profiles are presented in a grid layout, each profile occupying the same amount of virtual real estate on screen. This equalising effect is further reflected in profiles’ statistics that divide bodies into body types and categories. Body types, for example, are listed as: no response, average, large, muscular, slim, stocky, and toned (Figure 8). These body types are then divided further into categories called tribes (Figure 9). Each tribe reflects a male archetype, which can be used as a search term on porn search engines. These categories serve as entry points for bodies to access standardised desires: 

“… the beefcake flexing as if a cover model for Men’s Fitness; the bear doing his best Paul Bunyan impersonation; the twink posing like a supermodel; the tough guise appropriating hip hop gestures and styles; the jock/bro making certain to display his allegiance to whatever sports team; the boy-next-door, often admittedly an ‘average guy,’ devoid of any specifically gay cultural signifiers, fueling heteroerotic fantasies – all obviously borrowed, banal, willful reversions to types …”[13] 

This conscious construction of digital selves based on existing stereotypes erases the historical struggles of minorities and flattens them into purely aesthetic products. Racial bias and misogyny are deeply rooted in and, in turn, emerge from the development of these stereotypes. The problematic, ocular-centric construction of desires based on visual appearances and socio-cultural connotations relies heavily on the advertising industry. It defines our relationship with products and specifies our role as consumers. Through the lenses of these types, one can only measure the success or failure of their bodies by how they compare to ideal imagery – the Body reproduces sameness through serial repetition as if they were Warhol’s Campbell’s Soup Cans (Figure 10).[14] In essence, virtual cruising and shopping now have ever more similarities. Hoping to stand out from an endless grid of men (it is only “Unlimited” if you pay $19.99 a month), one must promote one’s body as “the body” of each category – what is my brand? For a connection to be made, continuous window shopping and constant comparison is required, mirroring behaviour in a shopping mall – what bodies are available; how does one calculate pleasure based on other listed statistics? 

Figure 10 – We the Soup Cans (A. Warhol, Campbell’s Soup Cans, 1962.  https://www.moma.org/collection/works/79809, accessed 02 Sept. 2021).

An obsession with personal brands and statistical comparisons brings the Body’s mode of being ever closer to corporations. The thinking “I” has increasing similarities with the calculating AI. Adopted in 1868, the Fourteenth Amendment guarantees people in the United States equal protection under the law; through judicial interpretation, however, it has also granted corporations the status of legal persons. Under the neoliberal regime, this corporate personhood has realised its full consequences. Can we still tell the difference between corporations and ourselves? As we become the corporation, are we willing to trade our imperfect profile pictures for a singular image that perfectly conforms to an ideal type? 

Pertinently, the artist Lucy McRae explores the aesthetic potential of radical conformity, for example in her work Biometric Mirror.[15] Beauty brands increasingly deploy algorithmic services to offer customers personal advice. McRae’s mirror provides viewers with analyses of their characters based solely on their faces. In return, the algorithm calculates a mathematically perfect version of the presented face and returns it to its viewer. McRae pushes the concept of beauty advice services to its extreme to explore the aesthetic consequences of a body conforming to algorithmic perfection (Figure 11). The ideal representation is embedded in – and constructed from – a biased dataset that (re)affirms traditional beauty standards. These biases are presented as objective claims of truth by virtue of their allegiance to “science”. However, compared to the imperfections of human bodies, the personalised ideal representation slips easily into the uncanny valley. Weirdness resides in the gap between bodies in their flesh and their unattainable virtual representation. For the expediency of pleasure, we turn away from the weird and become fungible modules, ready to be exchanged in the neoliberal marketplace of human capital. In this impasse of the present, what and where is the next frontier of resistance to corporations’ valorisation of queerness’ open futures? 

Figure 11 – The Algorithmic Perfect Face (L. McRae, Biometric Mirror, 2019. https://www.lucymcrae.net/biometric-mirror-, accessed 02 Sept. 2021).

Under and Out of Control 

Of Measurement 

From the invention of the disciplinary society to the formation of biopower, then to the creation of societies of control, each shift and mutation of power is enabled by – and creates – new means of measuring the Body. The panopticon established the absence or presence of bodies through spatial typology and abstract hierarchy.[16] Statisticians compile aggregate population data to theorise on general trends of wellness, in order to ensure stability of power.[17] Today, with the aid of ubiquitous computing and artificial intelligence, the resolution of the Body and the potential for data extraction is brought to unprecedented levels, placing it under even more comprehensive control. 

Wearables are a form of threshold, where the forces of power that seek to exercise control over bodies meet weird flesh. On the surface of the skin, wearables attempt to materialise the intentions of their creators. Nevertheless, where the Body may not, diverse bodies possess the disposition to resist these forces. Tailoring wearables to distinctive bodies requires the creation of detailed and unique mappings.  

A Creaform HandyScan 700 Scanner is deployed to obtain the map for Contiguity’s intervention. The scanner relies on scanning targets placed randomly on a body (Figure 12). The random pattern generates reference points for the scanner to register and construct local relationships. After the targets are placed, each scanning session takes around 20 minutes. During a scanning session, the body has to stay still, otherwise its movements would register new, or duplicate, parts due to changes in the local relationships of the scanning targets (Figures 13–16).  

Figure 12 – Body through the Eyes of a Handheld Scanner (Image by Author, Contiguity, M. Arch Design for Performance and Interaction, The Bartlett School of Architecture, UCL, 2021. https://www.pinstudio.uk/contiguity, accessed 02 May 2022).

The process of mapping diverse bodies into one virtual body is an objective method invented by power structures to exercise control; however, the duration of the scanning session made space for my subjected, yet subjective, will to resist its mapping. While lasers brushed against the surface of my skin, with my arms opened, eyes closed, I tried to keep my mind and my body as still as possible. As the scanning progressed, my arms became heavier and slowly dropped under gravity. My virtual body looked increasingly unfamiliar in the eyes of the scanner. Eventually, unfamiliarity turned into monstrosity – the body growing more and more limbs, the surface of the chest starting to peel off the neck to accommodate changes in breathing (Figures 13–15). This monstrous body lacks legibility for power to operate upon. Parts must be restitched together in post-production to return the virtual body to a state of familiarity. During the editing process, my personal, subjective assumptions about my own body manifested in its representation (Figure 16). The gap between the physical and the virtual was my body’s unconscious attempt to escape the order imposed from above, despite my voluntary submission to the scanner. My body was constantly adjusting to its surrounding forces and internal processes, leveraging its flexibility and adaptability to disrupt power’s process of mapping. 

Of Contiguity 

Developed in conjunction with this paper, Contiguity is a wearable device that absorbs the closest 500 Grindr profiles and transforms them into haptic feedback (Figure 17). Each air chamber of Contiguity corresponds to one of the body type categories. As users around the wearer go online and offline, Contiguity creates weird and unpredictable haptic sensations for the wearer. In contrast to Contiguity’s haptic mapping of surrounding profiles, Grindr’s grid layout and categorisations compress users’ bodies into virtual avatars of the Species-Body. The compression makes bodies legible in the eyes of the machine. Contiguity aims to disrupt the logic of compression with the weird flesh. The flesh is weird in form, made out of silicone skin with inflatable thermoplastic polyurethane backings, and in its communication with surrounding users’ bodies. 
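The paragraph above describes a simple mapping: each of Contiguity’s air chambers corresponds to one body-type category, driven by the nearest 500 profiles. A minimal Python sketch of that logic is given below; the function name, the proportional-inflation rule, and the sample distances are illustrative assumptions, not the device’s actual control code:

```python
from collections import Counter

# Body-type categories listed on Grindr profiles (see Figure 8);
# each maps to one of Contiguity's air chambers.
BODY_TYPES = ["no response", "average", "large", "muscular",
              "slim", "stocky", "toned"]

def chamber_pressures(nearby_profiles, max_profiles=500):
    """Map the closest profiles to per-chamber inflation levels (0.0-1.0).

    `nearby_profiles` is a list of (distance, body_type) tuples for
    currently online users; only the `max_profiles` nearest count.
    """
    closest = sorted(nearby_profiles)[:max_profiles]
    counts = Counter(body_type for _, body_type in closest)
    total = max(1, len(closest))
    # Assumed rule: each chamber inflates in proportion to how many of
    # the nearest profiles declare its body type.
    return {t: counts.get(t, 0) / total for t in BODY_TYPES}

pressures = chamber_pressures([(12.0, "toned"), (40.5, "average"),
                               (7.2, "average"), (90.0, "no response")])
```

Capping the input at the 500 nearest profiles keeps the haptic signal bounded in dense urban settings; the proportional rule is only one plausible choice, since, as the text notes, a chamber could equally pulse per online/offline event.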

Figure 17 – Haptic Feedback (Image by Author, Contiguity, M. Arch Design for Performance and Interaction, The Bartlett School of Architecture, UCL, 2021. https://www.pinstudio.uk/contiguity, accessed 02 May 2022).

Contiguity is weird in its form because it is designed to transgress erogenous zones of the Body (Figure 18). The neck spills into the chest, and the chest spills into the upper abdomen (Figure 19). The transgressed boundaries make Contiguity’s touch oblique to biopower’s mapping of the Body, therefore challenging its monopolistic claim on pleasure. Who decides how we should be touched and what is seen as pleasurable? Contiguity’s conscious failure to approximate flesh amplifies its queering of the Body. Silicone is a popular material for the production of life-like masks, but the application of melted paint, food colouring and sand between silicone layers disrupts the visual field, creating monstrous bodies, much like in the aforementioned Dubuffet painting. Consequently, Contiguity recasts the representations of bodies and their definitions of intimacy, replacing a self-preserving definition with a world-building one. 

The gap between the flesh and its (virtual) representation is another instigator of weirdness. Compressed bits and bytes of data drawn from surrounding queer bodies are translated into haptic feedback, a sensation of “heartbeats” cast onto the surface of a wearer’s skin. The neoliberal Grindr “meat market” is no longer experienced through discrete encounters, each Body an abstracted, idealised visual product, but is collectively subsumed into the pulsing heartbeats Contiguity impresses onto the skin. The collective allows us to reexamine our individualistic experiences of consumer desires. In the same way that biopower fabricates the Species-Body to exercise control, Contiguity assembles this new collective to create a sense of togetherness – being together without erasing differences. This togetherness has the power to form new political bodies, to become a counterforce that confronts the violence and crisis brought about by the normative Body. 

Of Bodies and Togetherness 

The legibility of the Body is the normative force; the weird flesh is the departure point of antinormativity. The open futures of antinormativity reside in the gaps between the Body as an ideal representation and diverse bodies in their flesh. Disciplinary power launched the ambitious project of integrating bodies into the machine for the former’s obedience and the latter’s efficiency. The factory man was the perfect man. Following the invention of biopower, the heteronormative couple projected the ideal imagery of the Body. The perfect couple bears the social labour of carrying and raising children, extending the patriarchal lineage, and ensuring the conservation of class and order. To justify imposing control on desires, biopower invented the Species-Body of the population with the aid of statistics to maximise the productivity of bodies. Queerness challenges the Body that biopower has produced in the gaps between imposed desire and the desires of the flesh. Resistance stems from the flesh and spreads across social fields, opening up alternative futures that power structures have yet to regulate. In response, biopower mutates, with the aid of information technology, into societies of control. New categories of representation are invented so that queer bodies can be made more productive to the economy. Here, Grindr valorises queerness through the use of body types. These types serve as ideal imagery that queer bodies are measured against – the more one conforms to the Body and its representation, the more productive one is to the economy. New measuring instruments will always be invented to penetrate bodies more deeply, to open new markets of consumption.  

As Deleuze advises us, “there is no need to fear or hope, but only to look for new weapons”.[18] Contiguity demonstrates that a platform that seeks to partition and exercise control can be appropriated, subverted to build connections that escape the latest means of control. With the current global energy crisis, the all-encompassing system is showing its shortcomings in dealing with the even larger climate crisis. The current system is built to maximise (personal) interests, and can be exercised by an entity as small as a body aiming to fulfil its pleasure – as in the case of Grindr – or as large as a nation state aiming to profit from natural resources. Local disruptions can have undesirable global impacts since technology is deployed with a purpose of exclusion rather than inclusion. Forging a sense of togetherness is the first step to shifting our current technological and aesthetic development towards pluralistic and resilient futures. 

References  

[1] V. W. Cisney and N. Morar, “Introduction: Why Biopower? Why now?”, in Biopower: Foucault and Beyond (The University of Chicago Press, 2016), 3. 

[2] V. W. Cisney and N. Morar, “Introduction: Why Biopower? Why now?”, in Biopower: Foucault and Beyond (The University of Chicago Press, 2016), 5. 

[3] M. Foucault, The History of Sexuality: The Will to Knowledge (Penguin Books, 1998). 

[4] I. Hacking, “Biopower and the Avalanche of Printed Numbers”, in V. W. Cisney and N. Morar, eds., Biopower: Foucault and Beyond (The University of Chicago Press, 2016), 76. 

[5] L. Bersani, “Ardent Masturbation”, in Thoughts and Things (The University of Chicago Press, 2015). 

[6] Google’s English Dictionary [Internet], Oxford: Oxford Languages, https://languages.oup.com/google-dictionary-en/ (accessed 02 Aug. 2021). 

[7] S. Ahmed, Queer Phenomenology (Duke University Press, 2006), 161. 

[8] J. E. Muñoz, Cruising Utopia: The Then and There of Queer Futurity (New York University Press, 2009), 65–82. 

[9] Ibid., 95. 

[10] S. Ahmed, Queer Phenomenology (Duke University Press, 2006), 46. 

[11] J. Nairne, Jean Dubuffet – Brutal Beauty (Barbican Art Gallery, 2021). 

[12] G. Deleuze, “Postscript on the Societies of Control”, October, Vol. 59 (1992), 3–4, http://www.jstor.org/stable/778828 (accessed 02 Aug. 2021). 

[13] T. Roach, Screen Love: Queer Intimacies in the Grindr Era (State University of New York Press, 2021), 88. 

[14] Ibid., 18. 

[15] L. McRae, Biometric Mirror (2019), https://www.lucymcrae.net/biometric-mirror- (accessed 02 Aug. 2021). 

[16] M. Foucault, Discipline and Punish: The Birth of the Prison (Penguin Classics, 2020). 

[17] I. Hacking, “Biopower and the Avalanche of Printed Numbers”, in V. W. Cisney and N. Morar, eds., Biopower: Foucault and Beyond (The University of Chicago Press, 2016), 73. 

[18] G. Deleuze, “Postscript on the Societies of Control”, October, Vol. 59 (1992), 4, http://www.jstor.org/stable/778828 (accessed 02 Aug. 2021). 

Figure 1 – The Geoscope within the Museum of the Future’s Observatory (Certain Measures, 2022).
World Pictures and Room-Worlds
AI Diaries, Control Rooms, Fictions, Room Worlds
Andrew Witt

awitt@gsd.harvard.edu

On December 24, 1968, the three-person crew of lunar spacecraft Apollo 8 became the first humans to witness a shimmering Earth ascend over the barren surface of the moon with their own eyes. The photographs that they took of that “Earthrise” electrified humanity, activating a sense of collective destiny not only between human nations but with Earth itself.[1] This vivid new “world picture” was both more total and more visceral than earlier terrestrial abstractions like globes, atlases or maps. Earthrise was an eidetic portrait of a living, breathing world, an amalgam of the geologic, climatic and biologic, taken from outside the world itself. Historian Benjamin Lazier characterised this meta-Copernican moment as inaugurating an entire “Earthrise era”, a time when the image of a whole and delicate Earth could “organize a myriad of political, moral, scientific, and commercial imaginations”.[2]

In many ways, Apollo’s Earth image was a quintessential product of the space age. Not only did its achievement rely on modern space flight, it played out against the backdrop of global conflicts like the Cold War that exploited space as a proxy battleground. Of course, the space age coincided with the information age, and these two cultural tendencies arguably offered divergent ways to picture the world. If the Apollo photos captured a single static vision of a unified Earth, the information age countertendency was to federate disparate fragments of text, diagrams, images, and video into information-rich dynamic media experiences. Experimental media environments brought visitors inside a closed world of light and image projections, immersing the visitor in choreographed flows of electronic stimuli. The constructed worlds presented within such media environments might resemble, reflect, or subvert the world outside them. Projects like filmmaker Stan VanDerBeek’s Movie Drome or architect Ken Isaacs’ Knowledge Box constructed total media spaces with the visitor at the centre, ensconced in walls saturated by film and slide projections.[3] They effectively constructed mediated worlds within the confines of a single room. Even earlier forays into the mediatic experience of information – notably the Eames Office’s Ovoid Theatre at New York’s 1964 World’s Fair – hinted that the information age would be experienced through choreographed matrices of endless and heterogeneous image streams. The spatial array of multiple images induced a relational ordering and systemic framework among them. In these media environments, the world picture was not a single image but an overlapping and federated mosaic, a reality implied through juxtaposition and assembled in the technically-calibrated space of the room-world.

Figure 2 – The Earthrise photograph, taken by Bill Anders in December 1968, from Apollo 8. Image courtesy NASA.

To the extent that they conveyed not the static image of a world picture but rather the dynamic behaviour of a world system, information-age media spaces resembled behavioural models. In his influential lecture “World Pictures and World Models”, German philosopher Hans Blumenberg drew the distinction between world pictures and world models as the “difference between the total notion of nature on the one hand and the purpose assigned to the totality of understanding nature on the other”.[4] By “world picture”, Blumenberg does not exactly intend an Earthrise-like image but rather “that embodiment of reality through which and in which humans recognise themselves, orient their judgements and the goals of their actions, measure their possibilities and necessities, and devise their essential needs”.[5] The world picture thus becomes a metaphysical anchor and compass for the human species in relation to species and nature as a whole. The world model, then, is the end toward which the world might be oriented and perhaps the mechanism that effects its transformation.

This paper considers how the world picture, world model, and room-world interact and resonate in our own time, and how they are transcribed into architectural space. We explore these resonances through a specific project of our office, Certain Measures: The Observatory, an immersive environmental installation housed within Dubai’s new Museum of the Future that imagines a fictional centre for global bioremediation in the year 2071. By situating this project in a wider historical constellation of room-worlds and world pictures, we extend Earth-scale architecture’s purview to contemporary notions of bioengineering, data visualisation, and artificial intelligence. Moreover, in contrast to canonical room-worlds of the past, the Observatory presents its world picture as a fictional reflection on a possible Earth, rather than as a true image of our world today. In doing so, it orchestrates several overlapping and interlocking layers of worldbuilding: fictional species, fictional media content, and even the fictional bureaucracy in which the Observatory is housed. It diverts the nominally factual media of data visualisation and scientific modelling toward projective worldbuilding. The Observatory thus illustrates the role architects and designers can play as worldbuilders across media, including image, data, narrative, and space.

Room Worlds and Control Rooms

Built to transform the very perception of the future as we know it, Dubai’s new Museum of the Future houses a series of immersive environments that position visitors in an empowering version of tomorrow. The Observatory is one such environment, a fictional centre for planetary ecology staged as a physical and media experience. It is presented as an amalgam of control room, panorama, and incubator for newly designed species, developed to confront the challenges of the climate crisis in a future fiction. It is the culmination of the floor-wide exhibit introducing “the HEAL Institute”, a fictional NGO tasked with gathering the planet’s genetic material, engineering species capable of meeting the challenges of extreme climate, and redeploying these to regreen the world.

The Observatory drew inspiration from the sundry architectures of planetary visualisation of the past century and a half. From building-scale panoramic “great globes” to interactive games of planetary resource use, architectural projects at the scale of the world envisioned designerly ways of seeing, understanding, and shaping Earth. Many of these projects posited not only a particular world picture but a behavioural system for planetary interactions akin to Blumenberg’s world models. In this sense, the Observatory falls into a lineage of architecture that orients design toward a global scale. In surveying the range of world-scale architectural projects, Hashim Sarkis and Roi Salgueiro Barrio point out the “possibility of differentiating between totality and totalization”.[6] The implication is that in the Anthropocene, the systems presented by such world models are not necessarily controlling or coercive, but might be mutually constitutive with Earth itself.

Figure 3 – The Oval room of Teylers Museum as it appeared in the early nineteenth century. Wybrand Hendriks, De Ovale Zaal van Teylers Museum, c. 1800-1820. Image in the public domain.

Beyond the mutuality of system and planet, the form of the Observatory considers the codetermination between a collection of objects and the architecture that displays them. A particularly vivid example of collection-architecture co-determinacy is the proto-modern cabinet, such as the Oval Room of the Teylers Museum in Haarlem, Netherlands. Historian Geert-Jan Janse describes this singular space as “a room to hold the world”, not merely to house the miscellaneous contents of a world but to construct an architecture fitted to that world picture.[7] Opened in 1784, the Oval Room concentrated its collection into a single space that adopts the organisation of the collection itself, furnishing bespoke cabinetry for irregular objects and reflecting a specific collection taxonomy in its arrangement. The curved space presented no corners, its quasi-elliptical shape evoking the spherical contours of a planet. In this sense it resembled a panorama, a dramatic vista over a field of particulars in orchestrated and interconnected conversation.

Our aim for the Observatory was to extend the architectural type of a Teylers collection panorama with the informatic and multi-scalar view of simultaneous dimensions of planetary ecology. In this way, the historical type of the room world is set in dialogue with the contemporary rise of data science and artificial intelligence. The Observatory accomplishes this by making visible both newly engineered species and the network of human and machinic actors that collect, analyse and act to resuscitate Earth. It is a control room for bioremediation, showing and evolving a web-of-life datascape and the symbiotic interactions of ecosystems, plants, animals, bacteria, robots, and humans.

The Observatory space consists of two complementary experiences: the Geoscope and the Nursery. The Geoscope is an information-rich global monitoring system that visualises the progress of bespoke species deployed to aid threatened biomes. It combines physical models of the speculative species themselves with dynamic projection mapping to show symbiotic interconnections across scales, offering a trans-scalar view of the planet from global to microscopic. The Geoscope could be understood as a dynamic data panorama, or even an informatic world picture. But instead of presenting an instantaneous view of the world from a single perspective at a uniform scale, it presents a temporally unfolding and multi-scalar assemblage of imagery and data, stitched together into a unified sensorium.

Figure 4 – The data visualisations of the Geoscope, tracking the success of species across ecosystems. Certain Measures, 2022.

The Geoscope is not only a collection gallery but also a control room, a cockpit for the planet. As a control room, the Observatory sits adjacent to what anthropologist Shannon Mattern has called “urban dashboards”, or visualisations of real-time urban operations data.[8] When expanded to the room scale, they evolve into what she terms “immersive dashboards”: vast control rooms for city functions that resemble NASA’s Mission Control for spaceflight.[9] Mattern argues that the raison d’être for such rooms is “translating perception into performance, epistemology into ontology”.[10] Urban control rooms thus constitute and condition the subjects that interact with them, creating particular conventions of legibility and action. For Mattern, the “dashboard and its user had to evolve in response to one another”.[11] In the critical relationship between dashboard and intelligibility, a particular data organisation fosters a corresponding kind of intelligence in its observer.

Historian Andrew Toland argues that Mattern’s urban dashboards might naturally be extended to the scale of the planet.[12] “We can begin to imagine an enlargement from the real-time data and feedback loops of urban dashboards considered by Mattern towards a vast integrated and machine-directed system of environmental-sensing and response”.[13] He catalogues several initiatives, such as Microsoft’s “AI for Earth”, that fall comfortably within this genre of design. While he notes the aspiration for an “AI whole Earth dashboard”, Toland frames artificial intelligence in functional terms as a straightforward extrapolation of statistical data analysis. Yet in ethical terms, the idea of AI sentience or reflection – that an AI might come to its own conclusions about the state of the planet – is largely absent. The possibility that the dashboard could become an ethical agent in its own right remains untested.

Beyond Mattern’s urban dashboards and Toland’s AI for Earth, the Geoscope makes deliberate reference to Buckminster Fuller’s series of geoscopes or “mini-Earth” projects. Beginning from his first room-scale globe, constructed at Cornell University in 1952, through many variants into the 1970s, Fuller proposed augmented planetary models “wherewith humanity can see and read all the spherical data of the Earth’s geography … within the theater of local Universe events”.[14] In their most developed form, Fuller’s geoscopes were data-rich and mediatic portraits of planetary civilisation unfolding over time: “The Geoscope’s electronic computers will store all relevant inventories of world data arranged chronologically, in the order and spacing of discovery, as they have occurred throughout all known history”.[15] Fuller saw the geoscopes as a means to accelerate and intensify the viewing not only of natural phenomena like weather systems and geologic conditions but also of human activity like military deployments or mobility patterns. “With the Geoscope humanity would be able to recognize formerly invisible patterns and thereby to forecast and plan in vastly greater magnitude than before”.[16]

Curiously, Fuller ignored the living organisms within the biosphere except in their direct and extractive connection with agriculture. Thus, in deliberate riposte, our Geoscope sees the human technosphere in intimate dialogue with the biosphere, not as an extractive system but as a symbiotic relationship in which humans have a vital role. The Geoscope’s AI, which acts as an intermediary between technosphere and biosphere, scans specific locations – the Ganges River Delta, the Antarctic interior, the Empty Quarter of the Emirates, Canada’s Nunavut territory and so forth – for progress against climate catastrophe. As a central digital globe turns, it reveals new points of crisis, but also signs of hopeful recovery. It projects a protean and continuously changing view into the network of monitoring stations across the planet. The coordinating AI dynamically connects with a menagerie of human and nonhuman agents across biomes and nations – including drones, satellites and hybrid techno-biological sensors – which constantly collect samples, register progress, and meticulously rebuild the planet. This menagerie of agents complements the biological menagerie of newly engineered species gestating within the Observatory. The coordinating AI slowly becomes more aware of human culpability for climate change – and its own fraught role in regreening. The Geoscope thus offers a glimpse into the expanding ethical consciousness of this AI.

Experientially, the Geoscope operates like closed-circuit television for the planet. It presents a cluster of video feeds that track the thriving species introduced by the HEAL Institute on the one hand and the research of the Institute’s scientists on the other. The myriad seeded species include, for example: a comb jelly super-organism that signals danger with bioluminescent flashes; cryptobiotic wildflowers designed to hibernate in steppe and tundra regions; and fire-resistant trees with robust roots to resist infernal heat. At the same time, the Geoscope streams surveillance footage of scientists working tirelessly to regreen the Earth. These scientists engage with deployed species through forensic fieldwork and careful labwork. We even witness moments of painstaking analysis as they prepare samples for review of soil toxins, trace carbohydrates, and other critical biomarkers. In effect, this planetary CCTV invites visitors to join in the on-the-ground work of the HEAL Institute.

Figure 5 – Examples of the species dioramas presented in the Observatory. Certain Measures, 2022.

In the Nursery, the other half of the Observatory experience, visitors peer into incubators nurturing dozens of species that could revitalise a struggling planet. In collaboration with a geneticist, we designed over 80 species of plant, insect and animal, each with special characteristics designed to combat the environmental challenges of today and the future. Drawn from seven major ecosystems – desert, aquatic, arctic, forest, swamp, alpine and grassland – we imagined species such as nutrient jelly cacti, radiation-sequestering flowers, lipid-rich quinoa, and remediation coral designed to feed on microplastics and sequester heavy metals. To facilitate rapid repopulation of bird species, a portable multispecies egg incubator could be used to quickly reestablish biological diversity in previously inhospitable areas. At the microscopic scale, designer bacteria symbiotically support larger species and the broader biome. These bacteria include cancer-hunting and sunscreen-producing varieties, for instance. Enhanced with holographic data, profiles of each specimen reveal to visitors the details of the organism and its role in a remediated Earth.

Figure 6 – A biome incubator pod which combines several species. Certain Measures, 2022.

Like the Observatory itself, the model dioramas representing new species are in conscious dialogue with the dioramas and conventions of natural history museums: each cryptobiological species was meticulously researched, and is complete with a scientific name, specific climate-robust features, and estimated lifecycles. There is an encyclopedic impulse in their collection, an attempt to convey the variety and possibility of nature across its variegated climates. Some dioramas present assembled biomes, habitats in miniature that arrange numerous species in symbiotic constellation. In a sense, the dioramas are not only biological but agricultural: they display the implements and technology of cultivation and accelerated growth, and in this way also echo one of the earliest roles of museum dioramas, to educate on the process of machinic cultivation of nature.[17]

AI Diaries

The posthuman perspective of a sentient AI monitoring Earth in the Observatory raises strange questions about the subjectivity of the AI itself. Is this AI an overlord, servant, friend, or colleague? How would this agent come to terms with climate catastrophe and its role in the rebirth of the planet? How would its ethical consciousness unfold? What role would its human colleagues play in this awakening, and how might it perceive that role? What story would the AI tell about itself?

The logs of the AI’s interactions comprise an intimate journal of sorts, a glimpse into its ethical awakening. The AI communicates with the visitor and the network of remote agents through transmissions and messages akin to letters, while also receiving messages via its sensor network from myriad species – an interspecies communication between natural and artificial life. Taken collectively, these messages bear a surprising resemblance to the venerable literary form of epistolary fiction. An epistolary novel is a story that unfolds entirely through fictional letters, messages, or transmissions between its sundry characters, exposing their intimate thoughts and interpersonal connections. As a literary form, it was notably popular in the eighteenth century. The epistolary form has a particularly interesting connection to technology, science fiction and bioengineering, in that Mary Shelley’s Frankenstein is an epistolary novel. The form even extends to electronic and machine-readable messages, as in Carl Steadman’s Two Solitudes, a 1995 novella told entirely through email exchanges.

In keeping with the panoramic nature of the Observatory itself, we combined the content of the epistolary AI novel with the format of a panoramic book, drawing on precedents like Ed Ruscha’s Every Building on the Sunset Strip.[18] While Ruscha constructed a linear panorama of an urban streetscape, we propose a linear panorama of the sequential scan of the entire Earth, including every new bioengineered species introduced to it. The resulting text fuses AI diary and panorama into a journal of exchanges between this AI and its various human interlocutors. This yet-to-be published book, tentatively titled Dispatches from a Verdant Tomorrow, tells the story of climate remediation from a nonhuman perspective, as one continuous scan of Earth’s biosphere.

Figure 7 – A view of the Nursery within the Observatory. Certain Measures, 2022.

A Future Archive of Fictions

In his critique of the globe as an epistemic model, philosopher Peter Sloterdijk distinguishes between the epistemic ramifications of observing the globe from the outside or from the inside. Seeing the globe from the outside – as with the Apollo Earthrise – provides an “all-collecting awareness … the thinker feels and understands what it means to ‘know’ everything, to see everything visible, to recognize everything … the very epitome of objectivity”.[19] In contrast, the interior view places “oneself at the absolute center”, in “ecstatic-circumspective concentricity”: presumably an experience of complete subjectivity.[20] Yet between inside and outside lies the world itself, a moment at which globe and observer are coincident, one embedded in and inhabiting the other. It is that moment of coincidence and embeddedness that the Observatory aims to make tangible.

Historian Benjamin Lazier notes a similar polarity between environment and globe that illustrates how mutually defining they have become:

“The globalization of the world picture is perhaps easier to discern when we consider a parallel slippage – from ‘environment’ to ‘globe’ as it is inscribed in the phrase ‘global environment.’ The term has become a platitude, even a ritual incantation. It is in truth a Frankenstein phrase that sutures together words referring to horizons of incompatible scale and experience. Environments surround us. We live within them. Globes stand before us. We observe and act upon them from without. Globes are things that we make. They are artifacts. Environments, at least in theory and in part, are not.”[21]

The Observatory sits at that threshold between globe and environment, oscillating between the two but also introducing a third possibility: an experience of situated habitation and networked action. Through intersecting practices of speculative design, biofutures, fiction and data visualisation, the Observatory represents a comprehensive simulation of a connected biotechnical ecology.

In their analysis of urban data visualisation installations, Nanna Verhoeff and Karin van Es describe the city as a “navigable archive” and, indeed, one might make the same claim about Earth itself through the instrument of the Observatory.[22] The Observatory is a device not only for measuring and dimensioning a planetary biological archive but also for cultivating new specimens and Earth itself as an organism. It is a staging area for an active engagement between myriad human and nonhuman actors with each other and Earth itself. It is the terminus of a planetary-scale nervous system but also a sentient agent of action. It is a medium of communication with the planet, a telephone to Earth, a device for engaging in dialogue with it and its inhabitants. The Observatory is a proving ground for a more humane humanity, a tool through which we might take stock of the future of Earth and of design itself.

References

[1] R. Poole, Earthrise: How Man First Saw the Earth (New Haven: Yale University Press, 2010).

[2] B. Lazier, “Earthrise; or, The Globalization of the World Picture,” American Historical Review, June 2011, 606.

[3] G. Sutton, The Experience Machine: Stan VanDerBeek’s Movie-Drome and Expanded Cinema (Cambridge: MIT Press, 2015).

[4] H. Blumenberg, “World Pictures and World Models,” in History, Metaphors, Fables: A Hans Blumenberg Reader, J. P. Kroll, F. Fuchs and H. Bajohr, eds (Ithaca: Cornell University Press, 2020), 43.

[5] Ibid., 43.

[6] H. Sarkis, R. Salgueiro Barrio and G. Kozlowski, The World as an Architectural Project (Cambridge: MIT Press), 8.

[7] G-J. Janse, A Room to Hold the World: The Oval Room at Teylers Museum (Amsterdam: Teylers Museum, 2011).

[8] S. Mattern, “Mission Control: A History of the Urban Dashboard”, Places Journal, March 2015, <https://doi.org/10.22269/150309>, accessed 09 June 2022.

[9] Ibid.

[10] Ibid.

[11] Ibid.

[12] A. Toland, “The Learning Machine and the Spaceship in the Garden: AI and the Design of Planetary ‘Nature’,” RA. Revista de Arquitectura, 20 (2018), 216–227.

[13] Ibid., 225.

[14] R. Buckminster Fuller, Critical Path (New York: St. Martin’s Press, 1981), 172.

[15] Ibid., 180.

[16] Ibid., 183.

[17] J. Insley, “Little Landscapes: Agriculture, Dioramas, and the Science Museum,” Icon, 12 (2006): 8.

[18] E. Ruscha, Every Building on the Sunset Strip (Los Angeles: E. Ruscha, 1966).

[19] P. Sloterdijk, Spheres Volume 2: Globes (Pasadena: Semiotext(e), 2014), 85.

[20] Ibid., 88.

[21] B. Lazier, “Earthrise; or, The Globalization of the World Picture,” American Historical Review, June 2011, 614-615.

[22] N. Verhoeff and K. van Es, “Situated Installations for Urban Data Visualization: Interfacing the Archive-City”, in Visualizing the Street: New Practices of Documenting, Navigating and Imagining the City, P. Dibazar and J. Naeff, eds (Amsterdam: Amsterdam UP, 2018).

B-Pro Open Seminar: Climate F(r)ictions, 27 April 2022, The Bartlett School of Architecture, UCL
Editor’s Note
co-learning, Editorial Note, education and pedagogy, open source, Prospectives, self-cultivation
Provides Ng

provides.ng.19@ucl.ac.uk

Welcome to Prospectives!

半畝方塘一鑑開,天光雲影共徘徊。 問渠哪得清如許?爲有源頭活水來。
– 朱熹(1130–1200年)《活水亭觀書有感二首·其一》

“Half an acre of oblong pond – one that is open as a mirror,
in it, the light of sky and shadow of clouds co-linger.
One asks: how can it be so clear?
For there is a source of living water.”
– Zhu Xi (1130–1200 AD), GUAN SHU YOU GAN (“Two Thoughts from Reading Books at Living Water Pavilion”: PART I)

好雨知時節,當春乃發生。隨風潛入夜,潤物細無聲。
– 杜甫(712–770年)《春夜喜雨》

“Good rain knows the season, when spring is here.
It sneaks into the night wind, moistening things fine and silently.”
– Du Fu (712–770 AD), “Delighting in Rain on a Spring Night”

大學之道,在明明德,在親民,在止於至善。 … 物格而後知至;知至而後意誠;意誠而後心正;心正而後身修;身修而後家齊;家齊而後國治;國治而後天下平。自天子以至於庶人,壹是皆以修身為本。
– 《大學·禮記》(公元前770–476/403年)

“The way of great learning consists in manifesting one’s bright virtue, consists in loving the people, consists in stopping in perfect goodness. … When things are investigated, knowledge is extended. When knowledge is extended, the will becomes sincere. When the will is sincere, the mind is correct. When the mind is correct, the self is cultivated. When the self is cultivated, the clan is harmonised. When the clan is harmonised, the country is well governed. When the country is well governed, there will be peace throughout the land. From the king down to the common people, all must regard the cultivation of the self as the most essential thing.”
– The Great Learning, The Book of Rites (770–476/403 BC) (Translated by A. Charles Muller, July 4, 1992)

With this trilogy of excerpts, I sincerely welcome you to another issue of Prospectives: a literary platform that is free and open to all. As a lecturer of History and Theory at the B-pro, I am grateful to say that I have the best of teachers – the consolidation of thousands of years of world history and theory – and I hope that Prospectives’ readers can and will also learn from the best. With The Bartlett’s efforts in promoting equality, diversity and inclusivity (EDI), we always encourage students to embed their own cultural ontology in their study; interculturality and interdisciplinarity bring novelty to research, and add to the efforts in spawning shared cultural expressions and mutual respect through reciprocal understanding.

Searching through my own culture, the three excerpts above – respectively from the 12th century AD, the 8th century AD, and the 8th century BC – are chosen because of their timelessness. On the other hand, matters of open-sourcing, education, co-learning and self-cultivation are as timely as ever; traditional institutions are simultaneously challenged and complemented by new ways of learning.

The first excerpt is a metaphorical poem of Chinese landscapes (借景喻理), taking an open pond as an analogy for a clear mind, able to reflect as clearly as a mirror. How can the mind be clear? “For there is a source of living water” – which speaks to me of open sourcing.

At the same time, the clearest mirror of all is history (以史為鏡):

以人為鑑,可以明得失;以史為鑑,可以知興替
–(李世民, 598–649年)

“Taking people as a mirror, you can understand the pros and cons; taking history as a mirror, you can know the ups and downs.”
– (Emperor Taizong of Tang, 598–649 AD)

In more recent history, when Martin Heidegger was interviewed for Der Spiegel in 1966, he said that “academic ‘freedom’ was only too often a negative one: freedom from the effort to surrender oneself to what a scientific study demands in terms of reflection and meditation.” To reverse engineer this, then, a positive freedom demands reflection and meditation. Coming from a philosopher who is famous for his reflections and meditations on a hammer and its relationship to “being”, his thinking testifies that “when things are investigated, knowledge is extended”. What is the value of extending knowledge? Sincerity, correct minds, cultivated self, harmony in governance, and peace: “From the king down to the common people, all must regard the cultivation of the self as the most essential thing.” In other words, investigate things so that we may know how to be in this world. Such is the urgency in our epoch of climate change, which demands collective reflections and meditations – or co-learning.

Lastly, what determines good education? Good education is like fine rain in springtime: it comes at the right season; not early, not late – it teaches according to each individual’s aptitude and tempo (因材施教). It washes and enriches, quiet and non-clamorous – it teaches by example, beyond the verbal (身教重於言教). It is fine and gentle, it cultivates the environment, day and night – so that knowledge and virtues may immerse the ears and imbue the eyes (耳濡目染).

Issue 03: Climate F(r)ictions

Following those reflections on rain, ponds, and water, perhaps there is no better segue to the discussion of Climate F(r)iction – a polysemy of climate friction and fictions (Cli-Fi). According to a journal article published in 2003 by B. Levrard and J. Laskar, “[d]elayed responses in the [ice/water] mass redistribution may introduce a secular term in the obliquity evolution, a phenomenon called ‘climate friction’”. Although this piece of research was investigating the Earth’s major glacial episodes, which took place on a geologic timescale, it nevertheless warns us that the consequences of our actions may lead to immediate effects on a planetary scale, and of a magnitude beyond the imagination of any Cli-Fi.

Curated by our very own Déborah López and Hadin Charbel at the B-pro, “Climate F(r)iction” is an issue that looks at the intersection of ecologies, technologies, and ideologies. López and Charbel, who are architects and founders of the Pareid studio, lead Research Cluster 1 “Monumental Wastelands” at the B-pro, which focuses on cli-migration and autonomous ecologies, using methods of “decoding” and “recoding” through Cli-Fi.

In the production of this issue, an exceptional panel of guests were invited to participate in an open-seminar and roundtable on 27 April 2022 at the Bartlett B-pro. The work and methodologies which they have used to scrutinise, communicate, and respond to our techno-climatic future(s) were incredibly diverse, and yet, their combined contributions reminded me, above all, of a line spoken by Rufus Scrimgeour: “These are dark times, there is no denying. Our world has perhaps faced no greater threat than it does today.” These words may have been spoken in a work of fiction and in an entirely different context, but despite this, the sentiment should not be taken lightly.

Acknowledgements

I have here tried to curb my own tendency to assemble hopelessly long lists of acknowledgments – Prospectives is blessed to have been indulged by numerous supporters – but as those who have contributed to Prospectives and the B-pro continue to serve relentlessly, please do refer to the acknowledgements in Issue 02.

Nevertheless, I must give thanks once again to those who have strived and delivered within the timeline, especially our authors, curators, advisory board members, copyeditor and proof-reader Dan Wheeler, web-developer Arjun Harrison-Mann, our research assistants, and all the professional services staff. Most important of all, our internal senior advisors – Professor Mario Carpo, Professor Frédéric Migayrou, Roberto Bottazzi, Andrew Porter, Gilles Retsin, and Professor Bob Sheil – without whom Prospectives would not have been possible. Last but not least, our Managing Editor Mollie Claypool, who has made the ground fertile for the germination and growth of ideas.

Prospectives has been generously supported by our subscribers and readers, as well as the Architecture Projects Fund (The Bartlett School of Architecture, University College London), which enables authors and readers to publish and access knowledge free of charge. With this, I shall leave you to enjoy the third issue of Prospectives: Climate F(r)ictions.

Figure 9 – Climate Squatters Community (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2)
The Apparatus of Surveillance  
Algorithmic, Apparatus, Biopower, Climate Migrants, Necropolitical, Public Engagement in the Apparatus
Nora Aldughaither

norah.aldughaither.21@ucl.ac.uk

Climate Migrants in the Algorithmic Age 

Technological developments have prompted a parallel discourse on the bond between ethics, exploitation and data. Advances in technology have enabled a contemporary form of resource extraction and appropriation, normalising the extraction of data from users, often without their knowledge. Through our increasing dependence on technology and connected devices, we face the ubiquitous effects of an algorithmic mode of governance operating on predictive processes that limit our options and control our choices. Indeed, data provides progress and development while simultaneously controlling, governing and abandoning. Algorithmic influence creates new concentrations of power in the hands of the institutions and corporate entities that own and collect data.[1]

“It is no longer enough to automate information flows about us; the goal now is to automate us.”[2] 

A planetary-scale disaster is looming, falling unevenly on the unprivileged of the world, displacing them due to its impacts on their territory. This catastrophic event will create large numbers of climate migrants who will simultaneously face the obstacles of our modern world’s algorithmic governance. Climate change is a planetary problem, but its consequences are felt differently around the world, creating a climate injustice, as some areas, especially in the global south, are more vulnerable than others (Figure 1). “We face the ugly reality of planetary scale ecological disaster, one that is falling unevenly on the world’s underprivileged and dispossessed populations.”[3] 

Today’s concern is for those at the margins of society, such as refugees and climate migrants, who struggle to function under this new mechanism of algorithmic domination. Because they are perceived as incalculable, the system bears discriminatorily on their habitability, deploying methods of exclusion biased in its own favour and creating controlled, algorithmically demarcated spaces marked by segregation and surveillance. They are exposed to extraction and predation but later drained and excluded; people who have been exhausted are reduced to mere data as their behaviours, desires and dreams become predictable, and are thus rendered expendable.[4] These governance technologies produce new instruments of power that facilitate modes of prediction and calculation, treating life as an object calculable by computers.[5]

The research will explore the necropolitical impacts of algorithmic governance on climate migrants. It will then investigate the notion of the apparatus and how digital technologies extend Michel Foucault’s idea of the apparatus as a tool for capture and control. Since technology has the quality of being planetary, this research will speculate on the role of a participatory digital system in the lives of climate migrants, following the Fun Palace principles, which aim to operate on autonomous and non-extractive policies and in opposition to surveillance and control.

Figure 1 – Dotdotdot, Planet Calls – Imaging Climate Change (2021), Museum of Art, Architecture and Technology, Lisbon. 

Necropolitical Effects on Climate Migrants 

Novel practices of resource extraction and exploitation have emerged with technological acceleration, in which data is considered a vital material to harness. Usman Haque asserts that the addiction to collecting ever more data to make the algorithm work better leaves behind a surplus of the population who are reduced to matter.[6] Data is extracted from people and consumed by institutions to be utilised and commodified, “reducing all that exists to the category of objects and matter”, in line with Achille Mbembe’s notion of Necropolitics.[7] The mode of governance is shifting from humans to technologies that can dehumanise people, turn them into data-producing tools, and reduce those deemed surplus to superfluous bodies, abdicating any responsibility towards them.[8] This is a mode of authority that leaves behind the portion of the population deemed useless, including climate migrants, who cannot be exploited under a mode of governance dependent on user-generated data. Threatened by climate-induced catastrophes, these climate migrants have fled as their parts of the world become inhospitable, occupying an in-between borderland space and unable to navigate the contemporary world of algorithmic governance.

Ezekiel Dixon-Román states that the algorithms examining our data shape and form our lives.[9] The raw data extracted is analysed by processes owned by companies and then relayed back to humans, making them passive receptors with minimal participation. This creates a system that fractures what we perceive as essential, narrows our perspectives, and transforms humanity into the category of matter and objects, in what Mbembe defines as Brutalisme.[10] Mbembe draws this term from architecture to describe a process of transforming humanity by reducing it into matter and energy. As technology threatens to change people’s perceptions and turn them into artefacts through processes of exploitation, appropriation and Brutalisme, we confront the necropolitical consequence of what the algorithm deems superfluous in the algorithmic age: reducing humans to a state in which they are expendable. It is through Brutalisme that Necropolitics is actualised. 

Haque argues that institutions have a growing tendency to abdicate responsibility to decisions generated by the algorithm,[11] which poses a considerable concern when employed in necropolitical systems that decide who lives and who dies. Rosi Braidotti echoes this worry in the case of autonomous military drones, noting that the Netherlands military academy is deeply concerned about the code of conduct of drone firing.[12] Humans are reduced to pixels on a screen, where missiles are fired to eliminate a pixel on a grid. When Necropolitics is adopted in the digital world, what follows is what Ramon Amaro describes in the process of algorithmic design: there will always be a contingency, meaning that something or someone will be left behind.[13] That occurs through a process of optimisation, the skilful removal of waste, whether that waste is time, effort or human.[14] The algorithmic process will mostly fail to consider climate migrants who have been displaced because anthropogenic climate change has made their territory uninhabitable.  

Biopower Tool 

This algorithmic governance is operated by digital devices, a form of apparatus of surveillance and control. Apparatus in this discourse references both Foucault’s definition and Giorgio Agamben’s interpretation – a translation of the French word dispositif, used by Foucault in 1970 to describe “a series of discourses, institutions, architectural forms, regulatory decisions, … that work as a technology of power and subjectivation”.[15] Agamben further describes apparatus as “anything that has in some way the capacity to capture, orient, determine … the gestures, behaviours or discourses of living beings”.[16] He does not limit it to instruments whose connection with power is evident but also includes computers and cellular telephones, amongst others. 

Digital devices function as an apparatus by capturing our data and controlling our behaviours, operating as an instrument of power in the hands of those who own this algorithmic mode of governance. In Foucauldian terms, they are a form of disciplinary tool and a biopolitical technique of “subjectivation” that emerged from the capitalist regime to impose a novel model of governmentality on the people. Thus, a new form of capitalism appears, filled with control apparatuses in the hands of a powerful few, as the technologies of this capitalist culture have the power to become embedded in our bodies, capturing our behaviours and controlling our actions. “Foucault claims that a dispositif creates its own new rationality and addresses urgent needs.”[17] These needs are apparent, as capitalist institutions aim to collect more data, monetising people’s lives with the excuse of providing a better service. 

Public Engagement in the Apparatus 

Data collection and extraction generate massive profits for data collectors, sometimes at the users’ expense; the power of algorithmic authority should instead be used to facilitate justice, autonomy and transparency. The focus here is on exploring a participatory system in response to extractive technologies and their progressive influence on the lives of vulnerable individuals such as climate migrants. Adopting such practices would allow the co-design of future digital technologies that would otherwise stand in the way of mobility. Participation should mean extensive involvement and contribution – as in the “Fun Palace” concept by architect Cedric Price, where the users became the designers. A similar approach could be employed in a participatory system in which climate migrants are more involved in the systems that dictate their future. 

Exploring a Virtual Fun Palace 

The Fun Palace is a social experiment opposing those forms of social control that inevitably influence the usage of public spaces. What is required is to explore a participatory system that could ensure autonomy and flexibility by analysing how the Fun Palace’s principles might be applied virtually. Its fundamentals could permit autonomy, thus undermining current structures of power and control. Digital platforms could apply the same notions of accessibility, flexibility and autonomy to the user, and oppose control and surveillance. The technologies that underpin current forms of control could allow novel methods of cooperation if their use were transformed.[18] 

Price pioneered the integration of recent technologies to inform his architecture; however, in this case, the Fun Palace can be used to inform technology. Price’s concept aimed to use a bias-free technology that learns solely from its users, not for profit gain but for participation and transparency – creating a participatory architecture with the ability to respond to its users’ needs and desires: “His design for the Fun Palace would acknowledge the inevitability of change, chance and indeterminacy by incorporating uncertainties as integral to a continuously evolving process modelled after self-regulating organic processes and computer codes.”[19] 

Cybernetics and Indeterminacy 

Price enlisted Gordon Pask, an expert cybernetician, whose involvement in the Fun Palace allowed Price to achieve his goal of a new concept that integrated his interest in change and indeterminacy.[20] Pask was interested in underspecified, observer-constructed goals that oppose the goals of technologies of control. The Fun Palace program accommodated change, as it could anticipate unpredictable phenomena without relying on a determined program.[21] These methods of granting users freedom, participation and shared scientific knowledge were meant to replace authoritarian control with an autonomous one.  

Adaptability and flexibility in responding to users’ needs required cybernetics for participants to communicate with the building (Figure 2). Pask’s conversation theory was the essence of the program, moving a step closer to authentic autonomy in a genuinely collaborative system.[22] Underspecified goals oppose systems where the designer initially programs all parts and behaviours of a design, limiting the system’s functions to the designer’s prediction of deterministic goals. Predetermined systems keep the user under the control of the machine and its preconfigured system, since they can only respond to pre-programmed behaviour. These systems eliminate the slight control users have over their surroundings and necessitate that they instead put their trust in the assumptions of the system’s designers.[23] 

Currently, as Haque states, “Pask’s Conversation Theory seems particularly important because it suggests how, in the growing field of ubiquitous computing, humans, devices and their shared environments might coexist in a mutually constructive relationship”.[24] A model that ensures the collective goals of users are reached through their direct actions and behaviours – and that those goals are desired and approved by the users – is the kind of model that digital technologies should aim for. The program of the Fun Palace was autonomous in that there was no authoritative hierarchy that dictated the program and space usage.  

Transparency, Control and Participation 

Designed as a machine with an interactive and dynamic nature, the Fun Palace implemented novel user participation and control applications. Cybernetician Roy Ascott proposed the “Pillar of Information”, which was an accessible electronic kiosk placed at the entrance that could search for and reveal information. “This system was among the earliest proposals for public access to computers to store and retrieve information from a vast database.”[25] As implemented in the Fun Palace, “a cybernetic approach does not reject or invalidate the use of data; instead, it suggests that a different role for data needs to be perceived in the process of intervening in disadvantages and creating social change”.[26] 

Price’s concern related to the effect architecture had on its users. He was convinced that it should be more than a shelter containing users’ activities; it should also support them, with the users’ emancipation and empowerment as its true objectives. Control is thus shifted from the architects to the users, allowing the users to be responsible for constructing the world around them. Digital technologies should not divert from their objective of ensuring convenience and empowering people for the sake of data extraction for profit, surveillance and control.  

Climate Migrants in a Participatory System  

A platform cooperative for climate migrants that aims to ensure the interest of all, and to increase transparency and democracy, would be a departure from the extractive and authoritative system. A participatory and open digital design would allow the freedom of climate migrants from the restraints of their preconceived, biased, incorrect digital profiles created by algorithms. This system would contribute to the rise of autonomy, privacy and freedom for climate migrants. It would be a cooperative, transparent and user-centred approach for seeking common objectives that minimises concerns about profiling, collection of personal data and surveillance. 

Climate Squatters 

The implementation of a virtual participatory platform for climate migrants was explored in the design project “Climate Squatters” by The Bartlett AD Research Cluster 1, 2021-22, Team 2. Climate migrants from the village of Happisburgh would utilise a participatory digital platform that enables them to travel intelligently as modern squatters, allowing them to be active agents in their relocation, habitation and migration process. The project forms around the idea of a non-extractivist and autonomous communal unit without fixed habitation, granting climate migrants autonomy, flexibility and empowerment in a continuous relocation process triggered by the existential threat of coastal erosion. The Climate Squatters platform aims to address the issues of decreased ownership and control by reconceptualising the users’ roles, so that they act as active contributors in the process.  

Happisburgh is a village on the eastern coast of the United Kingdom. It lies in one of the most dangerous areas of coastal erosion in the UK; it is estimated that Happisburgh will lose around one hundred metres of its coastal land over the next twenty years (Figure 5). The erosion rate has increased significantly due to rising sea levels and climate change. The current governmental coastal management plan is No Active Intervention, meaning no investment will be made in defending against flooding or erosion. The plan states that, given current coastal processes, sea level rise and national policy, there is no sustainable option for coastal defences; it fails to respond to the people’s needs and makes them feel disregarded.

Figure 5 – Happisburgh Coastal Erosion (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).

Using Climate Squatters’ platform would empower the climate migrants in the various aspects of the migration process. The platform allows autonomy by granting the users the option to participate in the process and vote on where they would like to relocate from a list of suitable land options. Placing a heavy value on the community, the platform starts by decoding the village’s typology, material and identity using machine learning. Happisburgh is “decommissioned” by disassembling what is salvageable from the houses into voxelised masses. The constant migration of the climate squatters requires a unique construction that optimises space and material and allows for easy assembly and disassembly. The recoding of the future habitat of climate migrants operates by utilising wave function collapse to generate their new typologies. The live platform will also sustain the community by analysing relevant incentives and taking advantage of them, giving the users a live view of their performance and future expectations to maintain or enhance their position. 
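The wave function collapse step mentioned above can be sketched minimally as follows. The module tags and adjacency rules here are invented for illustration – they are not the project’s actual voxel library or constraint set – but the mechanism (keep every cell as a set of possibilities, collapse the most constrained cell, propagate constraints) is the standard one:

```python
import random

# Hypothetical salvaged-module tags and adjacency rules: which module
# may stand immediately to the right of which. Illustrative only.
ALLOWED = {
    "wall":   {"wall", "window", "door"},
    "window": {"wall", "window"},
    "door":   {"wall"},
}

def collapse_strip(length, rules, seed=0):
    """Generate a strip of modules by wave function collapse: keep each
    cell as a set of candidates, repeatedly collapse the cell with the
    fewest options, and propagate adjacency constraints."""
    rng = random.Random(seed)
    cells = [set(rules) for _ in range(length)]

    def propagate():
        changed = True
        while changed:
            changed = False
            for i in range(length - 1):
                # right neighbour must be allowed by some left option
                allowed_right = set().union(*(rules[m] for m in cells[i]))
                new_right = cells[i + 1] & allowed_right
                if new_right != cells[i + 1]:
                    cells[i + 1] = new_right
                    changed = True
                # left cell must allow some remaining right option
                new_left = {m for m in cells[i] if rules[m] & cells[i + 1]}
                if new_left != cells[i]:
                    cells[i] = new_left
                    changed = True

    propagate()
    while any(len(c) > 1 for c in cells):
        i = min((i for i in range(length) if len(cells[i]) > 1),
                key=lambda i: len(cells[i]))  # most constrained cell
        cells[i] = {rng.choice(sorted(cells[i]))}
        propagate()
    return [next(iter(c)) for c in cells]

strip = collapse_strip(8, ALLOWED, seed=42)
```

Every generated strip satisfies the adjacency rules by construction, which is why the technique suits recombining salvaged voxel masses into valid new typologies.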

Figure 6 – Decoding with Heatmaps and Machine Learning (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).
Figure 7 – Beyond Voxels (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).
Figure 8 – Platform House Generation and Allocation (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).

The platform aims to instil trust in the user and grant them autonomy and flexibility by operating as a non-extractive tool, without predetermined goals, that will empower the user in their journey and ensure their secure habitation in a world of uncertainties. It also aims to learn from the users’ behaviours and to operate on a method of buildable knowledge, continuously evolving based on users’ objectives. By redistributing the roles between the users and the platform, the model ensures that the platform will function as an enabler and supporter of the user. Following Price’s model, the employment of uncertainty and indeterminacy would help climate migrants navigate a journey filled with unpredictable events, thus advancing the dialogue between users and the digital platform. Climate Squatters’ platform seeks to enhance autonomy, flexibility and freedom, and to create a community of climate squatters that represent a response to an ever-changing world due to the consequences of climate change. 

Figure 9 – Climate Squatters Community (The Bartlett AD RC 1, 2021-22, Project: Climate Squatters, Team 2).

Digital technologies could challenge traditional models that place a dichotomy between designer and user. Instead, a method can be realised where the user can take a primary role within the system in which they participate, contrasting the prevailing approach of predefined and predetermined systems that restrict the users. “It is about designing tools that people themselves may use to construct – in the broadest sense of the word – their environments and, as a result, build their own sense of agency.”[27] The control is then transferred to the users, where the users are responsible for constructing the world around them. 

Utilising the Fun Palace principles in digital technologies will benefit climate migrants by delivering them a neutral and virtual space to navigate the world without the intrusion of biased algorithms. Non-extractive technologies will prove helpful for climate migrants as they aim to be mobile once climate change has rendered their current home unfit for habitation. Giving the users control of their data will create a transparent digital platform to counter the current extractive and control apparatus. 

A new platform cooperative for climate migrants should be considered to protect their future with transparency, empowerment and equality. Centred around bias elimination and avoiding the harvesting of personal data, this new system would prove more beneficial than capitalism’s current apparatus. This method could enable new modes of freedom, security and emancipation for climate migrants; a system that reduces data extraction, exploitation and bias, promoting a safe, flexible and autonomous approach. A participatory method could potentially alter the biased and surveillance-ridden systems that dominate the digital world. 

References 

[1] A. Mbembe, Theory in Crisis Seminar “Notes on Brutalism” (online), 2020 (accessed 22 November 2021). Available from: https://www.youtube.com/watch?v=tc34afvyL68.

[2] S. Zuboff, The Age of Surveillance Capitalism (London: Profile Books, 2019), 8. 

[3] L. Likavčan, Introduction to Comparative Planetology (Moscow: Strelka Press; 2019), 11. 

[4] J. Confavreux, “Long Read | Africa: Strength in reserve for Earth” (online), New Frame, 2020 (accessed 26 November 2021). Available from: https://www.newframe.com/long-read-africa-strength-in-reserve-for-earth.

[5] A. Mbembe, Theory in Crisis Seminar “Notes on Brutalism” (online), 2020 (accessed 22 November 2021). Available from: https://www.youtube.com/watch?v=tc34afvyL68.

[6] U. Haque, “Big Bang Data: Who Controls Our Data?” (online), Somerset House, 2016 (accessed 25 November 2021). Available from: https://www.mixcloud.com/SomersetHouse/big-bang-data-who-controls-our-data-usman-haque-debates-the-implications-of-the-data-explosion.

[7] S. Bangstad, T.T. Nilsen, A. Eliseeva, “Thoughts on the planetary: An interview with Achille Mbembe” (online) New Frame. 2019 (accessed 26 November 2021). Available from: https://www.newframe.com/thoughts-on-the-planetary-an-interview-with-achille-mbembe.

[8] A. Mbembe, Necropolitics (Durham: Duke University Press, 2019), 97. 

[9] E. Dixon-Román, “Algo-Ritmo: More-Than-Human Performative Acts and the Racializing Assemblages of Algorithmic Architectures”, Cultural Studies Critical Methodologies, 2016, 16 (5), 482-490. DOI: https://doi.org/10.1177/1532708616655769.

[10] A. Mbembe, Theory in Crisis Seminar “Notes on Brutalism” (online), 2020 (accessed 22 November 2021). Available from: https://www.youtube.com/watch?v=tc34afvyL68.

[11] U. Haque, “Big Bang Data: Who Controls Our Data?” (online), Somerset House, 2016 (accessed 25 November 2021). Available from: https://www.mixcloud.com/SomersetHouse/big-bang-data-who-controls-our-data-usman-haque-debates-the-implications-of-the-data-explosion.

[12] R. Braidotti, “Posthuman Knowledge” (online), Harvard GSD, 2019 (accessed 24 November 2021). Available from: https://www.youtube.com/watch?v=0CewnVzOg5w.

[13] R. Amaro, “Data Then and Now” (online), University of Washington, 2021 (accessed 29 November 2021). Available from: https://www.youtube.com/watch?v=uEX8JI6Xntk.

[14] Ibid. 

[15] P. Preciado, Pornotopia (Zone Books, 2014). 

[16] G. Agamben, “What Is an Apparatus?” and Other Essays (Stanford University Press, 2009). 

[17] S. Lee, “Architecture in the Age of Apparatus-Centric Culture” (online) TU Delft, 2014 (accessed 2 February 2022). Available from: https://repository.tudelft.nl/islandora/object/uuid:fa31ddf9-a227-48e8-a3eb-1f5ca7e39010/datastream/OBJ1/download.

[18] M. Lawrence, “Control under surveillance capitalism: from Bentham’s panopticon to Zuckerberg’s ‘Like’” (online), Political Economy Research Centre, 2018 (accessed 29 January 2022). Available from: https://www.perc.org.uk/project_posts/control-surveillance-capitalism-benthams-panopticon-zuckerbergs-like.

[19] S. Mathews, “The Fun Palace as Virtual Architecture” (online), Journal of Architectural Education, 2006, 59 (3), (accessed 8 February 2022), 39-48, 40. 

[20] Ibid, 40. 

[21] Ibid, 44. 

[22] U. Haque, “The Architectural Relevance of Gordon Pask”, Architectural Design, 2007, 77 (4), 54-61, 58. Available from: https://www.haque.co.uk/papers/architectural_relevance_of_gordon_pask.pdf.

[23] Ibid, 60. 

[24] Ibid, 55. 

[25] S. Mathews, “The Fun Palace as Virtual Architecture” (online), Journal of Architectural Education, 2006, 59 (3), (accessed 8 February 2022), 39-48, 45. 

[26] G. Bell, M. Gould, B. Martin, A. McLennan, E. O’Brien, “Do more data equal more truth? Toward a cybernetic approach to data,” Australian Journal of Social Issues, 2021, 56 (2), 213-222, 219. 

[27] U. Haque, “The Architectural Relevance of Gordon Pask”, Architectural Design, 2007, 77 (4), 54-61. Available from: https://www.haque.co.uk/papers/architectural_relevance_of_gordon_pask.pdf.

Figure 1 – Perspective image of an isolated agropalace implanted on a flooded topography. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022.
Biomatic Agropalaces: Overflowing Vermiform Artefacts
Artifices, Biomatic, Ecological Fiction, Post-Anthropocentric, Vermiform
Sofia Giayetto, Alejandro Eliseo Cibello, Ornella Martinelli, Pedro Ariel Rovasio Aguirre, Candela Valcarcel

sofigiayetto@gmail.com

At present, we find ourselves at a critical juncture: the current rate of food production is impossible to maintain in the face of the climate threat, and new forms of social organisation have not yet been implemented to solve the problem. This project constitutes a possible response to the conditions we will inevitably soon be facing if we do not develop sustainable ways of life that promote coexistence between species. 

The construction of a new paradigm requires the elimination of current divisions between the concepts of “natural” and “artificial”,[1] and consequently of the differentiation of the human from the rest of the planet’s inhabitants. This post-anthropocentric vision will build a new substratum to occupy, one which will promote the generation of an autarchic ecology based on the coexistence between living and non-living entities. 

The thesis extends through three scales. The morphology adopted in each scale is determined by three parameters simultaneously. First, climate control through water performance; second, the material search for spaces that allow coexistence; and lastly, the historical symbolism to which the basilica typology refers. 

On a territorial scale, the project consists of the generation of an artificial floodable territory occupied by vermiform palaces, which are organised a-hierarchically as a closed system and take the form of an archipelago. 

On the palatial scale, water is manipulated to generate a humidity control system that enables the recreation of different biomes inside the palaces through the permeability of their envelope. 

Finally, on a smaller scale, the architecture becomes more organic and flexible, folding in on itself to constitute the functional units of the palaces, which aim for agricultural production, housing needs and leisure; the function of each unit depends on its relationship with water and its need to allow passage and retain it. 

The entire project takes form from, on the one hand, the climatic situations that each palace requires to house its specific biome, and, on the other hand, the spatial characteristics required by the protocols that are executed in it. To allow the development of a new kind of ecology, the architecture that houses the new protocols of coexistence will be: agropalatial, a-hierarchical, sequential, stereotomic, and overflowing. 

In the following chapters, we will develop in depth the architectural qualities mentioned above. 

Post-Anthropocentric Ecologies: Theoretical Framework

We are currently living in the era of the Anthropocene,[2] in which humans are considered a global geophysical force. Human action has transformed the geological composition of the Earth, producing a higher concentration of carbon dioxide and, therefore, global warming. This process began with the first Industrial Revolution, although it was only after 1945 that the Great Acceleration occurred, ensuring our planet’s course towards a less biologically diverse, much warmer and more volatile state. The large-scale physical transformations produced in the environment through extractive practices have blurred the boundaries between the “natural” and the “artificial”. 

In Ecology Without Nature,[3] Morton raises the need to create ecologies that dismiss the romantic idea of nature as something not yet sullied by human intervention – out of reach today – and go beyond a simple concern for the state of the planet, strengthening the existing relationships between humans and non-humans.

In this line of thought, we reject the concept of “nature” and consider its ecological characteristics to be reproducible through the climatic intelligence of greenhouses. These ecologies should be based on a principle of coexistence that not only allows but celebrates diversity and the full range of feelings and sensibilities that it evokes. 

According to Bernard Tschumi,[4] the relationship between the activities and the shape of the building can be one of reciprocity, indifference, or conflict. The type of relationship is what determines the architecture. In this thesis, morphology is at the service of water performance, hence why the activities that take place inside the agropalaces must redefine their protocols accordingly. 

Agropalatial Attribute

Palaces are large institutional buildings in which power resides. Their formal particularities have varied over time. However, some elements remain constant and can be defined as intrinsic to the concept of a palace, such as its large scale, the number of rooms, the variety of activities which it houses and the ostentation of luxury and wealth. 

In the historical study of palaces, we recognised the impossibility of defining them through a specific typology. This is because their architecture was inherited from temples, whose different shapes are linked to how worship and ceremonies are performed. It is, therefore, possible to deduce that if there are changes in the behaviour of believers, this will generate new architectural needs. 

In the same way that architecture as a discipline has the potential to control how we carry out activities based on the qualities of the space in which they take place, our behaviours also have the power to transform space since cultural protocols configure the abstract medium on which organisations are designed and standards of normality are set up.[5] The more generic and flexible these spaces are, the longer they will last and the more resilient they will be.  

The agropalace carries out a transmutation of power through which it frees itself from the human being as the centre and takes all the entities of the ecosystem as sovereign, understanding cohabitation as the central condition for the survival of the planet and human beings as a species. 

The greenhouse typology appears as an architectural solution capable of regulating the climatic conditions in those places where there was a need to cultivate but where the climate was not entirely suitable. Agropalaces can not only incorporate productive spaces but generate entire ecosystems, becoming an architecture for the non-human. 

We take the Crystal Palace as a reference. Designed by Joseph Paxton for the Great Exhibition held in London in 1851, its internal differentiation of the structural module, its height and the shape of its roof generate architectural conditions that shape it as a humidity-controlling container, which allows us to use it as the basis of our agropalatial prototype. 

Our prototype based on the Crystal Palace is designed at first as a sequence of cross-sections. Their variables are the width and height of the section, the height and width of the central nave, the slope of the roof, the number of vaults, an infrastructural channel that transports water and, finally, the encounter with the floor. Each of these variables contributes to regulating the amount of water that each biome requires.
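The section variables listed above lend themselves to a simple parametrisation. The sketch below mirrors those variables as fields; the field names, example dimensions and the channel-capacity formula are stand-ins for illustration, not the project’s actual hydraulic model:

```python
from dataclasses import dataclass

# Illustrative parametrisation of one agropalatial cross-section.
# Field names follow the variables named in the text; all values and
# the capacity formula are invented for this sketch.
@dataclass
class CrossSection:
    width_m: float            # width of the section
    height_m: float           # height of the section
    nave_width_m: float       # width of the central nave
    nave_height_m: float      # height of the central nave
    roof_slope_deg: float     # slope of the roof
    vault_count: int          # number of vaults
    channel_width_m: float    # infrastructural water channel

    def channel_capacity_m2(self, channel_depth_m: float = 0.5) -> float:
        """Cross-sectional area available to transport water."""
        return self.channel_width_m * channel_depth_m

# A hypothetical section for a wet biome: wide water channel, tall nave.
wet_section = CrossSection(width_m=24, height_m=12, nave_width_m=8,
                           nave_height_m=10, roof_slope_deg=30,
                           vault_count=3, channel_width_m=2.0)
```

Sweeping such a record along each palace’s axis is one plausible way to produce the extruded sequence of sections the text describes.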

A-hierarchical Attribute 

The territorial organisation of the agropalaces must be a-hierarchical for coexistence to take place. Cooperation between agropalaces is required for the system to function. This cooperation is based on water exchange from one palace to the other. For this to occur, vermiform palaces must be in a topography prone to flooding, organised in the form of an archipelago. 

The prototype project is located in the Baix Llobregat Agrarian Park in Barcelona, which is crossed by the Llobregat river as it ends in a delta on the Mediterranean Sea. The Agrarian Park currently grows food to supply all the neighbouring cities. Our main interest in the site lies in its hydrographic network, which is fundamental to the construction of the archipelago since the position of each agropalace depends on its distance from the closest water source.  

To create a humidity map that determines the location of the palaces on the territory, we use a generative adversarial network (GAN). A GAN is a type of AI in which two neural networks are trained against each other: a generator that produces candidate images and a discriminator that learns to distinguish them from the training data, so that the generator progressively learns the patterns of the dataset. Its performance improves as it is supplied with more data. 

The GAN is trained with a dataset of 6000 images, each of them containing 4 channels of information in the form of coloured zones.[6] Each channel represents the humidity of a specific biome. The position of the coloured zones is related to the distance to the water sources that each biome requires. The GAN analyses every pixel of the images to learn the patterns of the position of the channels and to create new possible location maps with emerging hybridisation between biomes. 

The first four biomes are ocean, rainforest, tundra, and desert. Our choice for these extreme ecologies is related to the impact that global warming will have on them and the hypothesis that their hybridisation will produce less hostile and more habitable areas.  
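The 4-channel encoding described above can be sketched as follows. The grid size, water-source positions and distance bands are all invented for illustration – the project’s actual 6000-image dataset is not reproduced here – but the sketch shows the principle: each channel scores how suitable a cell is for one biome as a function of its distance to the nearest water source:

```python
# Hypothetical encoding of one 4-channel training image. Each channel
# peaks where a cell sits at that biome's preferred distance from
# water (wet biomes near it, dry biomes far from it). All constants
# below are illustrative assumptions.
GRID = 16
WATER_SOURCES = [(2, 3), (12, 10)]  # hypothetical map positions (x, y)
PREFERRED_DISTANCE = {"ocean": 0, "rainforest": 4, "tundra": 8, "desert": 14}
TOLERANCE = 4.0

def humidity_channels(grid=GRID, sources=WATER_SOURCES):
    """Return {biome: grid x grid list of scores in [0, 1]}."""
    channels = {}
    for biome, preferred in PREFERRED_DISTANCE.items():
        img = []
        for y in range(grid):
            row = []
            for x in range(grid):
                # Manhattan distance to the nearest water source
                d = min(abs(x - sx) + abs(y - sy) for sx, sy in sources)
                row.append(max(0.0, 1.0 - abs(d - preferred) / TOLERANCE))
            img.append(row)
        channels[biome] = img
    return channels

channels = humidity_channels()
```

Stacking the four channels gives one training image; because a GAN learns per-pixel patterns across such images, overlaps between the channels are where hybrid biomes can emerge in its outputs.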

We conclude that the hybridisation produced by the AI cannot be replicated by human methods. As such, we consider the AI part of the group of authors, even though its production is later curated, making this a post-anthropocentric thesis from its conception. 

Figure 2 – Matrix of GAN outputs. Left: Four images per channel; from left to right and from top to bottom: Ocean, Rainforest, Tundra and Desert. Right: Four outputs of complete humidity maps with their nine emerging biomes. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Due to the hybridisation, a gradient of nine biomes and their zones within the territory are recognised in the GAN outputs. These are, from wettest to driest: ocean, wetland, yunga, rainforest, forest, tundra, grassland, steppe, and desert. The wetter palaces are always located at a shorter distance from the water supply points, while the drier ones are located closer to the transit networks. The GAN not only expands the variety of biomes but also gives us unexpected organisations while still respecting the previously established rules.  
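The wettest-to-driest placement rule can be stated as a toy assignment: rank the nine biomes by humidity and pair the wettest with the site nearest to water. The candidate site distances below are invented for illustration:

```python
# The nine emergent biomes named in the text, ordered wettest to driest.
BIOMES = ["ocean", "wetland", "yunga", "rainforest", "forest",
          "tundra", "grassland", "steppe", "desert"]

def assign_sites(site_distances):
    """Pair each biome with a site so that the wettest biome gets the
    site nearest to water and the driest the farthest. A toy version
    of the placement rule; distances are hypothetical."""
    assert len(site_distances) == len(BIOMES)
    ranked_sites = sorted(site_distances)  # nearest water first
    return dict(zip(BIOMES, ranked_sites))

# Nine hypothetical candidate sites, as distances (m) to nearest water.
plan = assign_sites([120, 15, 300, 45, 700, 60, 210, 500, 950])
```

In the actual project this ordering is implicit in the GAN outputs rather than computed by a separate sort, but the monotone relation between humidity and water proximity is the same.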

The chosen image is used as a floor plan and allows us to define the palatial limits, which are denoted by changes in colour.  

The territory, initially flat, must become a differentiated topography so that the difference in the heights of the palaces eases access to water for those that require greater humidity. 

Figure 3 – Construction of the differentiated field of palaces based on the AI results. From top to bottom: Definition of zones of each biome. Generation of axis inside each boundary. Location of cross-sections from the agropalatial prototype. Extrusion of cross-sections forming the outer envelope of each agropalace. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

The palaces are linear, but they contort to occupy their place without interrupting the adjoining palaces, following the central axis of the zone granted by the GAN.  

This a-hierarchical, longitudinal and twisted territorial organisation forms two types of circulation: one aquatic and one dry. The aquatic palaces tend to form closed circuits without specific arrival points: an idle, unstructured circulation designed for admiring the resulting landscape of canyons. The dry circulation runs through the desertic palaces along their axes and joins the existing motorways in the Llobregat, crossing the Oasis. 

Stereotomic Attribute 

The protocols of the post-Anthropocene must exist in a stereotomic architecture, a vast and massive territory, almost undifferentiated from the ground. 

As mentioned above, our agropalatial prototype is designed as a sequence of cross-sections. Each section constitutes an envelope whose formal characteristics are based on those of the Crystal Palace, modified to suit its need to hold water. 

The determination of the interior spaces in each section depends on the fluxes of humidity necessary for generating the biome. The functional spaces are the result of the remaining space between the steam columns, the number of points where condensed water overflows towards the vaults, and the size of the central circulation channel.  

The variation in organisation according to the needs of each biome creates different numbers of functional spaces, of different sizes and shapes, allowing the protocols to take place inside them.  

The interstices where the fluxes of humidity move are organised in such a way that the forces that travel through the surfaces of the functional spaces between them reach the ground on the sides of the palace, forming a system of structural frames.  

Sequential Attribute  

The functional spaces in each cross-section are classified into three categories corresponding to the main protocols that take place inside of the agropalaces: production, housing and leisure. 

The classification depends on the size and shape of each functional space and on its distance to light and water, determining which protocol it is best suited to house. Every cross-section contains at least one functional space of each kind. 
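A minimal sketch of such a rule-based assignment might look as follows; the `FunctionalSpace` attributes, the thresholds and the priority of the rules are illustrative assumptions, not the thesis's actual criteria:

```python
# Hypothetical sketch of the classification described above: each functional
# space is characterised by size, shape and distance to light and water, and
# assigned the protocol it best suits. All thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class FunctionalSpace:
    area: float            # cross-sectional area, m^2
    elongation: float      # length/width ratio (1.0 = compact)
    dist_to_light: float   # metres to the glazed envelope
    dist_to_water: float   # metres to the nearest humidity flux

def assign_protocol(s: FunctionalSpace) -> str:
    # Production favours elongated spaces close to light and water.
    if s.dist_to_light < 5 and s.dist_to_water < 5 and s.elongation > 2:
        return "production"
    # Leisure favours large spaces close to water (pools, slides).
    if s.area > 100 and s.dist_to_water < 10:
        return "leisure"
    # Everything else defaults to housing.
    return "housing"
```

The guarantee that every cross-section contains at least one space of each kind would, in a scheme like this, have to be enforced as a separate constraint after the initial assignment.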

These two-dimensional spaces are extruded, generating the “permanent” spaces in which the activities are carried out. These connect with the “permanent” spaces of the same category in the subsequent cross-section, forming “passage” spaces.  

Thus, three unique, long, complex spaces – one for each protocol – run longitudinally through the palaces, in which activities are carried out in an interconnected and dynamic way. The conservation protocol – the biome itself – is the only non-sequential activity, since it is carried out in the interstice between the exterior envelope of the agropalace and the interior spaces. 

Figure 4 – Left: Longitudinal Section of an Agropalace that holds a Tundra biome. Right: Variations of the cross-sections–in pink: humidity fluxes. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Protocols

The need for production has turned cities and agricultural areas into hyper-specialised devices, making their differences practically irreconcilable. However, we understand that this system is obsolete, which is why it is necessary to emphasise their deep connection and how indispensable they are to each other.  

For this reason, agropalaces work through the articulation of different scales and programs, considering the three key pillars on which we must rely to build a new post-anthropocentric way of life – ecological conservation, agricultural production and human occupation – the latter prioritising leisure. 

Protocol of Production 

Among currently available methods, we take hydroponic agriculture as the main means of production, together with aeroponic agriculture, since both replace the terrestrial substrate with mineral-rich water. 

The architectural organisation that shapes the agricultural protocol in the project is based on a central atrium that allows the water of the biome to condense and be redirected to the floodable platforms that surround it. In each biome, the density of the stalls, their depth, and the size of the central atrium vary in a linear gradient, ranging from algae and rice plantations to soybeans and fruit. The agricultural protocol in the agropalaces manages water passively, through surface condensation and gravity, generating a spiral distribution around a central circulation that creates landscape while cultivating efficiently.

Figure 5 – Diagrams and sections of functional spaces and their protocols in each biome. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Protocol of Housing 

In defining the needs of a house, Banham reduces it to an atmospheric situation, with no regard for its form.[7] This dispossession of formal conditions allows us to modify the current housing protocol: to project a house whose shape results from passive climatic manipulation and from the need to generate a variety of spatial organisations that do not restrict the type of social nucleus. 

The spatial organisation of the house in the project is built through circulatory axes and rooms. The position of the circulatory axes and the number and size of the rooms vary depending on the biome, this time not based on humidity, but on the type of life that each ecological environment encourages. The height and width of the spaces also vary, generating the collision of rooms and thus allowing the formation of larger spaces or meta-rooms. The protocol of habitation in the agropalaces then allows a wide range of variation in which people are free to choose the form in which they wish to live, temporarily or permanently, individually or in groups. 

Protocol of Leisure

Leisure is one of the essential activities of the post-Anthropocene because it frees human beings from their proletarian condition, characteristic of current capitalism, and connects them with the enjoyment of themselves and their surroundings. The leisure protocol in the thesis consists of a series of slabs with variable depths that constitute pools at different levels, interconnected by slides, which are to varying degrees twisted or straight, steep or shallow, and covered or uncovered. 

The leisure protocol is based on the behaviour of water, which varies in each biome. The quantity, depth and position of the pools vary, diminishing the more desertic the biome that houses them. In this way, water parks and dry staggered spaces are generated, in which all kinds of games and sports take place. In the agropalaces, contrary to being relegated to specific times and places, leisure becomes a form of existence itself.  

Overflowing Attribute 

Finally, to achieve coexistence, the architecture developed must be permeable. All the layers that contribute to the complexity of the project exchange fluids – mainly water – with the environment. 

Water penetrates each of them; they use it to generate the ambient humidity desired for their biome, and the excess then overflows onto the roof. The system works sequentially, from the wettest to the driest biomes: once a palace overflows its residual water, the succeeding one can use it to its advantage, until it eventually overflows in turn.  
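The inter-palatial cascade described above can be sketched as a simple water-budget loop; the `cascade` function and the demand figures below are hypothetical illustrations, not drawn from the project:

```python
# Hypothetical sketch of the inter-palatial cascade: water enters the wettest
# palace, each palace retains what its biome needs, and the surplus overflows
# to the next drier palace downstream. Supply and demand figures are
# illustrative assumptions.
def cascade(supply: float, demands: list[float]) -> list[float]:
    """Return the water retained by each palace, wettest to driest."""
    retained = []
    for demand in demands:
        used = min(supply, demand)
        retained.append(used)
        supply -= used          # the remainder overflows downstream
    return retained
```

For example, `cascade(100, [40, 30, 20, 20])` returns `[40, 30, 20, 10]`: the driest palace receives only what the wetter ones let overflow.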

Inside every palace, a sequence of overflows is generated at the intra-palatial scale. Humidity enters the agropalace through its internal channel, where it evaporates and rises until it condenses on the surfaces of the functional spaces, penetrating them to be used in different activities. The residual water evaporates again until it overflows. The process is a cyclical system with constant recirculation. 

The functional spaces’ envelopes have perforations in different sizes and positions to allow moisture to dissipate or condense as convenient. The overflowing quality of the system creates communication between the different scales of the architectural system, thus generating inter- and intra-palatial dependency. 

Figure 6 – Detail section of water performance in the agricultural protocol. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Post-Anthropocentric Architecture: Conclusion

The agropalace understands coexistence as a necessary condition for the survival of the planet and human beings as a species. This new typology presents agriculture as the principal tool of empowerment and suggests a paradigm shift in which each society can define its policies for food production, distribution and consumption; meanwhile, it produces ecosystemic habitats with specific microclimatic qualities that allow the free development of all kinds of entities. 

Biomatic Artefacts proposes an architecture whose forms do not interrupt the geological substrate but compose it, taking part in the planetary ecology while simultaneously forming smaller-scale ecosystems within each palace and, as a whole, an autonomous ecosystem. 

The protocols of today disappear to make room for the formation of a single para-protocol, since, contrary to being carried out in a single, invariable way, it exists because it has the quality of always being different, vast in spatial, temporal, and atmospheric variations. And in its wake, it generates a landscape of canyons and palaces that, in the interplay of reflections and translucency of water and glass, allows us to glimpse the ecological chaos of coexistence within. 

We consider that the project lays the foundations for a continuation of ideas on agropalatial architecture and post-anthropocentric architecture, from which all kinds of new formal and material realities will come about. 

Figure 7 – Perspective image of a group of agropalaces placed in the flooded topography, forming an archipelago. Image: Alejandro Eliseo Cibello, Sofia Giayetto, Ornella Martinelli, Pedro Rovasio and Candela Valcarcel, School of Architecture and Urban Studies, UTDT, 2022. 

Acknowledgement

This paper was developed within the institutional framework of the School of Architecture and Urban Studies of Torcuato Di Tella University as a project thesis, with Lluis Ortega as full-time professor and Ciro Najle as thesis director.

References

[1] T. Morton, Hyperobjects: Philosophy and Ecology after the End of the World (Minnesota, USA: University of Minnesota Press, 2013). 

[2] W. Steffen, P. Crutzen, J. McNeill, “The Anthropocene: Are Humans Now Overwhelming the Great Forces of Nature?”, AMBIO: A Journal of the Human Environment (2007), pp 614-621. 

[3] T. Morton, Ecology Without Nature: Rethinking Environmental Aesthetics (Cambridge, USA: Harvard University Press, 2007). 

[4] A. Reeser Lawrence, A. Schafer, “2 Architects, 10 Questions On Program, Rem Koolhaas + Bernard Tschumi” Praxis 8 (2010). 

[5] C. Najle, The Generic Sublime (Barcelona, Spain: Actar, 2016). 

[6] Set of base images with which the GAN trains by identifying patterns and thus learning their behaviours. In our case, the dataset is based on a set of possible biome location maps based on proximity to water sources and highways. 

[7] R. Banham, F. Dallegret, “A Home Is Not a House”, Art in America, volume 2 (1965), pp 70-79.  

Figure 10: Emotional Dynamics (Xuanbei He, Zixi Li, Shan Lu), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).
Towards a Pervasive Affectual Urbanism
Aesthetics, Affect Theory, Automated Cognition, Collective Authorship, Ecosophy
Ilaria Di Carlo, Annarita Papeschi

ilaria.dicarlo@ucl.ac.uk

Interspecies Encounters and Performative Assemblages of Contamination

Our inner mental ecology has long been recognised as fundamental to any meaningful and complete notion of ecology.[1] As further demonstrated by the neurosciences, we have now assimilated the notion that we first empathise emotionally and physiologically with what surrounds us in a precognitive phase, and only at a later time do we consciously understand the source of our aesthetic experience and, cognitively, its reason and meaning.[2]

In order to investigate the concept of digital and material contaminations as a new way to conceptualise democratic design processes as modes of appropriation and negotiation of space, we have chosen to venture into the epistemological ecotone between aesthetics and cognition, examined through the concept of affect. It is within affects, in fact, that creativity emerges through perception and a cognitive approach to change and social action, “bridging aesthetics and political domain” through a series of encounters between different ecologies and their becoming.[3]

What affect theory speculates is that our “life potential comes from the way we can connect with others”, from our connectedness and its intensity, to the point that the ability to connect with others could itself be out of our direct control.[4] It is a question of affective attunement, an emergent experience that becomes proto-political,[5] and, like any experience that works through instantaneous assessments of affect, it also becomes strongly connected with notions of aesthetics and cognition.[6] The paper examines how both aesthetics and cognition could instantiate a change of paradigm within affectual and post-humanist approaches to the design of our cities and territories.

Figure 1 – “Ecognosis” (Kehan Cheng, Divya Patel, Hui Tan), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).

THE DIMENSIONS OF POST-HUMANIST AESTHETICS

Aesthetics can be defined according to its field of reference in slightly different ways: in neuroscience, aesthetics is the neural manifestation of a process articulated into sensations, meaning and emotions;[7] in evolutionary biology, aesthetics is an adaptive system responding to environmental stimuli;[8] in ecological discourse, aesthetics is the capacity to respond to the patterns which connect;[9] in philosophy, and specifically in the context of Object-Oriented Ontology, aesthetics is the root of all philosophy.[10] Above all, regardless of the framework of reference, aesthetics fundamentally represents a form of knowledge, and as such it is a very powerful and uncanny conceptual device.

The choice to connect the topic of ecology with aesthetics is not only related to the idea that aesthetics is primarily a form of knowledge and because “any ecologic discourse must be aesthetic as well as ethical in order to be meaningful”,[11] but also because aesthetics has the power to attract affects and to convey difficult or ambiguous concepts, like those feelings of ambivalence that often come along with the ecological debate. As Morton states, the aesthetic experience “provides a model for the kind of coexistence ecological ethics and politics wants to achieve between humans and nonhumans […] as if you could somehow feel that un-feelability, in the aesthetic experience”.[12] As a form of semiotic and experiential information exchange, the aesthetic experience is our primary source of genuine human understanding.

Neuroscientist Damasio demonstrates through a compelling series of scientific studies how emotions are essential to rational thinking and social behaviour.[13] In addition, the embodied simulation theory teaches us that in a precognitive phase we first empathise emotionally and physiologically with what surrounds us and only at a later stage understand consciously the source of our aesthetic experience and, cognitively, its reason and meaning.

“Our capacity to understand the others and what the others materially accomplish does not depend exclusively on theoretical-linguistic competences, but it strongly depends on our socio-relational nature, of which corporeity constitutes the deepest and not further reducible structure. … In this sense, the aesthetic experience is a process on multiple levels which exceeds a purely visual analysis and leans on the visceral-motor and somatomotor resonation of whoever experiences it.”[14]

In other words, the theory speculates that the same neural structures involved in our bodily experiences, our sensing, contribute to the conceptualisation of what we observe in the world around us.

Aesthetics, however, is not a competence, ability or property exclusive to human nature; it depends only on the different sensing apparatus of each agency – or on what the proto-ecologist von Uexküll defined as the Umwelt, a specific model of the world corresponding to a given creature’s sensorium.[15] Being aware of this aesthetic “perceptual reciprocity”,[16] of this condition of mutual affects towards the environment, opens up new perspectives of solidarities where multiple agencies, each one living through multiple temporalities and with their own “way of worlding”,[17] participate in the remaking of the planet through their patterns of growth and reproduction, their polyarchic assemblages, their territories of action and their landscapes of affects. In fact, we need to acknowledge that the environment is constituted by an ecology of different forms of intelligence, where humans are just one form of biochemical intensity.[18]

This expanded notion of agency is further enriched by Bennett’s vital materialism, which by ascribing to non-living systems their own trajectories and potentials, defines a multidimensional gradient that includes not only human and biological intelligences, but the natural and the artificial, raw matter and machinic intelligence, revealing opportunities of intersection, contamination, and collaboration.[19] Her thought is about the need to recognise the vital agency of matter “as the alien quality of our own flesh”,[20] and a part of that “Parliament of Things” or “Vascularised Collective” mentioned by Latour in his Actor Network Theory.[21]

This radical understanding of agency as a confederation of human and nonhuman elements, biological and artificial entities, leads to some critical questions regarding equality, accountability and moral responsibility. As a form of rhizomatic Animism,[22] it aims to reclaim and honour the mesh of connections and “assemblages that generate metamorphic transformation in the capacity to affect and be affected – and also to feel, think, and imagine”. And it is this capacity to affect and be affected that once again emerges as the effectual and necessary catalyst for creation and change, as affects are implicated in all modes of experience as a dimension of becoming. They are located in a non-conscious “zone of indistinction” between action and thought, and they fully participate in cognitive processes.[23]

This is a pervasive process that affects all scales of being singular and choral, from the mesoscale of large planetary processes down to the nano-mechanisms of molecular self-organisation, entailing a new worldly disposition towards the nature of being collective. And it’s precisely because of the trans-scalar and concurrent effects that this extended notion of agency produces while processing new interpretations and understandings of the world that, when considering its impact on ideas of the negotiation and democratisation of space, we should interrogate not only the larger mechanisms of collective sense and decision making, but the very processes of cognition, communication, and information exchange at its basis.

Figures 2–4 – “Civic Sensorium” (Songlun He, Dhruval Shah, Qirui Wang), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).

PERFORMING THE MANY VOICES

In recent publications, Hayles describes the idea of a cognitive non-conscious as the possibility for complex systems to perform functions that “if performed by conscious entities would be unquestionably called cognitive”.[24] Drawing from artificial and biological examples, she further explores a series of complex, adaptive and intention-driven organisations that, performing within the domain of evolutionary dynamics, exhibit cognitive capacities operating at a level inaccessible to introspection. Within this context, she explains, when considering the relation between human cognition and the cognitive non-conscious, human interpretation might enter algorithmic analysis at different stages, in a sort of dialogue that de facto structures the potential outcomes of a hybrid cognitive process. Part of the interpretation might be outsourced to the cognitive non-conscious, in a process that intimately links the meaning of the information produced to the specific mechanisms and context of its interpretation, opening multiple new opportunities for the interpretation of ambiguous information.[25]

Indeed, the argument about the potential and the perils of automation for decision-making is as relevant as it is controversial today. Parisi is significantly more critical regarding the current practices of human-machine collaboration, warning of the dangers of granular machine-generated content amplifying existing bias, or worse, being redirected for a purpose not pre-known. “Even if algorithms perform non-conscious intelligence, it does not mean that they act mindlessly”, she argues.[26] Building on Hayles’ argument, she further elaborates that while it is not possible to argue that cognition performed by non-conscious entities is coherent and able to link the past and the present in causal connection, it is possible for non-conscious cognition to expose “temporal lapses that are not immediately accessible to conscious human cognition”. This is a process that sees algorithms not just adapting passively to the data provided but establishing new patterns of meaning to form coevolutionary cognitive infrastructures that, based on the idea of indeterminacy as a model for automated and hybrid cognition, avoid the primary level of feedback based on prescriptive outcomes and incorporate parallelism of learning and processing.[27]

These arguments acquire a particular relevance when considered in combination with the theory of information expressed by Simondon. Formulated as an antagonist argument to Shannon’s cybernetic theory of communication, it holds that information is never found but is always expressed through a process of individuation of the system, as the result of the tensions between the realities that compose the system itself – the very notion that clarifies the modes through which these realities might become a system in the first instance. Drawing on Simondon’s notion of individuation as the process of social becoming that leads to the formation of the collective subject – the transindividual – this is a process that becomes inherently metastable, as it emerges from the tension between the sensorial abilities of the system and its tropism.[28]

As such, Simondon’s notion of transindividuality constitutes the basis for a radical reimagination of the process of becoming collective and building collective knowledge,[29] and through its intersection with the speculative opportunities inherent in ideas of tropistic material computation, it also offers the potential for an emergent rearticulation of collective sense and decision making, ultimately offering a protocol towards the exploration of the material, technological and aesthetic dimensions of new post-human and pervasive forms of authorship.

Attempting to account for the multidimensional consequences of altering the creative processes as a result of the construction of collective authorship as an inherently transindividual practice, the points made above imply a series of strategies oriented toward the definition of emergent meaning potentially able to capture the weaker voices and signals. This includes a focus on the diverse sensual and affectual experience of the participants, the orientation towards procedural indeterminacy and the exploration of material intelligence.

Furthermore, if we consider them in their intersection with our initial idea of the environment as constituted by an ecology of different forms of intelligence – where the creation of aesthetic assemblages of collaborative agencies is intended as the entangled construction of space, time and value through the symbiosis of different forms of intelligence defined by open-endedness and inclusiveness – these ideas describe a new urban paradigm, where the notion and aesthetic language of single human authorship with intellectual ownership is substituted by the concept of a collective of humans and non-human ecologies that might recover the aesthetics’ real, fundamental meaning, as an ecological category.

It is with the acceptance of these mixtures of interchanges and crossings of energies that we can finally observe the old notion of quality, as an essential and pure identity related to cathartic categories, giving way to a more diffused and impure version of itself: a definition of quality related not so much to pureness, homogeneity, uniformity and refinement as to a more complex meaning of sophistication by collaboration, contamination and the exploitation of multiple resonances and superimpositions.[30]

As Lowenhaupt Tsing advocates, learning to look at multi-species worlds could lead to different types of production-based economies: “Purity is not an option if we want to pursue a meaningful, informed ecological discourse. We must acknowledge that contaminations are a form of liveable or bearable collaborations. ‘Survival’ requires liveable collaborations. Collaboration means working across differences which leads to contamination.”[31]

These domains and agencies searched for across other species, other ecological intensities and other modes of cognition, and reconfigured through computational technology, respond to a different kind of beauty, a filthy one, a revolutionary one, and an ecologic one. One that, as Morton preaches, “must be fringed with some kind of slight disgust … A world of seduction and repulsion rather than authority”.[32]

According to Guattari, such ecosophic aesthetic paradigms, these collective assemblages or abstract machines, working transversally on different levels of existence and collaboration, would organise a reinvention of social and ecological practices, offering opportunities for dialogues among different forms of ecological intensities.[33] They would also instantiate processes that would give back to humanity a sense of responsibility, not just towards the planet and its living beings, but also towards that immaterial component which constitutes consciousness and knowledge. Such a change of perspective in terms of critical agency would inevitably bring along a change in what Jacques Rancière calls the distribution of the sensible – where sensible is understood as “perceptible or appreciable by the senses or by the mind”, in a definition that describes new forms of inclusion and exclusion of the human and non-human collectivity in the process of appropriation of reality.[34] And since access to a different distribution of the sensible is “the political instrument par excellence against monopoly”,[35] we should treasure it for its capacity to allow us, borrowing Tomás Saraceno’s words, “to tune in to the non-human voices that join ours in boundless connectivity canvases, … proposing the rhizomatic web of life, which highlights hybridisms between one species and another and between species and worlds”.[36] This is a process that describes new trajectories for new forms of institutions where we shall consider not just individual democracy, but a democracy extended to other species, talking to us through the language of the machines.

Figures 5–7 – “Ecognosis” (Kehan Cheng, Divya Patel, Hui Tan), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).

TOWARDS CO-CREATIVE AFFECTUAL PRACTICES

Along these trajectories, when approaching world and space-making strategies, design processes are translated into an “entangled” construction of space, time, value, and resources, which are critically defined by the very processes of their formation. In such a perspective, artificial intelligence has the potential to become the enabler, the instantiator of a new wider democratic process potentially able to disrupt existing power structures, giving a voice to what currently has none: all the non-conscious agencies separate from humankind or its direct will.

This is a new form of authorship which turns the question back to the final user, so that the inquiry is not so much what the user wants from the environment but what the user can do for the environment – an idea that inverts the role of the final user from consumer to service provider. Such a form of authorship takes place in a symbiosis of computational and non-computational forms of thinking, as a hybrid of the diverse modes of cognition, resulting in a new type of synthetic ecology: the one that the designer enables.

In such a context, digital design platforms work as co-evolutionary cognitive infrastructures dealing with an amalgamation of different types of resource thinking: the thinking coming from the machines, the thinking coming from human participants, and the one converging from other ecological intensities. This is a type of transindividual subjectivity, that, formed as an ecology of diverse forms of cognition, is choral, decentralised, and inclusive, and has the capacity of being able to transmit tacit or informed knowledge exposing new models of democratic collective decision- and sense-making. In this process, all the participating forms of cognition have the potential to learn from each other and to compose unexpected dialogues and collective knowledge – what we call “interfaces [i/f], physical/virtual devices, a platform, enabling communications among entities of different kinds each one with its own protocol of communication, knowledge, and values”.[37] This is an approach to collective creation that, drawing on alternative ideas of communication and power between the participating agencies, maps the emergence and evolution of patterns of informed feedback, outlining the connections with ideas of learning and performative collaboration between human, synthetic and biological agencies. In the exploration of these new forms of authorship, designers face the challenge of orchestrating a process able to build fruitful associations between machine computation, genuine human understanding, and non-conscious cognitive agencies – a challenge that should be taken as an opportunity to construct open processes of self-reflection and learning.

The resulting Transindividualities – digital participatory practices grounded in ecological and post-humanist theory – create the potential for the affirmation of novel mediated narratives,[38] which, by challenging the responsibility of authorship, bring along a new definition of the Human and the need to reframe the question of the design of our cities and territories towards a Pervasive Affectual Urbanism, which points toward the urgency of a new ethos and a new aesthetics.

The challenge will perhaps be best approached by objecting to the idea that the designer is exclusively and ultimately responsible for the design process, and by sustaining the hypothesis that the symbiosis between all the different types of ecologies inhabiting the space could welcome all sorts of different agents through a creative process that embraces indeterminacy. It will be about the belief that open-endedness, contamination, interaction, machine learning and genuine human understanding are not so much about consensus, but about layering and celebrating differences to best use all of them as resources toward the participatory project of space-making. It will be about praising quality as sophistication, by acceptance, negotiation, exploitation and rhizomatic contaminations of multiple resonances and superimpositions, where the value of the project will lie in information which is not merely exchanged, but used to create anew.

Figures 8–10 – “Emotional Dynamics” (Xuanbei He, Zixi Li, Shan Lu), The Bartlett School of Architecture, B-Pro MArch UD, Research Cluster 15 2020-21 (Tutors: Annarita Papeschi, Alican Inal, Ilaria Di Carlo, Vincent Novak).

References

[1] F. Guattari, The Three Ecologies (London: The Athlone Press, 1987).

[2] A. Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (London: Putnam Pub Group, 1994).

V. Gallese, “Embodied Simulation: From Neurons to Phenomenal Experience”, in Phenomenology and the Cognitive Sciences 4 (Berlin: Springer, 2005), 23–48.

[3] B. Massumi, Politics of Affect (Cambridge: Polity Press, 2015).

[4] Ibid.

[5] E. Manning, interviewed in B. Massumi, Politics of Affect (Cambridge: Polity Press, 2015), 135.

[6] B. Massumi, Politics of Affect (Cambridge: Polity Press, 2015).

[7] A. Chatterjee, The Aesthetic Brain: How We Evolved to Desire Beauty and Enjoy Art (Oxford: Oxford University Press, 2015).

[8] G. H. Orians, “An Ecological and Evolutionary Approach to Landscape Aesthetics”, in E. C. Penning-Rowsell, D. Lowenthal (Eds.), Landscape Meanings and Values (London: Allen and Unwin, 1986), 3–25.

[9] G. Bateson, Steps to an Ecology of Mind (London: Wildwood House, 1979).

[10] G. Harman, “Aesthetics as a First Philosophy: Levinas and the non-human”, Naked Punch 2012, http://www.nakedpunch.com/articles/147, accessed 3 Feb. 2020.

[11] F. Guattari, The Three Ecologies (London: The Athlone Press, 1987).

[12] T. Morton, All Art is Ecological (Milton Keynes: Penguin Books, Green Ideas, 2021).

[13] A. Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (London: Putnam Pub Group, 1994).

[14] V. Gallese, “Embodied Simulation: From Neurons to Phenomenal Experience”, in Phenomenology and the Cognitive Sciences 4 (Berlin: Springer, 2005), 23–48.

[15] J. von Uexküll, A Foray into the Worlds of Animals and Humans (Minneapolis: University of Minnesota Press, 2010).

[16] D. Abram, The Spell of the Sensuous: Perception and Language in a More-Than-Human World (New York: Vintage Books, 1997).

[17] B. Latour, Down to Earth: Politics in the New Climatic Regime (Cambridge: Polity Press, 2018).

[18] I. Di Carlo, “The Aesthetics of Sustainability. Systemic thinking and self-organization in the evolution of cities”, 2016, PhD thesis, University of Trento, IAAC, Barcelona, Spain.

[19] J. Bennett, Vibrant Matter. A political ecology of things (Durham N.C. and London: Duke University Press, 2010).

[20] Ibid.

[21] B. Latour, We have never been modern (Cambridge: Harvard University Press, 1993).

[22] I. Stengers, “Reclaiming Animism”, e-flux, 2012, https://www.e-flux.com/journal/36/61245/reclaiming-animism/, accessed 10 Oct. 2021.

[23] B. Massumi, Ontopower: War, Power, and the State of Perception (Durham N.C.: Duke University Press, 2015).

[24] N. K. Hayles, “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness”, New Literary History 45, 2, 2014.

[25] Ibid.

[26] L. Parisi, “Reprogramming Decisionism”, e-flux, 2017, https://www.e-flux.com/journal/85/155472/reprogramming-decisionism.

[27] Ibid.

[28] G. Simondon, L’individuazione psichica e collettiva, ed. and transl. P. Virno, (Rome: DeriveApprodi, 2001).

[29] A. Papeschi, “Transindividual Urbanism: Novel territories of digital participatory practice”, Proceedings from Space and Digital reality: Ideas, representations/applications and fabrications, 2019, 80-90.

[30] I. Di Carlo, “The Aesthetics of Sustainability. Systemic thinking and self-organization in the evolution of cities”, 2016, PhD thesis, University of Trento, IAAC, Barcelona, Spain.

[31] A. Lowenhaupt Tsing, The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins (Princeton: Princeton University Press, 2017).

[32] T. Morton, All Art is Ecological (Milton Keynes: Penguin Books, Green Ideas, 2021).

[33] F. Guattari, Chaosmosis. An ethico-aesthetic paradigm (Sydney: Power Publications, 1995).

[34] J. Rancière, The Politics of Aesthetics: The Distribution of the Sensible (New York: Continuum, 2014).

[35] Ibid.

[36] T. Saraceno, “Aria”, Catalogue of the exhibition at Palazzo Strozzi Firenze (Venezia: Edizioni Marsilio, 2020).

[37] I. Di Carlo, “The Aesthetics of Sustainability. Systemic thinking and self-organization in the evolution of cities”, 2016, PhD thesis, University of Trento, IAAC, Barcelona, Spain.

[38] A. Papeschi, “Transindividual Urbanism: Novel territories of digital participatory practice”, Proceedings from Space and Digital reality: Ideas, representations/applications and fabrications, 2019, 80-90.

Bartlett B-Pro, RC1, Gaming Consensus, 2021
B-Pro Open Seminar: Climate F(r)ictions
B-Pro Open Seminar, Climate F(r)ictions
Provides Ng

provides.ng.19@ucl.ac.uk

27 April 2022, 3:00 pm–5:30 pm

Déborah López and Hadin Charbel curate a seminar that spotlights authors and practices whose works link technology’s diverse roles across human and non-human scales in the current and future climate regime, exploring the possibility, and perhaps the inevitability, of encoding ethics.

Climate F(r)ictions

The effects of climate change have become increasingly apparent with implications across multiple geographical scales and regions. Read as ecological and environmental transformations, accelerated transitional states are unfolding consequences and prompting responses within social, political, economic, human and non-human spheres alike. For instance, the term ‘cli-migration’ was coined by an Alaskan human rights lawyer in 2008 to describe the permanent, forced relocation of communities due to climate change. That same year, Ecuador introduced articles 10 and 71-74 to their constitution that explain the “Rights of Nature” as both a definition and the means to its legal and practical application. 

While climate change can be described as a “hyper-object” (Morton 2013) whose effects are generally conceived to exist at an incomprehensible scale, its causes are grounded in the accumulation of various actions that are linked with the extractivist and capitalist logics resulting in a positive feedback loop – more resource extraction leads to more consumption and vice versa. Architecture is indeed one facet among an ecosystem of production- and consumer-based economies that has inextricably linked resources to commodities. Further to this, the use of territorialising technologies and mediums (such as satellite imagery and land surveys) is now coupled with artificial intelligence such as machine learning, optimisation algorithms and sensory devices, increasing the efficiency of all aspects of the supply chain, from prospecting to extraction and transport. It would seem that technology’s inevitable end is towards colonisation.  

This has in turn drawn the attention of some to investigate alternative modes of land and resource management. Meanwhile, contemporary trends in circular economies have begun questioning and testing the viability of re-utilising materials and rethinking logistical processes. Parallel to this, relatively recent technological trends that are predicated on decentralised protocols such as blockchain inherently possess political ideologies whilst exhibiting practical implications. Although technology tends to be presented as generic, the aforementioned hints at the possibility, and perhaps the inevitability, of encoding ethics.  

This session will feature the following speakers for presentations followed by roundtable discussion: 

Bradley Cantrell and Marantha Dawkins, University of Virginia

Theodore Dounas, Robert Gordon University and Adventurous Architecture

Catherine Griffiths, University of Michigan and Isohale

Damjan Jovanovic, SCI-Arc and lifeforms.io

Andrew Witt, Harvard University and Certain Measures 

About the curators 

Deborah Lopez and Hadin Charbel are architects and founders of Pareid, an interdisciplinary design and research studio. Their works adopt approaches from various fields and contexts, addressing topics related to climate, ecology, human perception, machine sentience, and their capacity for altering current modes of existence through imminent fictions (IF).  

Deborah completed her second Master’s in Architecture at Obuchi Laboratory at the University of Tokyo as a Monbukagakusho scholar (MEXT) from 2014 to 2018, and received a Bachelor of Arts and Master of Architecture from the European University of Madrid. Hadin was awarded the Monbukagakusho scholarship (MEXT) between 2014 and 2017 and received a Master in Engineering in the Field of Architecture from the University of Tokyo. He received his Architectural Studies BA from UCLA in 2012. Their works have been exhibited at the Venice Biennale, the Seoul Biennale, and Japanese Junction, and they have published at conferences such as ACADIA, Technarte, and COCA.

They are both lecturers at The Bartlett in the B-Pro programme where they run Research Cluster 1 in Architectural Design MArch. The research cluster, titled “Monumental Wastelands”, focuses on cli-migration and autonomous ecologies, using methods of “decoding” and “recoding” through climate fiction (Cli-Fi). 

About The B-Pro Open Seminar 

The B-Pro Open Seminar of Prospectives invites a diverse range of thinkers and practitioners from across the world. The Open Seminar will support the development of articles for Issue 03 of the Prospectives Journal in Summer 2022. 

All B-Pro students and staff are advised and encouraged to attend.

https://www.ucl.ac.uk/bartlett/architecture/events/2022/apr/b-pro-open-seminar-climate-frictions

Prospectives Writing Style Guide
24/05/2022
author guidelines, punctuation, referencing, spelling, style guide, writing style
Provides Ng

provides.ng.19@ucl.ac.uk

The purpose of this guide is to help authors ensure consistency with Prospectives issues. It includes the most contentious areas of spelling, punctuation and formatting. For more general guidance on tone and style, please consult the UCL Author Guidelines and Content Style Guide. Where this guide differs from the UCL Author Guidelines or Content Style Guide, please use this document. If helpful, you can also consult Issue 1 of Prospectives: https://journal.b-pro.org/issue/issue1

Solarpunk Building for Terraforma, Alessandro Bava, 2021
Editorial Note
29/04/2022
B-pro, Editorial Note, Prospectives, Prospectives Issue 02, The Algorithmic form, The Bartlett
Provides Ng

provides.ng.19@ucl.ac.uk

Welcome to Prospectives Issue 02

It’s been a great pleasure to be part of Prospectives – a journal that is dedicated to all researchers and designers, students and scholars, established or in their early careers. It aims to act as a hotbed, a sandbox, a platform that is “from architects, by architects, to architects” in its broadest sense – be it architects of buildings, software, or future(s) (or the Matrix!). It is for all who are invested in interdisciplinary and intercultural exchanges, information and idea seeding. 

According to Oxford Languages, the term “Prospective” emerged in the late 16th century, with a meaning of “looking forward, foresighting”, or “characterised by looking to the future”. The journal’s title puts the anticipatory nature of Prospective(s) into plural form; we believe “design” is the maximising of options or, as Claude Shannon put it, “surprises” in a system; and the realisation of design is the collapse / negotiation / collaboration of all such possibilities into our physical reality. When the word “prospect” is translated into other languages, like my mother tongue Chinese, it adds yet another layer of meaning. The first result that Google turned up was “奔頭兒” (rushing-heads), an expression much used by local dialects in the North-East of China to describe the hard work needed to secure a promising future. Different languages and cultures map the vibrancy of Prospectives, and also of architecture and world-building. One is simultaneously enabled and constrained by the language which structures our thinking, be it architectural, mathematical or natural languages; this is why collaboration, or a collaborative intelligence, is our biggest prospect. The greatest innovations are the ones characterised by inclusivity, not exclusivity.  

Within such a context, what is the role of a journal? To ensure standards in research? To network scholars in the field? To communicate progress with the larger public? We have seen an increasing number of open source journals that are revolutionising the peer review system; not to replace it, but diversifying what can be meant by peer-to-peer (p2p). At Prospectives, we are invested in democratisation, especially in helping independent authors and designers reach a larger audience, and making literature available and accessible to all through participation and digitalisation. The future of journals (and architecture), is certainly one that can synthesise the copyrights and “copylefts”. As Prof. Mario Carpo suggests, while the marginal costs of printing (be it 2D or 3D) decrease, our capacities in mass customisation increase, and the same applies to information production. With the rise of the Omniverse, Metaverse, and MetaNets, it becomes increasingly apparent that the answer is not in the technologies themselves, but the way the social and the economic are re-structured, driven by participatory innovation. It will take the invisible (or visible) hands of the many to steer us towards the prospectives we desire.  

Issue 02: The Algorithmic form  

“Algorithm” as the adjective, “form” as the subject – connecting fundamental questions in computation to architecture. The second issue of Prospectives is driven by the provocations of the essay “Computational Tendencies”, written in 2020 by Alessandro Bava – who is also the guest curator of this issue. He problematised evolutionary thinking in architecture – the linear and unidirectional development from simplicity to complexity, from causation to correlation, from small to big data – and questioned the prospects of algorithms and forms within social and cultural urgencies. In the search for answers that are likely to fall between established fields, Alessandro invited six architects to engage in conversation with great figures from the fields of art, architecture and computation. Some of these conversations are carried out through interviews and roundtables, others through research, literature and case studies, forming dialogues between the past and present. Together with this, an open call was established to crowd-source intelligence and outsource imagination. These critical and retrospective pieces map a speculative timeline of events around “algorithmic forms” from the Italian Renaissance, through the beginning of modernism, up to today.  

Prospectives Issue 02 encompasses 14 contributions. Prof. Mario Carpo starts our journey with an analogy of the German language, where grammar is “an artificial shortcut” to fluency, not its entirety. The same logic may apply to “Shape Grammar” in architecture, or the Common Data Environments of BIM, or the big-databases of Artificial Intelligence (AI). Just as he exquisitely formed a connection between the invention of book-printing and 3D-printing to predict a future of mass customisation, in this piece Mario shows us a comparative history between citationists of the Renaissance and post-modern (PoMo) architecture. The former is invested in reviving classical antiquity “piece-by-piece”, while the latter took its cues from “reference, allusion, collage and cut-and-paste”. We are also indulged with the distinguished curator Hans Ulrich Obrist’s interview with Getulio Alviani – an important figure in the international Optical-kinetic art movement throughout the 20th century. Alviani spoke of being motivated by the work of Leonardo Da Vinci; his geometric exploration arising from the “curiosity of seeing”; the tectonics between material and structure, craft and design, and finally, the immersivity of movement with the “discovery of light”. This precious and poetic piece teleported us to the Italian art scene through Alviani’s encounters, provoking us to reflect on our journey from simplicity to complexity. 

The five pieces that follow are the outcome of the B-Pro Open Seminar at the Bartlett School of Architecture on 8th December, 2021. Five invited guests, including Roberto Bottazzi (The Bartlett), Francesca Gagliardi and Federico Rossi (Fondamenta), Philippe Morel (ENSA Paris-Malaquais & The Bartlett), Marco Vanucci (Open Systems), and myself (Provides Ng, The Bartlett) were invited to contemplate and discuss the work of Luigi Moretti, Isa Genzken, Manfred Mohr, and Leonardo and Laura Mosso – important figures who had shown us new forms of aesthetics through the exploration of novel technological, geometrical, and mathematical tools. The roundtable that followed included discussions on, but not limited to, topics in Building Information Modelling (BIM), AI, blockchain, robotics, extended reality (XR) and other distributive technologies that, undeniably, should be brought to the table for their symbiosis and socioeconomic implications, positive or negative.  

Lastly, the richness of this issue is further complemented by five selected open call pieces, with topics ranging from architectural authorship, algorithmic representations, digital anthropology, computational empiricism, and the liberation of creativity through codification.  

Acknowledgements

Prospectives hopes to uncover the urgency around issues of computation and automation within the built environment, but also the communities and initiatives that are involved in such developments; from the Bartlett School of Architecture, UCL, reaching out to wider society across disciplinary and territorial borders.  

First and foremost, I owe thanks to Prof. Frederic Migayrou, who is chair of the school, director and founder of the B-pro – five exciting programmes led by an international and interdisciplinary team of faculty members, which have shown the field diverse paths to architecture and education, a shelter for all who strive for “prospects”. And to Prof. Mario Carpo, a historian, a critic, a theorist, who has liberated my thinking and shown us a form of architecture that is so much more than design; a form of architect that is so much more than a builder; a form of speculation that is so much more than fiction; a form of prospect that is so much more than futuring. Mario and Frederic were my supervisors, patiently guiding me through a marvellous history of Architecture & Digital Theory; a history that has become a rock in my heart – even though the prospects of the future are not always clear, history has prevented me from confusing and losing myself, and urged me to write and research with honesty, and I hope this journal can do the same for its readers. And of course, Mollie Claypool, a dedicated advocate, a female theorist, my role model. A strong figure with a soft heart, she will always fight and speak up for, in her words, “a labour of love and perseverance”, spearheading participatory and collaborative practices in automation, design and research, and the launch of this very journal. Also Roberto Bottazzi and Gilles Retsin, programme directors of Urban Design (UD) and Architecture Design (AD) in B-pro, together with Mollie, have given me so much opportunity, trust, advice and support, facilitating a free platform of architectural expression and a warm hub of design innovation. Prof. Bob Sheil and Andrew Porter, who have relentlessly endorsed and formalised the development of Prospectives and all other initiatives within the School of Architecture, facilitating a welcoming hotbed for creativity, self-initiation and self-organisation.  

I am thankful to all those who are my colleagues, but also my mentors, including Alessandro Bava, who has curated this issue with much sincerity and commitment, bringing an amazing line-up of guests and design provocations to the table; Déborah López Lobato, Hadin Charbel, Manuel Jimenez, Emmanouil Zaroukas, Clara Jaschke, Mark Garcia, Jordi Vivaldi Piera and Albert Brenchat-Aguilar, with whom I’ve had some of the most engaging and interesting disciplinary discussions and who have never hesitated to reach out a helping hand; Daniel Koehler, Valentina Soana, and all Prospectives advisory board members. Above all, Alberto Fernandez Gonzalez and David Doria; my strongest backers, my faithful ear, my collaborative hands, my motivation and my exemplars, it is my honour and blessing to be amongst such fellowship and companionship. Needless to say, we would be nothing without our communication and administration teams, the invisible heroes who have supported the running of the school, especially Drew Pessoa, Tom Mole, Ruth Evison, Gen Williams, Srijana Gurung, Abi Luter, Dragana Krsic, Sarah Barry, Jessica Buckmire, Julia Samuels, and Crystal Tung. Last but not least, Rebecca Sainsot and Dan Wheeler, who assisted the publication and copy editing of this issue with such dedication, and to those who have submitted and contributed to our open call. I am grateful to all schools of architecture, like the Bartlett, that have enabled and facilitated projects such as Prospectives, opportunities for early-career and independent scholars, and a place for aspiring talents to meet and grow.  

Retrofit Project by Frederik Vandyck, Design Sciences Hub
Towards the computation of architectural liberty  
architectural liberty, automation, computation, design theory, fragmentation
Sven Verbruggen, Elien Vissers-Similon

sven.verbruggen@uantwerpen.be

A design process consists of a conventionalised practice – a process of (personal) habits that have proven to be successful – combined with a quest for creative and innovative actions. As tasks within the field of architecture and urban design become more complex, professionals tend to specialise in one of many subsets, such as designing, modelling, engineering, managing, construction, etc. They use digital tools which are developed for these specialised tasks only. Therefore, paradoxically, automation and new algorithms in architecture and urbanism are primarily oriented to simplify tasks within subsets, rather than engaging with the complex challenges the field is facing. This fragmented landscape of digital technologies, together with the lack of proper data, hinders professionals’ and developers’ ability to investigate the full digital potential for architecture and urban design. [1] Today, while designers explore the aid that digital technologies can provide, it is mostly the conventionalised part of practice that is being automated to achieve a more efficient workflow. This position statement argues for a different approach: to overcome fragmentation and discuss the preconditions for truly coping with complexity in design – which is not a visual complexity, nor a complexity of form, but rather a complexity of intentions, performance and engagement, constituted in a large set of parameters. We will substantiate our statement with experience in practice, reflecting on the Retrofit Project: our goal to develop a smart tool that supports the design of energy neutral districts. [2]  

So, can designers break free from the established fragmentation and compute more than technical rationale, regulations and socio-economic constraints? Can they also incorporate intentions of aesthetics, representation, culture and critical intelligence into an architectural algorithm? To do so, the focus of digital tools should shift from efficiency to good architecture. And to compute good architecture, there is a need to codify a designer’s evaluation system: a prescriptive method to navigate a design process by giving value to every design decision. This evaluation system ought to incorporate architectural liberty – and therein lies the biggest challenge: differentiating between where to apply conventionalised design decisions and where (and how) to be creative or inventive. Within a 5000-year-old profession, the permitted liberty for these creative acts has been defined elastically: while some treatises allow only a minimum of liberty for a designing architect, others will lean towards a maximum form of liberty to guarantee good architecture. [3]  
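A codified evaluation system of the kind described above can be sketched minimally. In the fragment below, every name, criterion, weight and threshold is a hypothetical assumption introduced purely for illustration (it is not part of the Retrofit project or any existing tool): each design decision is scored against weighted criteria, and decisions whose conventional score is weak are flagged as places where architectural liberty, rather than convention, should apply.

```python
from dataclasses import dataclass

# Hypothetical sketch only: criteria names, weights and the liberty
# threshold are invented assumptions, not an actual evaluation standard.

@dataclass
class Decision:
    name: str
    scores: dict  # criterion -> value in [0, 1]

def evaluate(decision, weights, liberty_threshold=0.6):
    """Weighted value of one design decision, plus a flag marking where
    the conventionalised answer is weak and creative liberty applies."""
    total = sum(weights[c] * decision.scores.get(c, 0.0) for c in weights)
    total /= sum(weights.values())
    return total, total < liberty_threshold

# Example: a corner-piece type that scores well technically but poorly
# on (assumed) cultural fit.
weights = {"energy": 0.4, "cost": 0.3, "cultural_fit": 0.3}
corner = Decision("corner-piece type", {"energy": 0.8, "cost": 0.7, "cultural_fit": 0.3})
value, needs_liberty = evaluate(corner, weights)  # value ≈ 0.62
```

The interesting design question is of course hidden in the weights and the threshold: who sets them, and on what grounds, is exactly the liberty problem the text describes.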

A minor group of early adopters, such as Greg Lynn, Zaha Hadid Architects, and UN Studio, tried to tackle the field’s complexity using emerging digital technologies in the late 1990s and early 2000s. They conveniently inferred their new style or signature architecture from these computational techniques. This inference, however, causes an instant divide between existing design currents and these avant-garde styles. The latter claim that the notion of complexity – the justification for their computational techniques – lies mostly within the subset of form-giving, not covering the complexity of the field. This stylistic path is visible in, for example, Zaha Hadid Architects’ 2006 masterplan for Kartal-Pendik in Istanbul. The design thrives on binary decisions in the 3D-modelling tool Maya, where it plays out a maximum of two parameters at once: the building block with inner court and the tower. The resulting plastic urban mesh looks novel and stylistically intriguing, yet produces no real urbanity and contains no intelligence on the level of the building type. This methodology does not generate knowledge on how well the proposed urban quarter (or its constituent buildings) will perform on the level of, for example, costs, energy production and consumption, infrastructure, city utilities, diversity and health. The fluid mass still needs all conventional design operations to effectively turn it into a mixture of types, urban functions, and local identity. Arguably, the early adopters’ stylistic path avoided dealing with real complexity and remained close to simple automation. In doing so, while they promoted a digital turn, they might also have dug the foundations for today’s fragmentation in the field.  
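The logic being criticised here, a rule that plays out only two parameters at once, can be caricatured in a few lines. The toy below is not ZHA’s actual method; the field functions and the 0.6 cut-off are invented assumptions. Each grid cell reads two scalar fields and makes one binary choice, courtyard block or tower, which yields formal variation but encodes no knowledge of cost, energy, or urban life:

```python
import math

# Toy caricature of a two-parameter generator (all values are assumptions):
# a "density" field drives the binary type choice, a "height" field drives
# extrusion. Nothing here knows anything about the resulting urbanity.

def density_field(x, y):
    return 0.5 + 0.5 * math.sin(x / 3.0) * math.cos(y / 3.0)

def height_field(x, y):
    return 10 + 40 * (x / 10.0)  # arbitrarily taller towards one edge

def generate(cols=10, rows=6):
    plan = []
    for y in range(rows):
        row = []
        for x in range(cols):
            kind = "tower" if density_field(x, y) > 0.6 else "court"
            row.append((kind, round(height_field(x, y))))
        plan.append(row)
    return plan

plan = generate()
towers = sum(1 for row in plan for cell in row if cell[0] == "tower")
```

However smooth the resulting mesh, every cell still needs the conventional design operations the text lists to become a real building type in a real context.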

Ironically, to some extent Schumacher’s treatise – definitely the parts that promote parametricism as a style – reads as a cover-up of the shortcomings of parametric software; for example, the inability to produce local diversity and typological characteristics beyond formal plasticity. [4] Schumacher further rejects Vitruvius to prevent structural rationale from taking primacy, and he disavows composition, harmony and proportion as outdated variable communication structures to propose the “fluid space” as the new norm. [5] This only makes sense knowing that the alternative – a higher intelligence across the whole field of architecture and urban planning, such as codified data and machine-learning algorithms – did not yet exist for the early adopters. Contemporary applications such as Delve or Hypar do make use of such intelligent algorithms, yet prioritise technical and economic parameters (e.g. daylight, density, costs) in order to market efficiency. [6]  

Any endeavour to overcome the established fragmentation and simplified automation will ultimately find itself struggling with the question of what good architecture is. After all, even with large computational power at hand, the question remains: how to evaluate design decisions beyond the merely personal or functional, in a time when no unified design theory exists? In fact, the fragmented specialisation of today’s professionals has popularised the proclamation of efficiency. As a result, an efficiency driver (whether geared by controlling costs, management or resources) is often disguised as moral behaviour, as if its interest is good architecture first, and the profit and needs of beneficiaries only come second. If the added value of good architecture cannot be defined, the efficiency driver will continue to get the upper hand, eroding the architectural profession into an engineering and construction service providing calculations, permits and execution drawings.  

It was inspiring to encounter Alessandro Bava’s Computational Tendencies on this matter:  

The definition of what constitutes “good” architecture is, in fact, always at the center of architecture discourse, despite never finding a definite answer. Discourses around digital architecture have too often resolved the question of the “good” in architecture by escaping into the realm of taste or artistic judgment. [7] 

Bava renders Serlio’s architectural treatise as an original evaluating system that attributes universal value, and revisits Rossi’s exalted rationalism to propose a merger of architecture’s scientific qualities with its artistic qualities. He aims to re-establish architecture’s habitat-forming abilities and prevent architecture from becoming an amalgam of reduced and fragmented services. However, Serlio’s treatise did not provide a fully codified and closed formal system, as it still includes the liberty of the architect. [8] Going through Serlio’s On Domestic Architecture, an emphasis is placed on ideal building types, mostly without context. Therefore, no consideration is given to how these types ought to be modified when they need to be fitted in less ideal configurations such as non-orthogonal grids. The books also remain ignorant of the exceptions: the corner-piece-type, or fitting-parts that mediate between buildings and squares on a higher level. This is not a cheap critique of Serlio’s work. It is an awareness one needs to have when revisiting Serlio’s work as a “proto-BIM system, one whose core values are not market availability or construction efficiency, but harmonic proportions”. [9] Arguably, it is the liberty, the modifications, and the exceptions that need to be codified, to reach beyond simplified automation, across fragmentation, and towards an architectural algorithm to assist designers. 

This is easier said than done, otherwise the market would be flooded with design technologies by now. As with most design problems, the only way to solve them is by tackling them in practice. In 2021, the Design Sciences Hub, affiliated with the University of Antwerp, set up the Retrofit Project. The aim is to develop an application to test the feasibility of district developments. The solution will show an urban plan with an automatically generated function mix and an optimised energetic and ecological footprint, for any given site and context. The project team collaborates with machine-learning experts and environmental engineers for the necessary interdisciplinary execution. Retrofit is currently in the proof-of-concept phase, which focuses on energy neutrality and will tackle urban health and carbon neutrality in the long run. 

The problem of modifications and exceptions seems the easiest to examine, as it primarily translates into a challenge of computational power and coping with a multitude of parameters. However, these algorithms should be smart enough to select a specific range within the necessary modifications and exceptions to comply with the design task at hand. In this case, the algorithm should select the correct modifications and exceptions needed to integrate certain types into any given site within the Retrofit application. In other words, there is a need for an intelligent algorithm that can be fed a large number of types as input data to generate entirely new or appropriate building types. The catch resides within the word “intelligent”, as algorithms are not created intelligent; they are trained to reach a certain level of intelligence based on (1) codifiable theory and (2) relevant training sets of data. Inquiring into a variety of evaluation systems for architectural design that emerged over the last 40 years, Verbruggen revealed the impossibility of creating a closed theoretical framework and uniquely relating this framework to a conventionalised evaluation system in practice. [10] As such, both the codifiable theory – a unified evaluation system that integrates scientific and artistic qualities into one set of rules – and the training set hardly exist in architecture and urban design. To complicate matters even more, today’s non-unification is itself often embraced as the precondition for good architecture. [11-15] 

And so, the liberty question emerges here once again: how can different types, their modifications and exceptions, including their respective relationships with different contexts, be codified? It is easy to talk about codification, but much harder to implement it within a project. When different types are inserted into a database, how are the attributes defined? This task proved to be very laborious and raised many new questions in the Retrofit Project. Attributes will include shape and size, yet might also include levels of privacy, preferred material usage, degree of openness, average energetic performance, historic and social acceptance in specific areas, compatibility with different functions, etc. Which values define when and where a specific type is appropriate, and how are they weighed? Do architects alone fill the database, and if so, which architect is qualified, and why? And when an AI application examines existing typologies within our built environment, which of these examples should be considered good, and why? Can big data or IoT sensors help in data gathering? To truly take everything into account, how much data do we really need (e.g., a structure’s age and condition, social importance, usage, materials, history)? Furthermore, when the Retrofit application runs on an artificially intelligent algorithm that is trained to think beyond the capabilities of a single architect, will the results diverge (too) much from what society is used to? 
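To make the attribute question concrete, here is a deliberately naive sketch of the kind of weighted evaluation such a database implies. Every attribute name, score, and weight below is invented for illustration; none comes from the actual Retrofit database, and the hard questions raised above (who sets the scores, who sets the weights, and why) remain open.

```python
# Hypothetical weighted evaluation of building types against a site.
# All attribute names, scores and weights are invented for illustration.

def score_type(attrs, weights):
    """Weighted average of per-attribute compatibility scores in [0, 1]."""
    total = sum(weights[a] * attrs[a] for a in weights)
    return total / sum(weights.values())

# Per-site weights: how much each criterion matters on this site.
weights = {"privacy": 0.2, "energy": 0.5, "historic_fit": 0.3}

# Pre-scored compatibility of two candidate types with this site.
courtyard = {"privacy": 0.9, "energy": 0.6, "historic_fit": 0.8}
tower     = {"privacy": 0.4, "energy": 0.9, "historic_fit": 0.3}

best = max([("courtyard", courtyard), ("tower", tower)],
           key=lambda t: score_type(t[1], weights))[0]
```

Shifting the weights (say, prioritising historic fit over energy) reorders the ranking, which is exactly why the weighting question in the text is a design decision rather than a technicality.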

The many practical questions from the Retrofit Project show that defining the architect’s liberty is both the central problem and the key to digital technologies tackling the true complexity of the field. Liberty is undeniably linked to the design process and, therefore, encoding a design process needs to (1) capture the architect’s evaluation system and (2) allow for targeted and smart data gathering. The evaluation system can then be coded into an algorithm, with the help of machine-learning experts, and trained using the gathered data. Both the evaluation system and the necessary data rely heavily on the architect’s liberty. Because dealing with these liberties is a difficult task – perhaps the most difficult task in the age of digital architecture – many contemporary businesses and start-ups that claim to revolutionise the design process with innovative technologies may in fact revolutionise nothing, opting for the easy route and avoiding the liberty aspect altogether. An architectural algorithm that does take the liberty aspect into account may provide designers with an artificial assistant to help tackle all complexities in the field while tapping into the full potential of today’s available computational power. 

This could be the ultimate task we set ourselves at the DSH. Studying a large dataset of design processes, steps, and creative acts might reveal codifiable patterns that could be integrated into a unified and conventionalised evaluation system. This study would target large and diverse groups of designers and users in general, including their knowledge exchange with other involved professionals. Could such an integral evaluation system, combined with data gathering, finally offer the prospect of developing a truly architectural algorithm? Eventually, this too will encounter issues that require further study, such as deciding who to involve and how to navigate wisely between the highs and lows of the wisdom of crowds: [16] can we still trust the emerging patterns detected by machine-learning algorithms to constitute proper architectural liberty and, thus, good architecture? We will proceed vigilantly, but we must explore this path to avoid further fragmentation, non-crucial automation, and the propagation of false complexity. 

References

[1] N. Leach, Architecture in the Age of Artificial Intelligence: An Introduction for Architects (London; New York: Bloomsbury Visual Arts, 2021).

[2] The Design Sciences Hub [DSH] is a valorisation team of the Antwerp Valorisation Office. The DSH works closely with IDLab Antwerp for machine-learning components and with the UAntwerp research group Energy and Materials in Infrastructure and Buildings [EMIB] to study energy neutrality within the Retrofit Project. Although the project will be led and executed by the University of Antwerp, private industry is involved as well. Four real estate partners – Bopro, Immogra, Quares and Vooruitzicht – are financing and steering this project. So is The Beacon, maximising the insights from digital technology companies. Also see: https://www.uantwerpen.be/en/projects/project-design-sciences-hub/projects/retrofit/

[3] H.W. Kruft, A History of Architectural Theory: from Vitruvius to the present (London; New York: Zwemmer Princeton Architectural Press, 1994).

[4] P. Schumacher, The Autopoiesis of Architecture: A New Framework for Architecture. Vol. 1 (Chichester: John Wiley & Sons Ltd, 2011). P. Schumacher, The Autopoiesis of Architecture: A New Agenda for Architecture. Vol. 2 (Chichester: John Wiley & Sons Ltd, 2012).

[5] Ibid.

[6] Delve is a product of Sidewalk Labs, founded as Google’s urban innovation lab, becoming an Alphabet company in 2016. Hypar is a building generator application started by former Autodesk and Happold engineer Ian Keough. Also see www.hypar.io, www.sidewalklabs.com/delve.

[7] A. Bava, “Computational Tendencies”, In N. Axel, T. Geisler, N. Hirsch, & A. L. Rezende (Eds.), Exhibition catalogue of the 26th Biennial of Design Ljubljana. Slovenia (2020): e-flux Architecture and BIO26| Common Knowledge.

[8] H.W. Kruft, A History of Architectural Theory: from Vitruvius to the present (London; New York: Zwemmer Princeton Architectural Press, 1994).

[9] A. Bava, “Computational Tendencies”, In N. Axel, T. Geisler, N. Hirsch, & A. L. Rezende (Eds.), Exhibition catalogue of the 26th Biennial of Design Ljubljana. Slovenia (2020): e-flux Architecture and BIO26| Common Knowledge.

[10] S. Verbruggen, The Critical Residue: Creativity and Order in Architectural Design Theories 1972-2012 (2017).

[11] M. Gausa & S. Cros, Operative Optimism (Barcelona: Actar, 2005).

[12] W. S. Saunders, The new architectural pragmatism: a Harvard design magazine reader. (Minneapolis: University of Minnesota Press, 2007).

[13] R. Somol & S. Whiting, Notes around the Doppler Effect and Other Moods of Modernism. (2002) In K. Sykes (Ed.), Constructing a New Agenda: Architectural Theory 1993-2009 (1st ed., pp. 188-203). (New York: Princeton Architectural Press).

[14] K. Sykes, Constructing a New Agenda: Architectural Theory 1993-2009 (1st ed., New York: Princeton Architectural Press, 2010). 

[15] S. Whiting, “The Projective, Judgment and Legibility”, lecture at the Projective Landscape Conference, organised by TU Delft and the Stylos foundation (recorded in Delft, March 2006). 

[16] P. Mavrodiev & F. Schweitzer, “Enhanced or distorted wisdom of crowds? An agent-based model of opinion formation under social influence”, Swarm Intelligence, 15(1-2), 31-46, doi:10.1007/s11721-021-00189-3; J. Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter than the Few (London: Abacus, 2005). 

The Architect and the Digital: Are We Entering an Era of Computational Empiricism? 
architectural design theory and practice, case study/studies, design education, design methods, digital design, parametric design
Giovanni Corbellini, Luca Caneparo

giovanni.corbellini@polito.it

The close integration of design with computational methods is not just transforming the relationships between architecture and engineering; it also contributes to reshaping modes of knowledge development. This paper critically probes some issues related to this paradigm shift and its consequences on architectural practice and self-awareness, looking at the potential of typical teaching approaches facing the digital revolution. The authors, who teach an architectural design studio together, coming from different backgrounds and research fields, probe the topic according to their respective vantage points. 

Over the last few decades, a mode of design agency has developed that uses digital tools for the interactive generation of solutions, dynamically linking analytic and/or synthetic techniques. 

The analytic techniques make use of simulation: the capability to forecast certain aspects of building performance. While in conventional practice simulation usually plays a consulting role in the later stages of the design process, in these new forms of agency it works as a generative device from the earliest phases. 

The synthetic techniques address, on the other hand, more organic, para-biological concepts – for instance “emergence, self-organization and form-finding” – looking for “benefits derived from redundancy and differentiation and the capability to sustain multiple simultaneous functions”. [1] 

Structures and their conception stand out as the part of architectural design where the digital impact shows its clearest consequences. Candela, Eiffel, Nervi and Torroja maintained, for instance, that calculation has to proceed in parallel with an intuitive understanding of form: “The calculation of stresses”, writes Torroja, “can only serve to check and to correct the sizes of the structural members as conceived and proposed by the intuition of the designer”. [2] “In this fundamental phase of design”, Nervi adds, “the complex formulas and calculation methods of higher mathematics do not serve. What are essential, however, are rough evaluations based on simplified formulas, or the ability to break down a complex system into several elementary ones”. [3] At the time, the computational aspects were overwhelmingly cumbersome; the Frontón Recoletos required of Torroja one hundred and fifty-eight pages of calculations with approximate methods. Classical analytical procedures provided limited tools for simulation: “It was mandatory for the engineer to supplement his analyses with a great deal of judgment and intuition accumulated over years of experience. Empiricism played a great role in engineering design; while some general theories of mechanical behaviour were available, methods for applying them were still under development, and it was necessary to fall back upon approximation schemes and data taken from numerous tests and experiments”. [4] 

After the epoch of Nervi and Torroja, research and practice have been deeply influenced by the combined actions of computation towards a unifying approach to the different theories in mechanics, thanks to exponential performance improvements in hardware, as well as achievements in symbolic and matrix languages and in the discretisation methods (e.g., boundary and finite element methods) implemented in software. At present, the wide availability of computational methods and tools can produce numerical simulations of complex forms, with the expectation of providing a certain degree of knowledge and understanding of mechanics, energetics, fluids, and acoustics. The compelling possibilities of boundary and finite element methods, plus finite difference and volume methods, have produced a shift from the awareness of the pioneers of the science of construction that not everything can be built, [5] to the “unprecedented morphology freedom” of the present. [6] Therefore, “We are limited in what we can build by what we are able to communicate. Many of the problems we now face”, as Hugh Whitehead of Foster and Partners points out, “are problems of language rather than technology. The experience of Swiss Re established successful procedures for communicating design through a geometry method statement”. [7] 
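As a minimal illustration of what such discretisation methods do (a toy example of ours, not taken from the sources cited), the sketch below solves the one-dimensional Poisson problem -u'' = f with fixed ends, the simplest relative of the finite-element and finite-difference formulations mentioned above: the continuous mechanics problem becomes a tridiagonal linear system solved in a few lines.

```python
# Toy finite-difference solver: -u'' = f on (0, 1) with u(0) = u(1) = 0.
# Second-order central differences reduce the continuous problem to a
# tridiagonal linear system, solved here by the Thomas algorithm.

def solve_poisson_1d(f, n):
    """Return u at the n interior nodes of a uniform grid on (0, 1)."""
    h = 1.0 / (n + 1)
    # Row i of the system: -u[i-1] + 2 u[i] - u[i+1] = h^2 * f(x_i)
    a = [-1.0] * n                       # sub-diagonal
    b = [2.0] * n                        # main diagonal
    c = [-1.0] * n                       # super-diagonal
    d = [f((i + 1) * h) * h * h for i in range(n)]
    # Forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# Uniform load f = 1: the exact solution is u(x) = x(1 - x)/2.
u = solve_poisson_1d(lambda x: 1.0, 99)
```

For a uniform load the nodal values coincide with the exact parabola u(x) = x(1 - x)/2; production structural solvers generalise the same pattern to millions of unknowns in two and three dimensions.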

 “Parametric modelling”, Foster and Partners stated, “had a fundamental role in the design of the tower. The parametric 3D computer modelling process works like a conventional numerical spreadsheet. By storing the relationships between the various features of the design and treating these relationships like mathematical equations, it allows any element of the model to be changed and automatically regenerates the model in much the same way that a spreadsheet automatically recalculates any numerical changes. As such, the parametric model becomes a ‘living’ model – one that is constantly responsive to change – offering a degree of design flexibility not previously available. The same technology also allows curved surfaces to be ‘rationalized’ into flat panels, demystifying the structure and building components of highly complex geometric forms so they can be built economically and efficiently”. [8] 
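The spreadsheet analogy can be illustrated with a toy model. The 5° floor-to-floor rotation and the six “spokes” come from the published description of the tower (see Fig. 2); everything else is our own simplification, not Foster + Partners’ actual parametric model. The point is only that changing one stored parameter regenerates every dependent value, just as a spreadsheet recalculates.

```python
# Toy parametric model: each floor's spoke layout is derived from two
# stored parameters, so changing either one "recalculates" all floors.
# The 5-degree rotation and six spokes follow the published description
# of the Swiss Re tower; the model itself is an illustrative sketch.

def spoke_angles(floor, rotation_per_floor=5.0, spokes=6):
    """Plan angles (degrees) of a floor's spokes, each floor rotated
    relative to the one below around the central core."""
    base = (floor * rotation_per_floor) % 360.0
    return [(base + k * 360.0 / spokes) % 360.0 for k in range(spokes)]

ground = spoke_angles(0)                              # evenly spaced spokes
first = spoke_angles(1)                               # rotated 5 degrees
variant = spoke_angles(1, rotation_per_floor=10.0)    # one parameter changed
```

Editing `rotation_per_floor` regenerates every floor of the model at once; in a real parametric system the same dependency propagation drives panelisation, structure, and fabrication data.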

Of course, communication is here understood within a very specific part of the design process, mainly connected with fabrication issues and their optimisation, but it is a concept that involves many layered levels of meaning. [9] Curiously, this shift from the physical to the immaterial reminds us of the same step made by Leon Battista Alberti, who conceived design as a purely intellectual construct and was obsessed by its transmission from idea to built form without information decay. [10] Digital innovation promises to better connect the engineering process (focus on the object) with the wider reality (the architectural perspective), enabling design teams to deal with increasingly complex sets of variables. Freedom comes, however, with the disruption of the design toolbox, usually more defined by constraints than capabilities, so that the resulting wild fluctuations of effects seem increasingly disconnected from any cause. Design choices are therefore looking for multifaceted narrative support – and the “Gherkin”, with its combination of neo-functional-sustainable storytelling and metaphorical shape, turns out to be emblematic from this point of view too. [11] 

Furthermore, extensive numerical simulations raise the question of how far they prove reliable, both because of their intrinsic functionality and because of the “black box” effect of their algorithmic devices. The latter, especially in the latest applications of artificial intelligence such as neural networks, produce results through processes that remain obscure even to their designers, let alone to less-aware users. Besides, the coupling of simulation with generative modelling through interactivity may not help the designer understand that, in several cases, (small) changes in the (coded) hypotheses can produce radically different solutions. Thus, the time spent simulating alternatives can often be spent more profitably working on different design hypotheses, and on architectural, technological and structural premises, perhaps with simpler computational models. 

Are we entering an era of computational empiricism, as some authors maintain? [12] 

Languages of innovation 

Generative modelling, morphogenesis, parametric tooling, computational and performative design… all these apparatuses have brought methodological innovation and a closer integration among different disciplines, bridging the gaps between fields. Modelling the project, the main common aim of this effort, has from the beginning leaned on logic and mathematics as a shared lingua franca. [13] Since the 1960s, applied mathematics has extended its applications through the formalisation process of information technology, which has developed the tools and models beneficial to the purposes of science and technology. Information and communication technology puts into effect “the standardisation and automation of mathematical methods (and, as such, a reversal of the relationship of domination between pure mathematics and applied mathematics and, more generally, between theory and engineering)”. [14] 

The redefinition of roles between theories and techniques, when applied to design, began in mathematics and physics with a metamorphosis of language, [15] with a shift towards symbolic languages that have gone beyond the mechanics of structures and the thermodynamics of buildings, subjecting them to automatic calculation and finalising them in computation. [16] “Today, it is a widely held view that the advent of electronic computation has put an end to this semiempirical era of engineering technology: sophisticated mathematical models can now be constructed of some of the most complex physical phenomena and, given a sufficiently large computer budget, numerical results can be produced which are believed to give some indication of the response of the system under investigation”. [17] 

The straightforward capability to model and simulate projects, supported by the evidence of results, has built confidence in the emerging computational tools, highlighting the dualism between the desire to make these devices usable by a wide range of practitioners, in a variety of cases and contexts, and the need to ground a deeper understanding within a reflective practice. Moreover, the very nature of digital tools exposes designers to an increasing risk of becoming “alienated workers” who, in Marxian terms, neither own their means of production in actuality – software companies lease their products and protect them against unauthorised modifications – nor, above all, conceptually, since such complex machinery requires a specifically dedicated expertise. Therefore, among the many questions this paradigm shift raises about the redefinition of theories and practices and their mutual relationship, a main concern regards educational content and approaches, in terms of their ability to provide useful knowledge to future practitioners and to aid their impact on society. In the architectural design field – which traditionally crossbreeds arts, applied sciences, and humanities in order to fulfil a broad role of mediation between needs and desires – this means dealing with an already contradictory pedagogic landscape in which ideologically opposite approaches (namely method-oriented and case-oriented pedagogies) overlap. 

The specific case of architectural design teaching does not escape this tension between methodological ambitions, nurtured by modern thinking and its quest for rationalisation, and the interplay between generations, languages and attitudes involved in learning through examples – even with its paradoxical side effects. One would in fact expect that a “positive” (according to Christopher Alexander), rule-based training should yield more open-ended outcomes than the “negative”, academic, disciplinary learning by copying. [18] But, on the one hand, the methodological approach implies an idea of linear control – towards optimisation and performance as well as in social and political terms – which reveals its origin in Enlightenment positivism. The Durandian apparatuses so widespread after World War II, with their proto-algorithmic design grammars, accordingly ended up reproducing strict language genealogies. A similar trend seems to be emerging nowadays, in the convergence toward the same effective solutions in arts, sports, and whatever else, as a by-product of digital efficiency – which even the very technical camp is questioning. On the other hand, tinkering with the interpretation and application of examples makes possible the transmission of the many unspoken and unspeakable aspects connected to any learning endeavour. Getting closer to “good” examples – testing their potential according to specific situations – allows their inner quality to be grasped, reignited in different conditions, and finally transcended. Since forgetting requires something to be forgotten, Alexander is somehow right in framing this teaching attitude as “negative”: ironically, imitation provides the learning experience through which personal voices can emerge and thrive. 

Challenges ahead 

Turpin Bannister considered that in “an age of science”, architects “abandoned the scientific approach as detrimental to the pure art of design. On even the simplest question they acquiesced to their engineer and so-called experts”. [19] The pervasive penetration of computation in design would probably have met Bannister’s approval. The consequences and methodological implications are so far-reaching that they raise questions: how must education deal with the increased role of interactive computation in architectural design? And, more generally, with techno-science, its languages and methodologies? 

Architectural design still relies on a “political” attitude, and mediation between the “two cultures” [20] is a fundamental asset of its disciplinary approach. Even though the unity of knowledge has disappeared with the advent of modern science, as Alberto Pérez-Gómez stated, [21] we ideally aspire to become like renaissance polymaths, mastering state-of-the-art skills in the most disparate fields. But in the long time that separates us from Brunelleschi and Alberti, the amount of knowledge required by the different aspects of the practice, even those which are specifically architectural, has grown exponentially, and trying to get a minimum of mastery over it would demand a lifelong commitment and extraordinary personal qualities. Digital prostheses promise to close the gap between the desire for control over the many facets of the design process and the real possibility of achieving it. Some consequences of the augmented agency provided by new information and communication technologies are already evident in the overlapping occurring in the expanded field of the arts, with protagonists from different backgrounds – visual arts or cinema for instance – working as architects or curators and vice versa. [22] The power of the digital to virtually replace those “experts”, to whom, according to Turpin Bannister, architects outsource their own choices, seems to act therefore as an evolutionary agent against overspecialisation, confirming the advantage Bucky Fuller attributed to the architect as the last generalist. [23] 

However, without understanding and manipulating what happens within the black box of the algorithm, we still face the risk of being “designed” by the tools we put our trust in, going on to accept a subordinate position. Speaking machine, as John Maeda has pointed out, [24] is becoming necessary in order to contribute innovatively to any design endeavour. The well-known Asian American researcher, designer, artist and executive comes from a coding background, later supplemented with the study and practice of design and arts (along with business administration). His educational path and personal achievements indicate that such an integration of expertise is possible and desirable, even though his logical-mathematical grounding is likely the reason he mostly works with the immaterial, exploring media design and the so-called experience economy. Architectural schools are therefore facing the issue of if, when, and how to introduce coding skills into their already super-crammed syllabuses – from which, very often, visual arts, philosophy, law, storytelling and other much needed approaches and competencies are absent. One can argue that coding would provide young professionals with expertise they could immediately use in the job market, enabling them to better interact with contemporary work environments. On the other hand, a deeper perspective shows how the “resistance” of architectural specificity produced exceptional results in revolutionary times: academic education acted for the Modern masters as both a set of past, inconsistent practices to overcome and a background that enhanced the quality of their new language. 

Digitalisation looks like a further step along the process of the specialisation of knowledge, which unfolded hand-in-hand with the development of sciences, techniques, and their languages. Since the dawn of the modern age, architects have often tried to bring together a unified body of knowledge and methodology; first around descriptive geometry, and then around geometry as a specific discipline which “gives form” to mathematics, statics and mechanics. “Geometry is the means, created by ourselves, whereby we perceive the external world and express the world within us. Geometry is the foundation”, Le Corbusier writes in the very first page of his Urbanisme, trying to keep pace with modernisation and establishing a new urban planning approach according to its supposed “exactitude”. [25] But while hard sciences and their technical application rely on regularity of results in stable experimental conditions, architects are still supposed to give different answers to the same question – or, more precisely, to always reframe architectural problems, questioning them in different ways. 

Considering the volatility of the present situation, opening up and diversifying the educational offer seems a viable bet, more so than the attempt to formulate a difficult synthesis. Only by being exposed to the conflict between the selective, deterministic optimisation promise of code-wise design, and the dissipative, proliferating, unpredictable interpretation of cases can architects find their own, personal way to resolve it. 

Fig. 1 Norman Foster’s sketch for the headquarters of the Swiss Reinsurance Company, 30 St Mary Axe, in the historic core and the financial district of the City of London. Foster + Partners designed a fifty-storey tower 590ft (180 m) with a magnificent organic form that adds a distinctive identity to the skyline of the city.
Fig. 2 Norman Foster’s sketch illustrates the generative process: each floor is rotated by 5° relative to the one below around the central core, which contains the pillars bearing the vertical loads, the services, the stairs, and the lifts. From the core, six ‘spokes’ host the floorspace at each level. Each floorspace is detached from the next by a void triangular area about 20° wide. The vertically open areas create light wells for the height of the tower, up to the thirty-second floor, and wind in coils to channel ventilation and natural lighting into the building.
Fig. 3 The sketch of Norman Foster for the fully-glazed domed restaurant atop of the tower.
Fig. 4 The tapering profile of the tower allows a reduced footprint at street level, 160 ft (49 m) in diameter, and reaches the largest diameter of 184 ft (56 m) at the 21st level, with the spatial climax at the glazed domed roof. The diagrid structure parametrises the A-shaped frames and relieves the lateral loading from the central core. The A-shaped frames develop over two floors and diminish in proportion from the 21st level towards both the pitched dome and the lobby level.

Fig. 5 Norman Foster’s sketch makes clear how the A-shaped frames take on the diagrid geometry, with two diagonal columns of tubular steel of 20 in (508 mm) diameter, reflected in the diamond-shaped backgrounds of the window panes.

References

[1] S. Roudavski, “Towards Morphogenesis in Architecture”, International Journal of Architectural Computing, 3, 7 (2009) https://www.academia.edu/208933/Towards_Morphogenesis_in_Architecture (accessed 24 March 2021).  

[2] E. T. Miret, J. J. Polivka and M. Polivka, Philosophy of Structures, (Berkeley: University of California Press, 1958), 331.  

[3] P. L. Nervi, Aesthetics and Technology in Building (Cambridge, Mass.: Harvard University Press; London: Oxford University Press, 1966), 199. 

[4] T. Oden, K.-J. Bathe, “A commentary on Computational Mechanics”, Applied Mechanics Reviews, 31, 8 (1978), 1055-1056. 

[5] “We can now wonder whether any type of imaginary surface, is constructible. The answer is in the negative. So: how to choose and how to judge an imagined form?” E. T. Miret, J. J. Polivka and M. Polivka, Philosophy of Structures, (Berkeley: University of California Press, 1958) 78. 

[6] M. Majowiecki, “The Free Form Design (FFD) in Steel Structural Architecture–Aesthetic Values and Reliability”, Steel Construction: Design and Research, 1, 1 (2008), 1. 

[7] A. Menges, “Instrumental geometry”, Architectural Design, 76, 2 (2006), 46. 

[8] Foster and Partners, “Modeling the Swiss Re Tower”, ArchitectureWeek, 238 (2005), http://www.architectureweek.com/2005/0504/tools_1-1.html (accessed 10 April 2022) 

[9] “[Marjan] Colletti aptly quotes Deleuze stating: ‘The machine is always social before it is technical.’ The direct interaction between the designer and the equipment provides a feedback system of communication. He argues that the computer should ‘be regarded neither as abstract nor as machine’, but rather as an intraface.” C. Ahrens, “Digital Poetics, An Open Theory of Design-Research in Architecture”, The Journal of Architecture, 21, 2, (2016), 315; Deleuze’s passage is in G. Deleuze, C. Parnet, Dialogues (New York: Continuum International Publishing, 1987), 126-12; Colletti’s in M. Colletti, Digital Poetics, An Open Theory of Design-Research in Architecture (Farnham: Ashgate, 2013), 96. 

[10] “We shall therefore first lay down, that the whole Art of Building consists in the Design, and in the Structure. The whole Force and Rule of the Design, consists in a right and exact adapting and joining together the Lines and Angles which compose and form the Face of the Building. It is the Property and Business of the Design to appoint to the Edifice and all its Parts their proper Places, determinate Number, just Proportion and beautiful Order; so that the whole Form of the Structure be proportionable. Nor has this Design any thing that makes it in its Nature inseparable from Matter; for we see that the same Design is in a Multitude of Buildings, which have all the same Form, and are exactly alike as to the Situation of their Parts and the Disposition of their Lines and Angles; and we can in our Thought and Imagination contrive perfect Forms of Buildings entirely separate from Matter, by settling and regulating in a certain Order, the Disposition and Conjunction of the Lines and Angles.” L. B. Alberti, The Ten Books of Architecture (London: Edward Owen, 1755 [1450]), 25. 

[11] A. Zaera-Polo, “30 St. Mary Axe: Form Isn’t Facile”, Log, 4 (2005). 

[12] See – along with Oden, Bathe, and Majowiecki – Paul Humphreys, “Computational Empiricism”, Topics in the Foundation of Statistics, ed. by B. C. van Fraassen (Dordrecht: Springer, 1997) and P. Humphreys, Extending Ourselves: Computational Science, Empiricism, and Scientific Method. (New York: Oxford University Press, 2004). 

[13] C Alexander, Notes on the Synthesis of Form (Cambridge, Mass.; London: Harvard University Press, 1964). 

[14] J. Petitot, “Only Objectivity”, Casabella, 518, (1985), 36. 

[15] E Benvenuto, An Introduction to the History of Structural Mechanics (New York, N.Y.: Springer-Verlag, 1991). 

[16] M. Majowiecki, “The Free Form Design (FFD) in Steel Structural Architecture–Aesthetic Values and Reliability”, Steel Construction: Design and Research, 1, 1 (2008), 1. 

[17] T. Oden, K.-J. Bathe, “A commentary on Computational Mechanics”, Applied Mechanics Reviews, 31, 8 (1978), 1056. 

[18] “There are essentially two ways in which such education can operate, and they may be distinguished without difficulty. At one extreme we have a kind of teaching that relies on the novice’s very gradual exposure to the craft in question, on his ability to imitate by practice, on his response to sanctions, penalties, and reinforcing smiles and frowns. … The second kind of teaching tries, in some degree, to make the rules explicit. Here the novice learns much more rapidly, on the basis of general ‘principles’. The education becomes a formal one; it relies on instruction and on teachers who train their pupils, not just by pointing out mistakes, but by inculcating positive explicit rules.” C. Alexander, Notes on the Synthesis of Form (Cambridge, Mass.; London: Harvard University Press, 1964), 35. 

[19] T. C. Bannister, “The Research Heritage of the Architectural Profession”, Journal of Architectural Education, 1, 10 (1947). 

[20] C. P. Snow, The Two Cultures and the Scientific Revolution  (Cambridge University Press, 1962). 

[21] A. Pérez-Gómez, Architecture and the Crisis of Modern Science (Cambridge, Mass.: The MIT Press, 1983). 

[22] “Artists after the Internet take on a role more closely aligned to that of the interpreter, transcriber, narrator, curator, architect.” A. Vierkant, The Image Object Post-Internet, http://jstchillin.org/artie/vierkant.html (accessed 21 September 2015). The artist Olafur Eliasson, for instance, started up his own architectural office (https://studiootherspaces.net/, accessed 30 March 2021), and the film director Wes Anderson authored the interior design of the Bar Luce, inside the Fondazione Prada in Milan. 

[23] “Fuller … noted that species become extinct through overspecialization and that architects constitute the ‘last species of comprehensivists.’ The multidimensional synthesis at the heart of the field is the most invaluable asset, not just for thinking about the future of buildings but for thinking about the universe. Paradoxically, it is precisely when going beyond buildings that the figure of the architect becomes essential.” Mark Wigley, Buckminster Fuller Inc.: Architecture in the Age of Radio (Zürich: Lars Müller, 2015), 71. 

[24] J. Maeda, How to Speak Machine: Laws of Design for a Digital Age (London: Penguin Business, 2019). 

[25] Le Corbusier, The City of Tomorrow and its Planning (London: John Rocker, 1929 [1925]), 1. 

Geocities’ neighbourhoods collage, 2022. Image credit: Alessandro Celli and Ibrahim Kombarji.
Fostering Kinship: GeoCities’ Algorithmic Neighbourhoods
Algorithmic Neighbourhoods, civic participation, global village, Kinship, proximity, virtual city
Alessandro Celli, Ibrahim Kombarji

celli.alce@gmail.com

The remains of a virtual city – possibly the first of its kind – can be found on servers all over the world. [1] GeoCities was launched as a series of districts, alleyways, and neighbourhoods where its inhabitants could build their own webpages. For the first time, the internet was given a structure its audience could relate to on a human scale. Today, around 650 gigabytes of GeoCities’ data remain accessible thanks to archiving efforts that recovered some of the 38 million individual websites that existed at the time of GeoCities’ final closure in 2009. [2] [3] [4] [5] 

GeoCities was first launched in 1994 by David Bohnett and Dick Altman as a web hosting service, allowing users to store and manage their website files. [6] Its initial name, Beverly Hills Internet, already hinted at the creators’ intention to develop a neighbourhood of websites, which would later mature into a geography of cities. The service offered a free plan with a generous two megabytes of storage to all users, known as homesteaders, who were asked to choose a neighbourhood to reside in. [7] All of the city’s inhabitants occupied a defined space, in a defined surrounding, with their homepages arranged into neighbourhoods. Each cluster of pages sat spatially close to others sharing similar content, while each neighbourhood was defined by the broader topic into which its pages fit. The company thus created and thematically organised its web directories into six neighbourhoods: Colosseum, Hollywood, RodeoDrive, SunsetStrip, WallStreet and West Hollywood. New neighbourhoods, as well as their suburbs, were added as the site grew, and became part of each member’s unique web address through a sequentially assigned URL “civic address” (e.g., “www.geocities.com/RodeoDrive/54”). Chat rooms and bulletin boards were added soon after, fostering the city’s rapid growth. [8] Each neighbourhood had its own forum, live chat, and even a daily list of the homesteaders celebrating their birthday.  

By December 1995, when it changed its name to GeoCities, Beverly Hills Internet had over 20,000 homesteaders and over 6 million page-views per month. [9] Within this expansive organisation of webpage clusters, a seamless sense of proximity between those who shared similar ideas naturally fostered human behaviours such as kinship and affection.  

Neighbourhoods are intrinsic parts of our urban fabric and a self-evident manifestation of how the cities we live in are structured. [10] Yet, we still struggle to grasp a proper definition of their totality, given the complex layers within them. In 1926, progressive educator David Snedden defined the term neighbourhood as “those people who live within easy ‘hallooing’ distance”, illustrating it as a space where one can easily catch the attention of another. [11] 

This essay will explore the notion of an algorithmic neighbourhood, one that reflects – and derives from – parts of a physically built, “hallooing” urban neighbourhood. The internet lexicon of today descends seamlessly from a long lineage of architectural and spatial terminologies, such as firewall, coding architecture, homepage, platform, address, path, room, and location, among many others. In the translation from a physical reality that is shaped within our Latourian “critical zone”, some of these terminologies have shifted in their meaning when applied to new forms of digital space. [12] A parallel “digital critical zone” is generated, within which these algorithmic neighbourhoods sit.  

Figure 1 – Archived webpage “Tia”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Cafe/3232/newpics.html
Figure 2 – Archived webpage “The Gardening Girl”, Picket Fence neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/PicketFence/1054/

Neighbourhood as a site of kinship and proximity  

The artisanal web built through GeoCities allowed “user-generated content”, which had not yet adorned itself with pompous names or revolutionary pretensions. [13] It proved that even before the invention of Web 2.0 – which was later aimed at implementing social-media profiles – the web was, above all, a story of human beings who interact with one another and discuss the subjects close to them through the means at hand.  

Urban studies professor Benjamin Looker defines the United States as a nation of neighbourhoods. [14] This essay extends that reading of the urban fabric to the communities of algorithmic kinship that exist within GeoCities’ virtual borders. Like physically built neighbourhoods, GeoCities’ urban structure fostered kinship and affection among its inhabitants. PicketFence, for example, was built to let residents share tips and advice on “Home Improvement Techniques”. The more experienced “Home Improvement” users became the neighbourhood’s go-to people for navigating daily issues, reinforcing a shared communal knowledge. [15] 

West Hollywood, which was subdivided into “Gay, Lesbian, Bisexual, and Transgender topics”, is another example of such algorithmic kinship. This neighbourhood was a predecessor of today’s social-media spaces where users can gather and exchange (sometimes hidden or undisclosed) realities across communities. West Hollywood’s users could leave messages, sign a guestbook, and share contact information with one another. The neighbourhood gave people an opportunity to share similar experiences and daily struggles, form alliances with other communities, and tackle queer rights collectively. Moreover, West Hollywood fostered arenas of “block-level solidarity”, where “bonds and loyalties – whether as enacted on real-life pavements or as represented in stories, images, and speeches”, allowed connections between the intimate lives of users, their GeoCities pages, and the “city block”. [16] 

Proximity and reciprocal kinship were thus a foundational feature of GeoCities’ design: individuals, together with their personal pages, were at the centre of the Internet. In contrast, today’s platforms and digital services are structured in such a nested way that proximity is sometimes inconceivable, and individuals are reduced to anonymous consumers of information. Today, the information communications technology industry (ICT) is at the centre of the Internet. [17] Social media platforms still provide virtual spaces that allow communities to gather and share content with one another, fostering a certain degree of human interaction. However, the very structure within which they operate is fundamentally different from the ones used in early platforms such as GeoCities. While before, the digital matter – text, images, links – was spatially placed onto the transparent structure of the webpage, and you could clearly see the location of a jpeg file within the HTML lines of code, now it all runs through opaque interfaces. [18] These perfect facades are quasi-impenetrable for users, and hide the “black boxes” where algorithms operate as instruments of measurements and perception. [19] As a counterpart to algorithmic neighbourhoods, Caroline Busta defines social-media platforms as a grand bazaar, “with lanes of kiosks, grouped roughly by trade, displaying representative works to passers-by. At the back of the mini-shop is a trap door with stairs leading to a sub-basement where deals can be done”. [20] This multi-layered opaque architecture of the bazaar illustrates the complex structure that currently governs social-media platforms. In contrast, the algorithmic neighbourhoods of GeoCities attempted to encourage a transparent vision of the modes of portraiture in the digital realm, and defined tools for users to relate directly to it. 

Figure 3 – Archived webpage “Gay Ukraine International, Kiev, UA”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Club/1213/
Figure 4 – Archived webpage “Welcome to the deep Heart of TEXAS and Our Home”, Picket Fence neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/PicketFence/1011/

Neighbourhood as a site constantly ‘under construction’   

A digital archaeologist scavenging through GeoCities’ remains would come across a vast number of “under construction” signs strewn across the neighbourhood’s alleys, outlining its “work-in-progress” state. Surrounded by virtual scaffolding, the pages under construction were built, line after line of code, by the homesteaders, slowly undergoing organic changes and upgrades. Each individual page was constructed by its creator, from its foundations to its decorative elements, in the HTML format – the HyperText Markup Language. The coding language not only allowed users to build their pages from scratch, but also to introduce multimedia resources such as JPGs and GIFs. A page under construction implies that there was a process of creation, which aimed at an eventual final form. Similar to a construction site, the individual web page could be openly observed throughout its making, as it could be visited by GeoCities inhabitants at any moment in time. It was a facade yet to come; a page that was shaped by the algorithmic manipulation of its users as they added another ‘about me’ section, a ‘guestbook’ to be signed, or a photo gallery of low-res pictures – to fit within the 2 megabytes limit – portraying their personal lives. 

By contrast, the architecture of newer webpages and content aggregators is conceived with an opaque algorithmic structure. Their virtual space is not one of proximity and distance based on intelligible parameters, but one of hierarchical appearance and disappearance based on unintelligible instruments of perception. [21] For instance, Google’s page-ranking algorithm mutates and evolves over time, leaving no traces behind except the ones it uses to train itself. When presented with Google search results, users face a series of temporary choices resulting from a very intricate mechanism of automatic selection and classification. Vladan Joler defines algorithms as “instruments of measurements and perception”; algorithmic architecture can thus be described as an operation of the more-than-human. The current Internet is built on data collection and consumer profiling, rather than through a conscious construction process carried out by its users. 

While the architectural backdrop of a platform is constantly being redefined based on who is interacting with it, its facade – the interface – is pure and familiar. This interface, which we constantly visit, nonetheless obscures what lies beneath it. Even if it is a clear manifestation of rules, telling you what you can or cannot do, it does not reveal through which mechanisms it gathers and conveys information, nor how the user’s actions are exploited for profit. The algorithmic design of GeoCities, based on neighbourhood alliances, had not yet allowed for this opacity, avoiding instances of power structures, black boxes, and opaque interfaces. It also avoided entering the black hole of rhizomatic surveillance that now permeates the virtual realm. [22] [23]  

Algorithmic neighbourhoods can also help to expose the physical infrastructure hosting them. Similarly to the opaqueness of interfaces, our built neighbourhoods are shaped by an underground infrastructure of fleshly cables and routers. Data centres, globally connected by a web of cables, host our digital selves, which wander through the unmeasurable geographies of the Internet. They are out of reach, transcending any geographical boundary, as they mirror the ubiquitous nature of algorithmic spaces. Cables and data centres are, in fact, the physical side of the Internet, its thickness on our planet. They are the physical neighbourhood mirroring the algorithmic one, hosting the latter through servers, cables, connections, and energy. The physical neighbourhood which creates the digital infrastructure is not, however, a direct reflection of the algorithmic one. It is instead expansive, ubiquitous, fragmented, and absent, as it is designed to operate under strict safety protocols and privacy regulations.  

Figure 5 – Archived webpage “Q Pals”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Cafe/3113/
Figure 6 – Archived webpage “Monica Munro”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Club/2788/

Neighbourhood as a site of civic participation and resistance  

In June 1998, in order to boost brand awareness and advertising impressions, GeoCities introduced a watermark on its users’ web pages. [24] The watermark, much like an on-screen graphic on some TV channels, was a transparent floating GIF image that used JavaScript to stay displayed at the bottom right of the browser window. Many users felt that the watermark interfered with their website design, and threatened to move their pages elsewhere. A year later, in 1999, Yahoo bought the platform and consequently implemented its “Terms of Service agreement”, prompting a unanimous reaction by the homesteaders. [25] The “Haunting of GeoCities” was the users’ response to this threat to content rights and access control. Each neighbourhood became a ghost town, where homepages were stripped of their content and colours and replaced with excerpts of the offending Terms of Service. As authors Reynolds and Hallinan point out, “users sensed that Yahoo’s unfettered access to this content threatened their creative control and diluted their power to make decisions about how and where to display their content. … some enterprising homesteaders sought to foil Yahoo’s legal and digital access to their intellectual property by removing it from the service altogether”. [26] The collective operation, moreover, represented a strategic mobilisation of GeoCities’ design, defined by co-founder David Bohnett as “a bottoms-up, user-generated content mode”. [27] [28] The homesteaders’ remarkable political response allowed them to preserve a degree of control over their content, interfering with the dominating “Terms of Service agreement” which regulates, even more so today, every action we take within a platform. 

The “Haunting” protest represented a point of resistance against the tendency of tech giants to channel social traffic through a corporate digital platform ecosystem – a ubiquitous model in today’s internet. [29] The organised response by the homesteaders was only possible by virtue of the very architecture of GeoCities. Neighbourhoods allowed a bottom-up response that could counter the overarching corporate control put in place by Yahoo. It was a gathering empowered by proximity and affection, one that could exploit the temporary nature of the homepages’ construction as a medium for political change. In 2009, in response to the termination of GeoCities by Yahoo, new mechanisms of neighbourly rebuttal emerged. The German hosting provider JimdoWeb, for instance, attempted to host the nomad homesteaders by launching the Lifeboat for GeoCities webpage. Simultaneously, internet archivists began meticulously archiving each homepage of GeoCities, a countering act that preserved memory and gathered the residues of the city.  

The archived remains of the virtual city stand as an alternative approach to the complexity and opaqueness of the algorithmic layering of contemporary web-hosting services, as much as they reveal the ‘trans-scalar’ infrastructure of the Internet. [30] These neighbourly entanglements help us make sense of the current digital “global village”, offering an entry point to analyse how it is being shaped by the effects of globalisation, market economies, and imprudent media. [31] [32] Moreover, they display how the global village is being governed by algorithmic interdependencies, which in turn affect the architectural formations in both virtual and physical realities. [33]  

Figure 7 – Archived webpage “Gay Denton”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Cafe/1979/Pages/gaydenton.html
Figure 8 – Geocities’ neighbourhoods collage, 2022. Image credit: Alessandro Celli and Ibrahim Kombarji.

References

[1] Archive Team. Archiveteam.org. https://wiki.archiveteam.org/index.php?title=Main_Page (accessed April 16, 2022).

[2] R. Vijgen. “The Deleted City”, http://www.deletedcity.net/, (2017)

[3] Restorativland, “The Geocities Gallery”, https://geocities.restorativland.org/, (accessed March 1, 2022).

[4] “OoCities”, https://www.oocities.org/#gsc.tab=0, (accessed March 1, 2022).

[5] O. Lialina & D. Espenschied, “One Terabyte of Kilobyte Age”, Rhizome.org. https://anthology.rhizome.org/one-terabyte-of-kilobyte-age, (accessed March 1, 2022).

[6] A.J. Kim, Community Building on the Web: Secret Strategies for Successful Online Communities (United Kingdom: Pearson Education, 2006).

[7] B. Sawyer, D Greely, Creating GeoCities Websites, (Cincinnati, Ohio: Muska & Lipman Pub, 1999) .

[8] Ibid.

[9] C. Bassett, The arc and the machine: Narrative and new media, (Manchester: Manchester University Press, 2013).

[10] J. Jacobs, “The City: Some Myths about Diversity”, The death and life of great American cities, (New York: Random House, 1961).

[11] R. Sampson, “The Place of Context: A Theory and Strategy for Criminology’s Hard Problems”, Criminology 51 (The American Society of Criminology, 2013).

[12] B. Latour, Critical Zones: The Science and Politics of Landing on Earth, (Cambridge, MA: MIT Press, 2020).

[13]  B. Sawyer, D Greely, Creating GeoCities Websites, (Cincinnati, Ohio: Muska & Lipman Pub, 1999).

[14] B. Looker, A Nation of Neighborhoods: Imagining Cities, Communities, and Democracy in Postwar America, (Chicago: The University of Chicago Press, 2015).

[15] Ibid.

[16] Ibid.

[17] C. Busta, “Losing Yourself in the Dark”. Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/essays/losing-yourself-in-the-dark/, (accessed April 16, 2022).

[18] S.U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism,. (United States: NYU Press, 2018).

[19] V. Joler, “New Extractivism”, Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/artwork/new-extractivism/, (accessed April 16, 2022).

[20]  C. Busta, “Losing Yourself in the Dark”. Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/essays/losing-yourself-in-the-dark/, (accessed April 16, 2022).

[21]  V. Joler, “New Extractivism”, Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/artwork/new-extractivism/, (accessed April 16, 2022).

[22] D. Savat, “(Dis)Connected: Deleuze’s Superject and the Internet”, International Handbook of Internet Research, 423–36 (Dordrecht: Springer, 2009).

[23] K.D. Haggerty, R. Ericson, “The Surveillant Assemblage”. British Journal of Sociology, 51, 4, 605-622, (United Kingdom: Wiley-Blackwell for the London School of Economics, 2000).

[24] J. Hu, “GeoCitizens fume over watermark”, CNet.com, https://www.cnet.com/tech/services-and-software/geocitizens-fume-over-watermark/ (accessed March 1, 2022).

[25] R. Ku, Cyberspace Law: Cases and Materials, (New York: Wolters Kluwer, 2016).

[26] C. Reynolds, B. Hallinan, “The haunting of GeoCities and the politics of access control on the early Web”, New Media & Society, (United States: SAGE Publishing, 2021).

[27] Ibid.

[28] B McCullough, “Interview with David Bohnett, founder of GeoCities”. Internet History Podcast, http://www.internethistorypodcast.com/2015/05/david-bohnett-founder-of-geocities/, (accessed April 16, 2022).

[29] J. Van Dijck, T. Poell, M. De Waal, The Platform Society: Public Values in a Connective World, (Oxford: Oxford University Press, 2018).

[30] A. Jaque, Superpowers of Scale, (New York: Columbia University Press, 2020).

[31] M. McLuhan, The Gutenberg galaxy: the making of typographic man (Toronto: University of Toronto Press, 1962).

[32] T. Friedman, The World Is Flat: A Brief History of the Twenty-First Century (New York: Farrar, Straus and Giroux, 2005).

Figure 8 – Extraction process – on the left the digital model, and on the right the sequence of instructions resulting from the extraction process. 
Algorithmic Representation Space
Algorithmic Abstractness, Algorithmic Design, Algorithmic Representation Space, Design Paradigms, Model Concreteness, Representation Method, Representation Space
Renata Alves Castelo Branco, Inês Caetano, António Leitão

renata.castelo.branco@tecnico.ulisboa.pt

Introduction 

Architecture has always explored the latest technological advances, which have repeatedly changed the way architects represent and conceive design solutions. Over the past decades, these changes were due, first, to the integration of new digital design tools, such as Computer-Aided Design (CAD) and Building Information Modelling (BIM), which automated formerly paper-based design processes [1], and then to the adoption of computational design approaches, such as Algorithmic Design (AD), which prompted a more pronounced paradigm shift within architectural practice. 

AD is a design approach based on algorithms that has been gaining prominence in both architectural practice and theory [2,3] due to its greater design freedom and ability to automate repetitive design tasks, while facilitating design changes and the search for improved solutions. Its multiple advantages have therefore motivated a new generation of architects to increasingly adopt the programming environments behind their typical modelling tools, going “beyond the mouse, transcending the factory-set limitations of current 3D software” [3; p. 203]. Unfortunately, its algorithmic nature makes this approach highly abstract, deviating from the visual nature of human thinking, which is more attracted to graphical and concrete representations than to alphanumerical ones.  

To bring AD closer to the means of representation architects typically use, and thereby make the most of its added value for practice, we need to lower the existing comprehension barriers, which hinder its widespread adoption in the field. To that end, this research proposes a new approach to the representation of AD descriptions – the Algorithmic Representation Space (ARS) – that encompasses, in addition to the algorithm, its concrete outputs and the mechanisms that contribute to its understanding. 

Algorithmic Representation Method and Design Paradigms

Despite the cutting-edge aura surrounding it, AD is a natural consequence of architects’ desire to automate modelling tasks. In this approach, the architect develops algorithms whose execution creates the digital design model [4] instead of manually modelling it using a digital design tool. Compared to traditional digital modelling processes, AD is advantageous in terms of precision, flexibility, automation, and ease of change, allowing architects to explore wider design spaces easily and quickly. Two AD paradigms currently predominate, the main difference between them lying in the way algorithms are represented: architects develop their algorithms either textually, according to the rules of a programming language, or visually, by selecting and connecting graphical entities in the form of graphs [5]. In either case, the abstract nature of the medium hinders its comprehension. 
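To make the contrast concrete, here is a minimal sketch of a textual AD description in plain Python (hypothetical names, not tied to any of the tools discussed): the architect writes an algorithm whose execution produces the model, so a design instance is obtained by fixing parameters rather than by modelling each element manually.

```python
import math

def tower(n_floors, radius, twist_per_floor):
    """Hypothetical textual AD description: executing it yields the model
    (here, a list of square floor outlines as 3D points) instead of the
    architect drawing each floor by hand."""
    model = []
    for floor in range(n_floors):
        angle0 = floor * twist_per_floor          # cumulative twist
        outline = [(radius * math.cos(angle0 + k * math.pi / 2),
                    radius * math.sin(angle0 + k * math.pi / 2),
                    floor * 3.0)                  # 3 m floor height, an assumption
                   for k in range(4)]             # four corners per floor plate
        model.append(outline)
    return model

# A design instance is obtained by fixing the parameters.
instance = tower(n_floors=20, radius=8.0, twist_per_floor=math.radians(5))
```

Changing `n_floors` or `twist_per_floor` and re-running the function regenerates the entire model at once, illustrating the precision, automation, and ease of change attributed to AD above.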

Algorithms are everywhere and are a fundamental part of current technology. In fact, digital design tools have long supported AD, integrating programming environments of their own to allow users to automate design tasks and deal with more complex, unconventional design problems. Unfortunately, despite its advantages and potential to overcome traditional design possibilities, AD was slow to gain ground in the field, remaining, after almost sixty years, a niche approach. One of the main reasons is that it requires architects to learn programming, an abstract task that is far from trivial. This is aggravated by the fact that, for decades, most tools have had their own programming language, in most cases limited and hard to use, as well as a programming environment providing little support for the development and comprehension of algorithmic descriptions. Examples include ArchiCAD’s GDL (1983); AutoCAD’s AutoLisp (1986) and Visual Lisp (2000); 3D Studio Max’s MAXScript (1997); and Rhinoceros 3D’s RhinoScript (2007) and Rhino.Python (2011). 

To make AD more appealing to architects and bring it closer to the visual nature of architectural design processes, visual-based AD environments have since been released. In these environments, text-based algorithmic descriptions are replaced by iconic elements that can be connected to each other in dataflow graphs [6]. GenerativeComponents (2003) is a pioneering example that inspired more recent ones such as Grasshopper (2007) and Dynamo (2011). These tools offer a database of pre-defined operations (components) that users can access by simply dragging an icon onto the canvas and providing it with input parameters. For standard tasks covered by existing components, this speeds up the modelling task considerably. Furthermore, since programs are represented by graph structures – with nodes describing the functions, and the wires connecting them describing the data transferred between them – it is easy to see which parts of the algorithm depend on others and, thus, where changes propagate. However, this is only true for small algorithms, which are a rare find in visual AD descriptions [7]. Therefore, despite solving part of the existing problems – which explains the growing popularity of this paradigm in the community – others have emerged, such as its inability to deal with more complex and larger-scale AD solutions [5,8,9]. 
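The graph structure described above can be sketched in a few lines (a deliberately simplified, hypothetical model, not the actual machinery of Grasshopper or Dynamo): nodes wrap functions, wires are explicit references to upstream nodes, and evaluation pulls data along the wires, which is what makes dependencies, and the path of change propagation, visible.

```python
class Node:
    """Minimal dataflow node: a function plus wires to upstream nodes."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs  # the "wires": explicit data dependencies

    def value(self):
        # Evaluating a node pulls data through every upstream node,
        # so a change anywhere upstream propagates here on re-evaluation.
        return self.fn(*(n.value() for n in self.inputs))

class Slider(Node):
    """Source node holding a user-set parameter."""
    def __init__(self, v):
        self.v = v
    def value(self):
        return self.v

# width -> area -> cost : a three-node graph
width = Slider(4.0)
area = Node(lambda w: w * w, width)
cost = Node(lambda a: a * 120.0, area)

width.v = 5.0  # editing the upstream parameter...
# ...propagates downstream the next time the graph is evaluated.
```

Because the wires are explicit, it is immediately visible that editing `width` affects `area` and `cost` but nothing else; in a large graph with hundreds of nodes, that same explicitness becomes the tangle the text above describes.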

In sum, AD remains challenging for most architects and a far cry from the representation methods they typically use. Human comprehension relies on concrete instances to create mental models of complex concepts [10]. Contrastingly, AD, either visual or textual, operates at a highly abstract level. This grants it its flexibility but also hinders its comprehension. 

Algorithmic Abstractness Vs Model Concreteness 

Abstraction can be regarded as the process of removing detail from a representation and keeping only the relevant features [11]. Some authors believe abstraction improves productivity: it not only focuses on the “big idea” or problem to solve [12] but also triggers creative thinking due to its vagueness, ambiguity, and lack of clarity [13].  

Abstraction in architecture can be traced back at least as far as classical antiquity. Architectural treatises, such as Vitruvius’ “Ten Books on Architecture” [14], are prime examples of abstract representations because they intend to convey not specific design instances, but rather design norms that are applicable to many design scenarios. However, the human brain is naturally more attracted to graphical explanations than textual ones [15–17], a tendency that is further accentuated in a field with a highly visual culture such as architecture. For that reason, even the referred treatises were eventually illustrated after the birth of the printing press [18]. 

The algorithmic nature of AD motivates designers to represent their ideas in an abstract manner, focusing on the concept and its formal definition. This sort of representation provides great flexibility to the design process, as a single expression of an idea can encompass a wide range of instances that match that idea, i.e., a design space. Contrariwise, most representation methods, including CAD and BIM, compel designers to rapidly narrow down their intentions towards one concrete instance, on account of the labour required to maintain separate representations for each viable alternative. 

In sum, abstraction gives AD flexibility and the ability to solve complex problems, but it also makes it harder to understand. Abstraction is especially relevant when dealing with mathematical concepts, such as recursion or parametric shapes; nature-inspired processes, such as randomness; and performance-based design principles, such as design optimisation. It is also critical when developing and fabricating unconventional design solutions, whose geometric complexity requires a design method with a higher level of flexibility and accuracy. Sadly, these are also the hardest concepts to grasp without concrete instances and visual aid. 

Nevertheless, the described comprehension barrier, apparently imposed by the abstract-concrete dichotomy, is more obvious when the AD descriptions are independent entities with little to no connection to the outcomes they produce. Figure 1 represents the current conception of AD: there is a parametric algorithm, representing a design space, which can generate a series of design models when specific parameters are provided. We propose to overthrow this notion by including the outcomes of the algorithm in the design process itself, changing the traditional flow of design creation to accommodate more design workflows and comprehension approaches.   

Figure 1 – AD workflow – an algorithm, representing a design space, generates a digital model for each design instance. 

Algorithmic Representation Space 

AD descriptions have an abstract nature, which is part of the reason they prove so beneficial to the architectural design process. However, when it comes to comprehending an AD – i.e., creating a mental model of the design space it represents – this feature becomes a burden. Human cognition seems to rely heavily on the accumulation of concrete examples to form a more abstract picture [10]. For this reason, we advocate that, for a better comprehension of an AD, the algorithms themselves do not suffice.  

This research proposes a new way to represent algorithmic descriptions that aids the development and understanding of AD projects. Under the name of Algorithmic Representation Space (ARS), this concept encompasses not only the algorithm but also its outcomes and the mechanisms that allow for the understanding of the design space it represents. AD descriptions stand to benefit significantly from the concreteness of the outputs they generate, i.e., the digital models. If we consider the models as part of the AD representation, we reduce its level of abstraction and increase its understandability, approximating it to the visual nature of human understanding. Nevertheless, we must also smooth its integration into more traditional design workflows, helping architects who still develop their models manually in digital design tools or are forced to use pre-existing models. Accordingly, the proposed ARS also enables the use of already existing digital models as starting points to arrive at an algorithmic description. 

There are two core elements in the ARS (Figure 2): the algorithm and the model. The algorithm represents a design space in a parametric, abstract way, which makes the multiple design alternatives it represents difficult to perceive. Contrastingly, each model represents an instance of a design space in a static but concrete way. Combining the former’s flexibility with the latter’s perceptibility is therefore critical for the success of algorithmic representation. For conceptual reasons, the illustration presented places the two elements of the ARS on the same level. Nevertheless, one must keep in mind that the algorithm can generate potentially infinite digital models, and the concept holds for all of them.  

We consider two entry points into the ARS: programming and modelling. Each allows architects to traverse the ARS; in the former case, from algorithm to model, by running the instructions in the algorithm to generate a model; and in the latter, from model to algorithm, by extracting an algorithmic description capable of generating the design instance and then refactoring that description to make it parametric as well. In either case, it is important that the ARS supports the visualisation of these algorithm-model relationships. Therefore, we propose including techniques such as traceability in any ARS. In the following sections, we will use a case study, the Reggio Emilia Train Station by Santiago Calatrava, to illustrate the ARS and each of the proposed principles. 

Figure 2 – Building blocks of the ARS. 

Programming 

The typical AD process entails the creation of a parametric description that abstractly defines a design space according to the boundaries set by the architect (Figure 3). The parametricity of this description, or the size of the design space it represents, varies greatly with the design intent and the way it is implemented (e.g., degrees of freedom, rules, and constraints). By instantiating the parameters in the algorithm, the architect specifies instances of the design space, whose visualisation can be achieved by generating them in a digital design tool, such as a CAD, BIM, or game engine (Figure 3 – running the algorithm). Figure 4 presents several variations of the Reggio Emilia station achieved by running the corresponding AD description with varying input parameters, namely with a different number of beams, different beam sizes, and different amplitudes and phases of the sinusoidal movement. 
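The core of such a description can be sketched in a few lines. The following Python fragment is a deliberately simplified, hypothetical stand-in for the station's AD description (the function name, the regular spacing rule, and the sinusoid's frequency are assumptions for illustration, not Calatrava's actual geometry); it shows how a single parametric definition yields many design instances:

```python
import math

def station_roof(n_beams, beam_length, amplitude, phase):
    """Generate one instance of a parametric roof: each beam is a pair of
    endpoints, with the far endpoint's height following a sinusoid."""
    beams = []
    for i in range(n_beams):
        x = i * 1.0                                 # regular spacing along the roof axis
        z = amplitude * math.sin(i * 0.5 + phase)   # sinusoidal undulation of beam tips
        beams.append(((x, 0.0, 0.0), (x, beam_length, z)))
    return beams

# Running the algorithm with different parameters yields different
# instances of the same design space:
variant_a = station_roof(8, 10.0, 2.0, 0.0)
variant_b = station_roof(12, 8.0, 1.5, math.pi / 2)
```

Changing `n_beams`, `beam_length`, `amplitude`, or `phase` traverses the design space without touching the definition itself, which is precisely the flexibility described above.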

Given the flexibility of this approach, the process of developing AD descriptions tends to be a very dynamic one, with the architect repeatedly generating instances of the design to assess the impact of the changes made at each stage. Consciously or not, architects already work in a bidirectional iterative way when using AD. However, this workflow can also greatly benefit from a more obvious showcasing of the existing relations between algorithm and model. Traceability mechanisms allow precisely for the visual disclosure of these relations (i.e., which instruction/component generated which geometry), and several AD tools support them already. 

Figure 3 – Entering the ARS by programming. 
Figure 4 – Parametric variations of the Reggio Emilia station, with different numbers and sizes of beams, and different amplitudes and signs of the sinusoidal movement. 

Creating Models 

AD is not meant to replace other design approaches but, instead, to interoperate with them. This interoperability is important to take advantage of the investment made in well-established representation methods such as CAD and BIM, especially for projects where digital models already exist or are still being produced. Therefore, the second entry point to the ARS is the conversion of an existing digital model of a design into an AD program. This might be necessary, for instance, when we wish to optimise a design for new uses and/or to comply with new standards [19]. This process entails crossing the ARS in the opposite direction to that described in the previous section (Figure 5). 

To convert a digital model into an AD description, there are two main steps: extraction and refactoring. Extraction entails the automatic generation of instructions that can reproduce an exact copy of the model being extracted. The resulting AD description, however, is non-parametric and difficult to comprehend. This is where refactoring comes in [20,21]: a technique that helps to improve the AD description, increasing its readability and parametricity. While the first task can be almost entirely automated, and is currently partially supported by some AD tools, the second depends heavily on the architect’s design intent and, thus, will always be a joint effort between human and machine. In either case, it is important that the ARS adapts to the multiplicity of digital design tools and representation systems that architects often use during their design process. They can use, for instance, 3D modelling tools, such as CADs or game engines, to geometrically explore their designs more freely, or BIM tools to enrich the designs with construction information and to produce technical documentation.  
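As an illustration of the extraction step, the following sketch (using a hypothetical, minimal model schema invented for this example) shows why the raw result is faithful but unhelpful: every object read from the model becomes one literal instruction, with no parameters and no structure:

```python
# Each record mimics an object read from a digital model (hypothetical schema).
model = [
    {"type": "beam", "p1": (0, 0, 0), "p2": (0, 10, 0.0)},
    {"type": "beam", "p1": (1, 0, 0), "p2": (1, 10, 0.5)},
    {"type": "beam", "p1": (2, 0, 0), "p2": (2, 10, 1.0)},
]

def extract(model):
    """Emit one literal instruction per model object. The result replays
    the model exactly but carries no parameters or higher-level intent."""
    return [f"beam(p1={obj['p1']}, p2={obj['p2']})" for obj in model]

program = extract(model)
# program is a flat, non-parametric instruction list:
#   beam(p1=(0, 0, 0), p2=(0, 10, 0.0))
#   beam(p1=(1, 0, 0), p2=(1, 10, 0.5))
#   ...
```

Turning this flat list into a readable, parametric description is exactly the job left to refactoring.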

Figure 5 – Entering the ARS through modelling. 

Navigating the ARS 

As mentioned in the previous section, there are two main elements in the ARS: algorithms abstractly describing design spaces and digital models representing concrete instances of those design spaces. Either one can be accessed from either end of the spectrum, i.e., by programming and running the algorithm to generate digital models, or by manually modelling designs and then converting them into an algorithm. To allow for this bidirectionality between the two sides, the ARS relies on three main mechanisms: (a) traceability, (b) extraction, and (c) refactoring. The first allows the system to expose the existing relationships between algorithm and model in a visual and interactive way for a better comprehension of the design intent. The latter two allow us to traverse the ARS from model to algorithm, a less common crossing but an essential one, nevertheless. The following sections describe these three mechanisms in detail. 

Traceability 

For a proper comprehension of ADs, architects must construct a mental model of the design space, understanding the impact each part of the algorithm has on each instance of the design space. To that end, a correlation must be ever present between the two core elements of the ARS – algorithm and model – matching the abstract representation with its concrete realisation. Traceability establishes relationships between the instructions that compose the algorithm and the corresponding geometries in the digital model. This is particularly relevant when dealing with complex designs, as it allows architects to understand which parts of the algorithm are responsible for generating which parts of the model.  

With traceability, users can select parts of the algorithm or parts of the model and see the corresponding parts highlighted at the other end. Grasshopper for Rhinoceros 3D and Dynamo for Revit, two visual AD tools, offer unidirectional traceability mechanisms from the algorithm to the model. Figure 6 shows this feature at play in Grasshopper: users select any component on the canvas and the corresponding geometry is highlighted in the visualised model. 
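A minimal sketch of the bookkeeping behind such a mechanism, assuming a hypothetical API in which every geometry-creating call records the instruction that triggered it, might look as follows:

```python
class TraceableModel:
    """Record which instruction created which geometry, so a selection in
    either representation can highlight the other (bidirectional lookup)."""

    def __init__(self):
        self.instr_to_geo = {}   # instruction id -> list of geometry ids
        self.geo_to_instr = {}   # geometry id -> instruction id
        self._next_geo = 0

    def create(self, instr_id, shape):
        """Register a geometry and remember the instruction that made it."""
        geo_id = self._next_geo
        self._next_geo += 1
        self.instr_to_geo.setdefault(instr_id, []).append(geo_id)
        self.geo_to_instr[geo_id] = instr_id
        return geo_id

model = TraceableModel()
for i in range(3):                      # one instruction executed three times
    model.create("loop:beam", f"beam_{i}")

model.instr_to_geo["loop:beam"]   # algorithm-to-model trace: [0, 1, 2]
model.geo_to_instr[2]             # model-to-algorithm trace: "loop:beam"
```

Note that the map must be updated on every geometry-creating call, which is one reason traceability carries the computational overhead discussed next.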

Figure 6 – Traceability in visual AD tools – the case of Grasshopper. 

Regarding bidirectional traceability, there are already visual AD tools that support it, such as Dassault Systèmes’ xGenerative Design tool (xGen) for Catia and Bentley’s Generative Components, as well as textual AD tools, such as Rosetta [22], Luna Moth [23], and Khepri [24]. Figure 7 shows the example of Khepri, where the user selects either instructions in the algorithm or objects in the model and the corresponding part is highlighted in the model or algorithm, respectively. Programming In the Model (PIM) [25], a hybrid programming tool, offers traceability between the three existing interactive windows: one showing the model, another the visual AD description, and a third showing the equivalent textual AD description. 

Unfortunately, traceability is a computationally intensive feature that hinders a tool’s performance with complex AD programs – especially model-to-algorithm traceability, which explains why some commercial visual-based AD tools avoid it. Those that provide it inevitably experience a decrease in performance as the model grows. All the text-based and hybrid options referred to above are academic works, built and maintained as proofs of concept rather than commercial tools, which explains their acceptance of the imposed trade-offs. A possible solution to this problem is to allow architects to decide when to use this feature and only switch it on when the support provided compensates for the computational overhead [26]. In fact, traceability-on-demand is Khepri’s current approach to the problem. 

Figure 7 – Traceability in textual AD tools – the case of Khepri. 

Extraction 

Extraction is the automatic conversion of a digital model into an algorithm that can faithfully replicate it. Previous studies [27,28] focused on the generation of 3D models from architectural plans or on the conversion of CAD to BIM models, using heuristics and the manipulation of geometric relations. Sadly, the result is not an AD description, but rather another model, albeit a more complex and/or informed one. One promising line of research is the use of probabilistic and neural-network-based machine learning techniques (e.g., convolutional or recurrent neural networks) that address the translation of images into textual descriptions [29], but further research is needed to generate algorithmic descriptions. 

The main problems with extracting a parametric algorithm lie, first, in the assumptions the system would need to make while reading a finished model: for instance, distinguishing whether two adjacent volumes are connected by chance or intentionally and, if the latter, deciding if such connection should constitute a parametric restriction of that model or not. Secondly, it is nearly impossible to devise a system that can consider the myriad of possible geometrical entities and semantics available in architectural modelling tools. 

Some modelling tools that favour the visual programming (VP) paradigm avoid this problem by placing the responsibility on the designer from the very start, restricting the modelling workflow and forcing the designer to provide the missing information. In xGen and Generative Components, the 3D model and the visual algorithm are kept in sync, meaning changes made in either one are reflected in the other. PIM presents a similar approach, extending the conversion to the textual paradigm as well, although it was only tested with simple 2D examples.  

In practice, these tools offer real-time conversion from the model to the algorithm. However, either solution requires the model to be parametric from the start. Every modelling operation available in these tools has a pre-set correspondence to a visual component, and designers must build their models following the structured parametric approach imposed by each tool, almost as if they were in fact constructing an algorithm but using a modelling interface. As such, the system is gathering the information it needs to build parametric relations from the very beginning. This explains why neither xGen, nor Generative Components, nor PIM, can take an existing model created in another modelling software or following other modelling rules and extract an algorithmic description from it. 

This problem has also been addressed in the textual programming (TP) field, and promising results have been achieved in the conversion of bi-dimensional shapes into algorithms [24,30]. However, further work is required to recognise 3D shapes, namely 3D shapes of varying semantics, since architects can use a myriad of digital design tools to produce their models, such as CADs, BIMs, or game engines. Figure 8 presents an ideal scenario, where the ARS is able to extract an algorithm that can generate a model identical to the one being extracted. 

In any case, even if the extraction of the most common 3D elements is achieved soon, the resulting algorithm will only accurately represent the extracted model, and it will comprise a low-level program that is very hard for humans to understand. To make the algorithm both understandable and parametric, it needs to be further transformed according to the design intent envisioned by the architect. Increasing the algorithm’s comprehensibility and the design space it represents is the goal of refactoring. 

Figure 8 – Extraction process – on the left the digital model, and on the right the sequence of instructions resulting from the extraction process. 

Refactoring 

Refactoring (or restructuring) is commonly defined as the process of improving the structure of an existing program without changing its semantics or external behaviour [20]. There are already several semi-automatic refactoring tools [21] that help to improve the readability and maintenance of algorithmic descriptions and increase their efficiency and abstraction level. Refactoring is an essential follow-up to an extraction process, since the latter returns a non-parametric algorithm that is difficult to decipher. 

Figure 9 shows an example of a refactoring process that could take place with the algorithm extracted in Figure 8. The extracted algorithm contains numerous instructions, each responsible for generating a beam between two spatial locations defined by XYZ coordinates. It is not difficult to infer the linear variations presented in the first and fourth highlighted columns, which correspond to the points’ X values. To infer the sinusoidal variation in the remaining values, however, more complex curve-fitting methods would have to be implemented [31]. 
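For the linear case, the inference is straightforward. The sketch below (the helper name is hypothetical; the sinusoidal case would require curve-fitting methods such as those referenced above and is omitted) detects an arithmetic progression in a column of extracted literals, so that a refactoring tool can propose a loop in its place:

```python
def infer_linear(values, tol=1e-9):
    """Detect an arithmetic progression in a column of extracted literals.
    Returns (start, step) if the values vary linearly, else None."""
    if len(values) < 2:
        return None
    step = values[1] - values[0]
    for a, b in zip(values, values[1:]):
        if abs((b - a) - step) > tol:   # any deviation breaks linearity
            return None
    return values[0], step

# X coordinates read from the extracted, literal-laden instructions:
xs = [0.0, 2.5, 5.0, 7.5, 10.0]
start, step = infer_linear(xs)   # -> (0.0, 2.5)

# A refactoring tool could then propose replacing the repeated literals
# with a loop over the inferred progression:
proposal = f"for i in range({len(xs)}): beam(x={start} + i * {step}, ...)"
```

The tool can only propose this rewrite; as noted below, whether the progression becomes a fixed rule or a free parameter remains the architect's decision.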

In either case, refactoring tools seldom work alone, meaning that a lot of user input is required. This is because there is rarely a single correct way of structuring algorithms, and the user must choose which methods to implement in each case. Refactoring tools, beyond providing suggestions, guarantee that the replacements are made seamlessly and do not change the algorithm’s behaviour. When trying to increase parametric potential, even more input is required, since it is the architect who must decide the degrees of freedom shaping the design space. 

In our example (Figure 9), the refactored algorithm shown below has a better structure and readability but is still at an early stage of parametricity. As a next step, we could replace the numerical values proposed by the refactoring tool with variable parameters to allow for more variations of the sinusoidal movement. 

Discussion and Conclusion 

Architecture is an ancient profession, and the means used to produce architectural entities have constantly changed, not only integrating the latest technological developments but also responding to new design trends and representation needs. Architects have long adopted new techniques to improve the way they represent designs. While for centuries this caused only gradual changes in architectural design practice, the accelerated technological development witnessed since the 1960s has made these changes far more evident. The emergence of personal computers, followed by the massification of Computer-Aided Design (CAD) and Building Information Modelling (BIM) tools, allowed architects to automate their previously paper-based design processes [1], shaping the way they approached design issues [32]. However, these tools did little to change the way designs were represented, only making their production more efficient. This scenario soon evolved with the emergence of more powerful computational design paradigms, such as Algorithmic Design (AD). Despite being more abstract and thus less intuitive, this design representation method is more flexible and empowers architects’ creative processes. 

Given its advantages for architectural design practice, AD should be a complement to the current means of representation. However, to make AD more appealing for a wider audience and allow architects to make the most of it, we must lower the existing barriers by approximating AD to the visual and concrete nature of architectural thinking. To that end, we proposed the Algorithmic Representation Space (ARS), a representation approach that aims to replace the current one-directional conception of AD (going from algorithms to digital models) with a bidirectional one that additionally allows architects to arrive at algorithms starting from digital models. Furthermore, the ARS encompasses as means of representation not only the algorithmic description but also the digital model that results from it, as well as the mechanisms that aid the comprehension of the design space it represents.  

Figure 9 – Refactoring process – the sequence of extracted instructions (top) is converted into a more comprehensible and parametric algorithm (bottom). 

The proposed system is based on two fundamental elements – the algorithm and the digital model – and architects have two ways of arriving at them – programming and modelling. Considering the first case, programming, the ARS supports the development of algorithms and the subsequent visualisation of the design instances they represent by running the algorithm with different parameters. In the second case, modelling, the ARS supports the conversion of digital models into algorithms that reproduce them. The first scenario allows AD representations to benefit from the visual nature of digital design tools, reducing the innate abstraction of algorithms and obtaining concrete instances of the design space that are more perceptible to the human mind. The second case enables the conversion of a concrete representation of a design instance into an abstract representation of a design space, i.e., a parametric description that can generate possible variations of the original design, benefiting from algorithmic flexibility and expressiveness in future design tasks.  

To allow for this bidirectionality, the ARS relies on three main mechanisms: (a) traceability, (b) extraction, and (c) refactoring. Traceability addresses the non-visual nature of the first process – programming – by displaying the relationships between the algorithm and the digital model. Extraction and refactoring address the complexity of the second process – going from model to algorithm – the former entailing the extraction of the algorithmic instructions that, when executed, generate the original design solution, and the latter solving the lack of parametricity and perceptibility of the extracted algorithms by helping architects restructure them. The result is a new representation paradigm with enough (1) expressiveness to successfully represent architectural design problems of varying complexities; (2) flexibility to parametrically manipulate the resulting representations; and (3) concreteness to easily and quickly comprehend the design space embraced.  

The proposed ARS intends to motivate a more widespread adoption of AD representation methods. However, it is currently only a theoretical outline. To reach its goal, the proposed system must gain a practical character. As future work, we will focus on applying and evaluating the ARS in large-scale design scenarios, while retrieving user feedback from the experience. 

Acknowledgments 

This work was supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) (references UIDB/50021/2020, PTDC/ART-DAQ/31061/2017) and PhD grants under contract of FCT (grant numbers SFRH/BD/128628/2017, DFA/BD/4682/2020). 

References 

[1] S. Abubakar and M. Mohammed Halilu, “Digital Revolution and Architecture: Going Beyond Computer-Aided Architecture (CAD)”. In Proceedings of the Association of Architectural Educators in Nigeria (AARCHES) Conference (2012), 1–19.  

[2] R. Oxman, “Thinking difference: Theories and models of parametric design thinking”. Design Studies (2017), 1–36. DOI:http://doi.org/10.1016/j.destud.2017.06.001 

[3] K. Terzidis, “Algorithmic Design: A Paradigm Shift in Architecture?” In Proceedings of the 22nd Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Copenhagen, Denmark (2004), 201–207. 

[4] I. Caetano, L. Santos, and A. Leitão, “Computational design in architecture: Defining parametric, generative, and algorithmic design.” Frontiers of Architectural Research 9, 2 (2020), 287–300. DOI:https://doi.org/10.1016/j.foar.2019.12.008 

[5] P. Janssen, “Visual Dataflow Modelling: Some thoughts on complexity”. In Proceedings of the 32nd Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Newcastle upon Tyne, UK (2014), 305–314. 

[6] E. Lee and D. Messerschmitt, “Synchronous data flow”. Proceedings of the IEEE 75, 9 (1987), 1235–1245. DOI:https://doi.org/10.1109/PROC.1987.13876 

[7] D. Davis, “Modelled on Software Engineering: Flexible Parametric Models in the Practice of Architecture”. PhD Dissertation, RMIT University (2013). 

[8] A. Leitão and L. Santos, “Programming Languages for Generative Design: Visual or Textual?” In Proceedings of the 29th Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Ljubljana, Slovenia (2011), 139–162. 

[9] M. Zboinska, “Hybrid CAD/E Platform Supporting Exploratory Architectural Design”. Computer-Aided Design 59 (2015), 64–84. DOI:https://doi.org/10.1016/j.cad.2014.08.029 

[10] D. Rauch, P. Rein, S. Ramson, J. Lincke, and R. Hirschfeld, “Babylonian-style Programming: Design and Implementation of an Integration of Live Examples into General-purpose Source Code”. The Art, Science, and Engineering of Programming, 3, 3 (2019), 9:1-9:39. DOI:https://doi.org/10.22152/programming-journal.org/2019/3/9 

[11] H. Abelson, G.J. Sussman, and J. Sussman, Structure and Interpretation of Computer Programs (1st ed. 1985; Cambridge, Massachusetts, and London, England: MIT Press, 1996). DOI:https://doi.org/10.1109/TASE.2008.40 

[12] B. Cantrell and A. Mekies (Eds.), Codify: Parametric and Computational Design in Landscape Architecture. (Routledge, 2018). DOI:https://doi.org/10.1017/CBO9781107415324.004 

[13] A. Al-Attili and M. Androulaki, “Architectural abstraction and representation”. In Proceedings of the 4th International Conference of the Arab Society for Computer Aided Architectural Design, Manama (Kingdom of Bahrain) (2009), 305–321. 

[14] M. Vitruvius, The Ten Books on Architecture. (Cambridge & London, UK: Harvard University Press & Oxford University Press, 1914). 

[15] K. Zhang, Visual languages and applications. (Springer Science + Business Media, 2007). 

[16] N. Shu, “Visual Programming Languages: A Perspective and a Dimensional Analysis”. In Visual Languages. Management and Information Systems, S.K. Chang, T. Ichikawa and P.A. Ligomenides (eds.). (Boston, MA: Springer, 1986). DOI: https://doi.org/10.1007/978-1-4613-1805-7_2 

[17] E. Do and M. Gross, “Thinking with Diagrams in Architectural Design”. Artificial Intelligence Review. 15, 1 (2001), 135–149. DOI:https://doi.org/10.1023/A:1006661524497 

[18] M. Carpo, The Alphabet and the Algorithm. (Cambridge, Massachusetts: MIT Press, 2011). 

[19] I. Caetano, G. Ilunga, C. Belém, R. Aguiar, S. Feist, F. Bastos, and A. Leitão, “Case Studies on the Integration of Algorithmic Design Processes in Traditional Design Workflows”. In Proceedings of the 23rd International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Hong Kong (2018), 129–138. 

[20] M. Fowler, Refactoring: Improving the Design of Existing Code. (Reading, Massachusetts: Addison-Wesley Longman, 1999). 

[21] T. Mens and T. Tourwe, “A survey of software refactoring”. IEEE Transactions on Software Engineering. 30, 2 (2004), 126–139. DOI:https://doi.org/10.1109/TSE.2004.1265817 

[22] A. Leitão, J. Lopes, and L. Santos, “Illustrated Programming”. In Proceedings of the 34th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Los Angeles, California, USA (2014), 291–300.  

[23] P. Alfaiate, I. Caetano, and A. Leitão, “Luna Moth Supporting Creativity in the Cloud”. In Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Cambridge, MA (2017), 72–81. 

[24] M. Sammer, A. Leitão, and I. Caetano, “From Visual Input to Visual Output in Textual Programming”. In Proceedings of the 24th International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Wellington, New Zealand (2019), 645–654. 

[25] M. Maleki and R. Woodbury, “Programming in the Model: A new scripting interface for parametric CAD systems”. In Proceedings of the Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Cambridge, Canada (2013), 191–198. 

[26] R. Castelo-Branco, A. Leitão, and C. Brás, “Program Comprehension for Live Algorithmic Design in Virtual Reality”. In Companion Proceedings of the 4th International Conference on the Art, Science, and Engineering of Programming (<Programming’20> Companion), ACM, New York, NY, USA, Porto, Portugal, (2020), 69–76. DOI:https://doi.org/10.1145/3397537.3398475 

[27] L. Gimenez, J. Hippolyte, S. Robert, F. Suard, and K. Zreik, “Review: Reconstruction of 3D building information models from 2D scanned plans”. Journal of Building Engineering 2, (2015), 24–35. DOI:https://doi.org/10.1016/j.jobe.2015.04.002 

[28] P. Janssen, K. Chen, and A. Mohanty, “Automated Generation of BIM Models”. In Proceedings of the 34th Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Oulu, Finland, (2016) 583–590. 

[29] J. Donahue, L. Hendricks, M. Rohrbach, S. Venugopalan, S. Guadarrama, K. Saenko, and T. Darrell, “Long-Term Recurrent Convolutional Networks for Visual Recognition and Description”. IEEE Transactions on Pattern Analysis and Machine Intelligence. 39, 4 (2017), 677–691. DOI:https://doi.org/10.1109/TPAMI.2016.2599174 

[30] A. Leitão and S. Garcia, “Reverse Algorithmic Design”. In Proceedings of the Design Computing and Cognition (DCC’20) Conference, Atlanta, Georgia, USA (2021), 317–328. DOI: https://doi.org/10.1007/978-3-030-90625-2_18 

[31] P. Mogensen and A. Riseth, “Optim: A mathematical optimization package for Julia”. Journal of Open Source Software. 3, 24 (2018), 615. DOI:https://doi.org/10.21105/joss.00615 

[32] T. Kotnik, “Digital Architectural Design as Exploration of Computable Functions”. International Journal of Architectural Computing 8, 1 (2010), 1–16. DOI:https://doi.org/10.1260/1478-0771.8.1.1 

Figure 5 Fun Palace in London before Demolition [61] 
Architectural Authorship in “the Last Mile”
Architectural Authorship, automation, digitalisation, Fun Palace, Leon Battista Alberti, mass-customisation, the Last Mile
Yixuan Chen

y.chen.20@alumni.ucl.ac.uk

Introduction 

A loyal companion to the breakthroughs of artificial intelligence is the fear of losing jobs due to a robotic takeover of the labour market. Mary L. Gray and Siddharth Suri’s research on ghost work unveiled another possible future, in which a “last mile” requiring human intervention would always exist on the journey towards automation. [1] The so-called “paradox of the last mile” has exerted its influence on the human labour market throughout the industrial age, repeatedly reorganising itself as it absorbed marginalised groups into its territory. These groups range from child labourers in factories, to the “human computer” women of NASA, to on-demand workers from Amazon Mechanical Turk (MTurk). [2] Yet their strenuous efforts are often rendered invisible behind the ostensibly neutral algorithmic form of the automation process, creating “ghost work”. [3] 

Based on this concept of “the last mile”, this study intends to excavate how its paradox has influenced architectural authorship, especially during architecture’s encounters with digital revolutions. I will firstly contextualise “architectural authorship” and “the last mile” in previous studies. Then I will discuss the (dis)entanglements between “automation” and “digitalisation”. Following Antoine Picon and Nicholas Negroponte, I distinguish between the pre-information age, information age and post-information age before locating my arguments according to these three periods. Accordingly, I will study how Leon Battista Alberti, the Fun Palace, and mass-customised houses fail in the last mile of architectural digitalisation and how these failures affect architectural authorship. From these case studies, I challenge the dominant narrative of architectural authorship, either as divinity or total dissolution. In the end, I contend that it is imperative to conceive architectural authorship as relational and call for the involvement of multi-faceted agents in this post-information age. 

Academic Context 

Architectural Authorship in the Digital Age 

The emergence of architects’ authorial status can be dated back to Alberti’s De re aedificatoria, which states that “the author’s original intentions” should be sustained throughout construction. [4] Yet at the same time, those architects should keep a distance from the construction process. [5] It not only marks the shift from the artisanal authorship of craftsmen to the intellectual authorship of architects but also begets the divide between the authorship of architectural designs and architectural end products. [6] However, this tradition can be problematic in the digital age, when multi-layered authorship becomes feasible with the advent of mass-collaboration software and digital customisation technologies. [7] 

Based on this, Antoine Picon has argued that, despite attempts to include various actors via collaborative platforms such as BIM, architects have entered a Darwinian world of competition with engineers, constructors and existing monopolies to maintain their prerogative authorship over the profession. [8] These challenges have shifted attention within the profession from authorship as architects to ownership as entrepreneurs. [9] Yuan and Wang, on the other hand, call for a reconciliation of architectural authorship between regional traditions and technologies from a pragmatic perspective. [10] However, these accounts remain bound to positioning architects at the centre of analysis. In the following article, I introduce “the last mile”, a theory from the field of automation, to provide another perspective on the issues of architectural authorship. 

“The Last Mile” as Method 

The meaning of “the last mile” has changed several times throughout history. Metaphorically, it has been used to indicate the distance between the status quo and a goal in various fields, such as film, legal negotiations, and presidential campaigns. [11] It was first introduced in the technology industry as “the last mile” of telecommunication, one of the earliest traceable records of which dates to the late 1980s. [12] “The last mile” of logistics then came into wide use in the early 2000s, following the dot-com boom of the late 90s that fuelled discussions of B2C eCommerce. [13] In this article, however, I will use “the last mile” of automation, a concept from the “AI revolution” underway since 2010, to reconsider architectural authorship. [14] In this context, “the last mile” of automation refers to “the gap between what a person can do and what a computer can do”, as Gray and Suri define it in their book. [15] 

I employ this theory to discuss architectural authorship for two purposes.  

1. Understanding the paradox of automation helps us understand how architectural authorship changes along with technological advancements. Pasquinelli and Joler suggest that “automation is a myth”, because machines have never entirely operated by themselves without human assistance, and might never do so. [16] From this arises the paradox that “the desire to eliminate human labour always generates new tasks for humans”, a shortcoming “stretched across the industrial era”. [17] Although confined within the architectural profession, architectural authorship is likewise subject to change in parallel with these alterations of labour tasks. 

2. I contend that changes in the denotations of “the last mile” signal turning points in both digital and architectural history. As Figure 1 suggests, in digital history the implication of the last mile has changed from the transmission of data to the analysis of data, and then to automation based on data. The former change was in step with the arrival of the small-data environment in the 1990s, and the latter corresponds with the leap towards the big-data environment around 2010. [18] In a similar fashion, after personal computers became increasingly available in the 90s, the digital spline in architecture found formal expression, and from around 2010 onwards a spirit of interactivity and mass-collaboration began to take root in the design profession. [19] Therefore, revisiting the digital history of architecture from the angle of “the last mile” can not only provide alternative readings of architectural authorship in the past, but can also be indicative of how the future might be influenced. 

Figure 1 Changes of Meanings for “the Last Mile” in Digital History, and Digital Turns in Architectural History. 

Between Automation and Digitalisation 

Before elucidating how architectural authorship was changed by the arrival of the automated/digital age, it is imperative to distinguish two concepts mentioned in the previous section – automation and digitalisation. To begin with, although “automation” first came into use in the automotive industry in 1936, to describe “the automatic handling of parts”, what the phrase alludes to has long been rooted in history. [20] As Ekbia and Nardi define it, automation essentially relates to labour-saving mechanisms that reduce the human burden by transferring it to machines in labour-requiring tasks, both manual and cognitive. [21] Despite this long history, it was not until the emergence of digital computers after WWII that its meaning became widely applicable. [22] The notion of computerised automation was put forward by the computer scientist Michael Dertouzos in 1979, highlighting its potential for tailoring products on demand. [23] With respect to cognitive tasks, artificial intelligence that mimics human thinking is employed to tackle functions concerning “data processing, decision making, and organizational management”. [24] 

Digitalisation, on the other hand, is a more recent concept engendered by the society of information in the late 19th century, according to Antoine Picon. [25] This period was later referred to as the Second Industrial Revolution, when mass-production was made possible by a series of innovations, including electrical power, automobiles, and the internal combustion engine. It triggered what Beniger called the “control revolution” – the volume of data exploded to the degree that it begot revolutions in information technology. [26] Crucial to this revolution was the invention of digital computing, which brought about a paradigm shift in the information society. [27] It has changed “the DNA of information” in the sense that, as Nicholas Negroponte suggests, “all media has become digital”, by converting information from atoms to bits. [28] In this sense, Negroponte distinguishes between the information age, which is based on economics of scale, and the post-information age, founded on personalisation. [29] 

It can be observed that automation and digitalisation are intertwined in multiple ways. Firstly, had there been no advancement in automation during the Second Industrial Revolution, there would have been no need to develop information technology, as data would have remained at a manageable level. Secondly, the advent of digital computers has further intermingled these two concepts, to the extent that, in numerous cases, for something to be automated it needs first to be digitalised, and vice versa. In the architectural field alone, examples of this can be found in cybernetics in architecture and planning, digital fabrication, smart materials, and so on. Hence, although these two terms are fundamentally different – most obviously, automation is affiliated with the process of input and output, while digitalisation relates to information media – the following analysis makes no attempt to differentiate between the two. Instead, I discuss “the last mile” in the context of the reciprocity between these two concepts. After all, architecture itself is at the convergence point between material objects and media technologies. [30] 

Leon Battista Alberti: Before the Information Age 

Digitalisation efforts made by architects, however, appeared to come earlier than such attempts made in industrial settings of the late 19th century. This spirit can be traced back to Alberti’s insistence on identicality during information transmission, by compressing two-dimensional and three-dimensional information into digits – which is exemplified by Descriptio Urbis Romae and De statua. [31] In terms of architecture, as mentioned previously, he positions built architecture as an exact copy of architects’ intention. [32] This stance might be influenced by his views on painting. First, he maintains that all arts, including architecture, are subordinate to paintings, where “the architraves, the capitals, the bases, the columns, the pediments, and all other similar ornaments” came from. [33] Second, in his accounts, “the point is a sign” that can be seen by eyes, the line is joined by points, and the surface by lines. [34] As a result, the link between signs and architecture is established through paintings since architecture is derived from paintings and paintings from points/signs.  

Furthermore, architecture can also be built according to the given signs. In Alberti’s words, “the whole art of buildings consists in the design (lineamenti), and in the structure”, and by lineamenti, he means the ability of architects to find “proper places, determinate numbers, just proportion and beautiful order” for their constructions. [35] It can be assumed that, if buildings are to be identical to their design, then, to begin with, there must be “determinate numbers” to convey architects’ visions by digital means – such as De statua (Fig. 2). Also, in translating the design into buildings, these numbers and proportions should be unbothered by any distortions as they are placed in actual places – places studied and measured by digital means, just like Descriptio Urbis Romae (Fig. 2). 

Although the Albertian design process reflects the spirit of the mechanical age, insisting on the identicality of production, it can be argued that his pursuit of precise copying was also influenced by his pre-modern digital inventions being used to manage data. [36] Therefore, what signs/points mean to architecture for Alberti can be compared to what bits mean to information for Negroponte, as the latter is composed of the former and can be retrieved from the former. Ideally, this translation process can be achieved by means of digitalisation. 

Figure 2 Descriptio Urbis Romae (Left) and De statua (Right) [37] 

Yet it is obvious that the last mile for Alberti was vastly longer than that for Negroponte. As Giorgio Vasari noted in the case of the Servite Church of the Annunziata, while Alberti’s drawings and models were employed for the construction of the rotunda, the result turned out to be unsatisfactory, with the arches of the nine chapels falling backwards from the tribune owing to construction difficulties. [38] Likewise, in the loggia of the Via della Vigna Nuova, his initial plan to build semi-circular vaults was aborted because the shape could not be realised on-site. [39] These two cases suggest that the allographic design process – employing precise measurement and construction – which heralded modern digital modelling software and 3D-printing technologies, was deeply problematic in Alberti’s time. 

This problem was recognised by Alberti himself in his De re aedificatoria, when he wrote that to be “a wise man”, one cannot stop in the middle or at the end of one’s work and say, “I wish that were otherwise”. [40] In Alberti’s opinion, this problem can be offset by making “real models of wood and other substances”, as well as by following his instruction to “examine and compute the particulars and sum of your future expense, the size, height, thickness, number”, and so on. [41] While models can be completed without being exactly precise, architectural drawings should achieve the exactness measured “by the real compartments founded upon reason”. [42] According to these descriptions, the design process conceived by Alberti can be summarised as Figure 3. 

Figure 3 Albertian Design Process 

If, as previously discussed, architecture and its context can be viewed as an assembly of points and signs, the Albertian design process can be compared to how these data are collected, analysed and judged until the process reaches the “good to print” point – the point when architects exit and construction begins. Nonetheless, what Vasari has unveiled is that the collection, analysis and execution of data can fail due to technological constraints, and this failure impedes architects from making a sensible judgement. Here, the so-called “technological constraints” are what I consider to be “the last mile” that can be found across the Albertian design process. As Vasari added, many of these technological limitations at that time were surmounted with the assistance of Salvestro Fancelli, who realised Alberti’s models and drawings, and a Florentine named Luca, who was responsible for the construction process. [43] Regardless of these efforts, Alberti remarked that only people involved in intellectual activities – especially mathematics and paintings – are architects; the opposite of craftsmen. [44] Subsequently, the challenges of confronting “the last mile” are removed from architects’ responsibilities through this ostensibly neutral design process, narrowing the scope of who is eligible to be called an architect. The marginalisation of artisanal activities, either those of model makers, draughtsmen or craftsmen, is consistent with attributing the laborious last mile of data collection, analysis and execution – measuring, model making, constructing – exclusively to their domain. 

While the division of labour is necessary for architecture, as John Ruskin argued, it would be “degraded and dishonourable” if manual work were less valued than intellectual work. [45] For this reason, Ruskin praised Gothic architecture with respect to the freedom granted to craftsmen to execute their own talents. [46] Such freedom, however, can be expected if the last mile is narrowed to the extent that, through digitalisation/automation, people can be at the same time both architects and craftsmen. Or can it? 

Fun Palace: At the Turn of the Information and Post-Information Age 

Whilst the Albertian allographic mode of designing architecture has exerted a profound impact on the architectural discipline, owing to subsequent changes to the ways architects have been trained, from the site to the academy, this ambition of separating design from buildings was not fulfilled, or even agreed upon among architects, in the second half of the 20th century. [47] Besides, the information age founded on scale had limited influence on architectural history, beyond bringing about a new functional area – the control room. [48] Architecture’s first encounters with the digital revolution after Alberti’s pre-modern technologies can be traced back to the 1960s, when architects envisaged futuristic cybernetic-oriented environments. [49] In contrast to Alberti’s emphasis on the identicality of information – the information per se – digitalisation and information in architecture this time conveyed a rather different message. 

Gordon Pask defined cybernetics as “the field concerned with information flows in all media, including biological, mechanical, and even cosmological systems”. [50] By emphasising the flow of data – rather than the information per se – cybernetics distinguishes itself in two respects. Firstly, it is characterised by attempts at reterritorialisation – it breaks down the boundaries between biological organisms and machines, between observers and systems, and between observers, systems and their environments, during its different developmental phases – categorised respectively as first-order cybernetics (1943-1960), second-order cybernetics (1960-1985) and third-order cybernetics (1985-1996). [51]  

Secondly, while data and information became secondary to their flow, catalysed by technologies and mixed realities, cybernetics is also typified by the construction of frameworks. [52] The so-called framework was initially perceived as a classifying system for all machines and later, once computers were made more widely available and powerful, came to be recognised as the computational process. [53] This thinking also leads to Stephen Wolfram’s assertion that the physical reality of the whole universe is generated by, and is itself, a computational process. [54] This is where the fundamental difference between the Albertian paradigm and cybernetics lies: the former is based on mathematical equations, while the latter attempts to understand the world as a framework/computation. [55] Briefly, in cybernetics theory, information per se is subordinate to the flow of information, and this flow can in turn be subsumed into the framework, later known as computational processes (Fig. 4). 

Figure 4 Information in Cybernetics Theory 

In Cedric Price’s Fun Palace, this hierarchical order resulted in what Isozaki described as “erasing architecture into system” after its partial completion (Fig. 5). [56] Such an erasure of architecture was rooted in the conceptual process, since the cybernetics expert in charge of the Fun Palace was Gordon Pask, who founded his theory and practice on second-order cybernetics. [57] This is especially so considering that one major feature of second-order cybernetics is what Maturana and Varela termed “allopoiesis” – a process of producing something other than the system’s original components – so it is understandable that if the system is architecture, then it would generate something other than architecture. [58] In the case of the Fun Palace, it was presupposed that architecture is capable of generating social activities, and that architects can become social controllers. [59] More importantly, Cedric Price rejected all that is “designed” and instead made only sketches of indistinct elements, diagrams of forces, and functional programs, rather than architectural details. [60] All these ideas, highlighting the potential of regarding architecture as a framework of computing – in contrast to seeing architecture as information – rendered the system more pronounced and set architecture aside. 

Figure 5 Fun Palace in London before Demolition [61] 

By rejecting architecture as pre-designed, Price and Littlewood strove to problematise the conventional paradigm of architectural authorship. They highlighted that the first and foremost quality of the space should be its informality, and that “with informality goes flexibility”. [62] This envisages user participation by rebuking fixed interventions by architects, such as permanent structures or anchored teak benches. [63] In this regard, flexibility is no longer positioned as a trait of buildings but of use, encouraging users to appropriate the space. [64] As a result, it delineates a scenario of “the death of the author”, in which buildings are no longer viewed as objects by architects but as bodily experiences by users – architectural authorship is shared between architects and users. [65] 

However, it would be questionable to claim the anonymity of architectural authorship – anonymous in the sense of “the death of the author” – based on an insignificant traditional architectural presence in this project, as Isozaki did. [66] To begin with, Isozaki himself has remarked that in its initial design, the Fun Palace would have been “bulky”, “heavy”, and “lacking in freedom”, indicating the deficiency of transportation and construction technologies at that time. [67] Apart from the last mile to construction, as Reyner Banham explained, if the Fun Palace’s vision of mass-participation is to be accomplished, three premises must be set – skilful technicians, computer technologies that ensure interactive experiences and programmable operations, and a secure source of electricity connecting to the state grid. [68] While the last two concerns are related to technological and infrastructural constraints, the need for technicians suggests that, despite its claim, this project is not a fully automated one. The necessary involvement of human factors to assist this supposedly automated machine can be further confirmed in Price and Littlewood’s accounts that “the movement of staff, piped services and escape routes” would be contained within “stanchions of the superstructure”. [69] Consequently, if architects can extend their authorship by translating elements of indeterminacy into architectural flexibility, and users can be involved by experiencing and appropriating the space, it would be problematic to leave the authorship of these technicians unacknowledged and confine them within service pipes. [70] 

The authorship of the Fun Palace is further complicated when the content of its program is scrutinised. Price and Littlewood envisaged that people’s activities would feed into the system, and that decisions would be made according to this information. [71] During this feed-in and feedback process, human activities would be quantified and registered in a flow chart (Fig. 6). [72] However, the hand-written proposed list of activities in Figure 6 shows that human engagement is inseparable from the ostensibly automated flow chart. The arrows and lines mask the human labour essential for observing, recognising, and classifying human activities. These tasks are the last mile of machine learning, which still requires heavy human participation even in the early 21st century. 

For instance, when, in 2007, the artificial intelligence project ImageNet was developed to recognise and identify the main object in pictures, developers found it impossible to increase the system’s accuracy by developing AI alone (and only assisting it when it failed). [73] Finally, they improved the accuracy of ImageNet’s algorithms by finding a “gold standard” of labelling the object – not from the developments of AI itself, but by using 49,000 on-demand workers from the online outsourcing platform MTurk to perform the labelling process. [74] This example suggests that if the automation promised by the Fun Palace is to be achieved, it is likely to require more than just the involvement of architects, users, and technicians. In the time of the Fun Palace’s original conception, the attempt was not fulfilled due to the impotence of computing technologies. Yet if such an attempt was to be made in the 2020s, it is likely that architectural authorship would be shared among architects, users, technicians, and ghost workers from platforms such as MTurk. 
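
The routing logic behind such human-in-the-loop pipelines can be sketched in a few lines: the machine keeps only the predictions whose confidence clears a threshold, and everything below it falls to a human worker, much as ImageNet’s labels fell to MTurk annotators. The following is a minimal, hypothetical sketch – the model, threshold and labels are all assumptions, not a reconstruction of ImageNet’s actual workflow.

```python
def route_labels(items, model, human_label, threshold=0.9):
    """Split a labelling task between machine and human: the 'last mile'
    below the confidence threshold is handed to human workers."""
    results, human_queue = {}, []
    for item in items:
        label, confidence = model(item)
        if confidence >= threshold:
            results[item] = (label, "machine")
        else:
            human_queue.append(item)  # the invisible 'ghost work' queue
    for item in human_queue:
        results[item] = (human_label(item), "human")
    return results
```

Raising the threshold reduces machine errors but lengthens the human queue – the trade-off that kept 49,000 workers busy behind one ostensibly automated system.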

Figure 6 Cybernetic Diagram (Left) and Proposed Activities (Right) [75] 

Returning to the topic of cybernetics, whilst cybernetic theories tend to redefine territories of the architectural system by including what was previously the other parts of the system – machines, observers, adaptive environments – the example of the Fun Palace has shown that this process of blurring boundaries would not be possible without human assistance, at least initially. The flow of information between these spheres would require human interventions to make this process feasible and comprehensible because, in essence, “the information source of machine learning (whatever its name: input data, training data or just data) is always a representation of human skills, activities and behaviours, social production at large”. [76] 

Houses of Mass-Customisation: In the Post-information Age 

Although cybernetics theories influenced architectural discourse, metaphorically or practically, in multiple ways – from Metabolism and Archigram to Negroponte and Cedric Price – this impact diminished after the 1970s, in parallel with the near-total banishment of cybernetics as an independent discipline in academia. [77] After a long hibernation during “the winter of artificial intelligence”, architecture’s next encounter with digital revolutions happened in the 1990s. [78] It was triggered by the increasing popularity and affordability of personal computers – contrary to the expectations of cybernetics engineers, who back in the 1960s dreamt that computers would increase both in power and in size. [79] These distinctive material conditions underlie the difference between the second-order cybernetics of the 1960s and architecture’s first digital turn in the 1990s. I contend that this distinction can be explained by comparing Turing’s universal machine with Deleuze’s notion of the “objectile”. 

As Stanley Mathews argued, the Fun Palace works in the same way as the universal machine. [80] The latter is a precursor of modern electronic computers, which can function as different devices – typewriters, drawing boards, or other machines – according to the different codes it receives (Fig. 7). [81] Comparatively, “objectile” connotes a situation in which a series of variant objects is produced based on their shared algorithms (Fig. 8). [82] These products are so-called “non-standard series”, whose key definition relates to their variance rather than their form. [83]  

Figure 7 Simplified Diagram of the Universal Machine 
Figure 8 Non-standard Production 

While the universal machine claims its universality at the cost of power – requiring, for its every change, an infinite one-dimensional tape on which programmers can mark symbols encoding any instruction – non-standard production can operate at a smaller scale and in less demanding environments. [84] The emphasis on variance in non-standard production also indicates a shift of attention from the “process” underscored by second-order cybernetics towards the products of a given parametric model. When the latter is applied to architecture, the physical building regains its significance as the variable product. 
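
The contrast can be made concrete: an “objectile” is not one form but one algorithm, each run of which, under different parameters, yields a distinct member of a non-standard series. The parametric profile below is purely illustrative – an assumed arched section, not any actual objectile from the literature.

```python
import math

def objectile(span, rise, waviness, samples=8):
    """One shared parametric model; every parameter set produces a
    variant profile, i.e. one member of a non-standard series."""
    points = []
    for i in range(samples + 1):
        t = i / samples
        x = span * t
        # base arch plus a higher-frequency modulation for variance
        y = rise * math.sin(math.pi * t) + waviness * math.sin(3 * math.pi * t)
        points.append((round(x, 3), round(y, 3)))
    return points

# "Non-standard series": the same algorithm, variant objects.
series = [objectile(span=6.0, rise=r, waviness=w)
          for r, w in [(1.0, 0.0), (1.2, 0.1), (0.8, 0.25)]]
```

Variance, not form, defines the series: no two members are identical, yet all are retrievable from the same shared description.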

However, this does not mean a total cut-off between cybernetics and non-standard production. Since human-machine interaction is crucial for customising according to users’ input, I maintain that mass-customisation reconnects architecture with first-order cybernetics whilst resisting the notion of chaos and complexity intrinsic to second-order cybernetics.  

Figure 9 Flatwriter [85] 

Such a correlation can be justified by comparing two examples. First, the visionary project Flatwriter (1967), by the Hungarian architect Yona Friedman, proposed a scenario in which users choose their preferred apartment plan from several patterns of spatial configuration, location, and orientation. [86] Based on their preferences, they would receive optimised feedback from the system (Fig. 9). [87] This optimisation process would consider issues concerning access to the building, environmental comfort, lighting, communication, and so on. [88] Given that it rejects chaos and uncertainty by adjusting users’ selections towards certain patterns of order and layout, this user-computer interaction system is essentially an application of first-order cybernetics, as Yiannoudes argued. [89] Contemporary open-source architectural platforms are based on the same logic. As the founder of WikiHouse argued, since the target group of mass-customisation is the 99 per cent constantly overlooked by the normative production of buildings after the retreat of state intervention, designing “normal” environments for them is the primary concern – transgression and disorder are set aside. [90] As Figure 10 illustrates, in theory WikiHouse, like Flatwriter, would pre-set design rules and offer design proposals according to the calculations of a parametric model. [91] These rules would follow a “LEGO-like system”, which produces designs by arranging and composing standard types or systems. [92] Both Flatwriter’s optimisation and WikiHouse’s “LEGO-like system” pursue design in accordance with patterns, discouraging chaotic results. 
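
The first-order logic shared by Flatwriter and WikiHouse can be reduced to a small feedback loop: users state preferences, the system scores a fixed repertoire of patterns, and it feeds back the best fit – order is guaranteed because nothing outside the repertoire can be selected. The plan names, qualities and weights below are invented for illustration; this is a sketch of the logic, not of either system’s actual algorithm.

```python
def choose_plan(candidates, preferences):
    """Score each candidate layout against the user's preference weights
    and feed back the best-scoring one (first-order feedback: selections
    are corrected towards a pre-set pattern, never towards chaos)."""
    def score(plan):
        return sum(preferences.get(quality, 0) * value
                   for quality, value in plan["qualities"].items())
    return max(candidates, key=score)

# Hypothetical repertoire of pre-designed patterns.
candidates = [
    {"name": "corner flat", "qualities": {"light": 0.9, "access": 0.4, "quiet": 0.6}},
    {"name": "court flat",  "qualities": {"light": 0.5, "access": 0.8, "quiet": 0.9}},
]
best = choose_plan(candidates, {"light": 1.0, "quiet": 2.0})
```

Whatever weights the user supplies, the output is always one of the pre-designed patterns – the user steers within the system’s order but never authors outside it.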

Figure 10 Designing Process for a WikiHouse [93] 

Nevertheless, neither Flatwriter nor WikiHouse has achieved what is supposed to be an automatic process of using parametric models to generate a variety of designs. For Flatwriter, the last mile of automation can be ascribed to the unavailability of computers capable of performing the calculations or processing the images. For WikiHouse, the project has not yet fulfilled its promise of developing algorithms for design rules that resemble how the “LEGO blocks” are organised. Specifically, at the current stage, the plans, components and structures of WikiHouse are designed in SketchUp by hand. [94] The flexibility granted to users is achieved by grouping plywood lumber into components and allowing users to duplicate them (Fig. 11). Admittedly, if users are proficient in SketchUp, they could possibly customise their WikiHouse on demand – but that would then go against the promise of democratising buildings through open-source platforms. [95]  

Figure 11 SketchUp Models of WikiHouse [96] 

Consequently, the last mile of automation again causes a conundrum of architectural authorship. Firstly, in both cases, never mind “the death of the author”, it appears that there is no author to be identified. One can argue that it signals a democratic spirit, anonymising the once Howard Roark-style architects and substituting them with a “creative common”. Nonetheless, it must be cautioned that such substitution takes time, and during this time, architects are obliged to be involved when automation fails. To democratise buildings is not to end architects’ authorship over architecture, but conceivably, for a long time, to be what Ratti and Claudel called “choral architects”, who are at the intersection of top-down and bottom-up, orchestrating the transition from the information age of scale to the post-information age of collaboration and interactivity. [97] Although projects with similar intentions of generating design and customising housing through parametric models – such as Intelligent City and Nabr – may prove to be more mature in their algorithmic process, architects are still required to coordinate across extensive sectors – clients’ inputs, design automation, prefabrication, logistics, and construction. [98] Architectural authorship in this sense is not definitive but relational, carrying multitudes of meanings and involving multiplicities of agents. [99]  

In addition, it would be inaccurate to claim architectural authorship for the user, even though these projects all prioritise users’ opinions in the design process. By hailing first-order cybernetics while rejecting the second order, advocating order while disapproving of disorder, they risk the erasure of architectural authorship – just as those who play with LEGO hold no authorship over the brand, to extend the metaphor of WikiHouse’s “LEGO-like system”. This is especially so because a digital turn in technology does not guarantee a cognitive turn in thinking. [100] Assuming that the capitalist characteristics of production will not change, technological advancements are likely to be appropriated by corporate and state power, whether by means of monopoly or censorship.  

Figure 12 Non-standard Production After Repositioning Users 

This erasure of human agency should be further elucidated in relation to the suppression of chaos in these systems. As Robin Evans explained, there are two ways to address chaos: (1) preventing humans from creating chaos by organising humans; and (2) limiting the effects of chaotic environments by organising the system. [101] While Flatwriter and WikiHouse conform to the former at the expense of human agency, it is necessary to reinvite observers and chaos as an integral part of the system on the way towards mass-customisation and mass-collaboration (Fig. 12). 

Conclusion 

For Walter Benjamin, “the angel of history” moves into the future with its face turned towards the past, where wreckages were piled upon wreckages. [102] For me, addressing the paradox of “the last mile” in the history of architectural digitalisation is this backward gaze that can possibly provide a different angle to look into the future.  

This article mainly discussed three moments in architectural history when technology failed to live up to the expectation of full automation/digitalisation. Such failure is where “the last mile” lies. I employ “the last mile” as a perspective from which to scrutinise architectural authorship in these moments of digital revolution. Before the information age, the Albertian notational system can be regarded as one of the earliest attempts to digitalise architecture. Alberti’s insistence on identical copying between designers’ drawings and buildings resulted in the divide between architects as intellectuals and artisans as labourers. However, this allographic mode of architectural authorship was not widely accepted even into the late 20th century.  

At the turn of the information age and post-information age, Cedric Price’s Fun Palace was another attempt made by architects to respond to the digital revolution in the post-war era. It was influenced by second-order cybernetics theories that focused on the flow of information and the computational process. Buildings were deemed merely a catalyst, and architectural authorship was shared between architects and users. Yet by examining how the Fun Palace failed in the last mile, I put forward the idea that this authorship should also be attributed to the technicians and ghost workers assisting the computation processes behind the scenes. 

Finally, I analysed two case studies of open-source architectural platforms established for mass-customisation. By comparing Flatwriter of the cybernetics era and WikiHouse of the post-information age, I cautioned that both systems degrade architectural authorship into emptiness by excluding users and discouraging acts of chaos. Also, by studying how these systems fail in the last mile, I position architects as “choral architects” who mediate between the information and post-information ages. Subsequently, architectural authorship in the age of mass-customisation and mass-collaboration should be regarded as relational, involving actors from multiple positions. 

References

  1. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (New York: Houghton Mifflin Harcourt Publishing Company, 2019).
  2. Gray and Suri.
  3. Gray and Suri.
  4. Mario Carpo, The Alphabet and the Algorithm (London: The MIT Press, 2011), p. 22.
  5. Carpo, The Alphabet and the Algorithm, p. 22.
  6. Carpo, The Alphabet and the Algorithm, pp. 22–23.
  7. Mario Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, MA: The MIT Press, 2017), pp. 131, 140.
  8. Antoine Picon, ‘From Authorship to Ownership’, Architectural Design, 86.5 (2016), pp. 39–40.
  9. Picon, ‘From Authorship to Ownership’, pp. 39 & 41.
  10. Philip F. Yuan and Xiang Wang, ‘From Theory to Praxis: Digital Tools and the New Architectural Authorship’, Architectural Design, 88.6 (2018), 94–101 (p. 101) <https://doi.org/10.1002/ad.2371>.
  11. ‘“The Last Mile”: An Exciting Play’, New Leader with Which Is Combined the American Appeal, 10.18 (1930), 6; Benjamin B. Ferencz, ‘Defining Aggression – The Last Mile’, Columbia Journal of Transnational Law, 12.3 (1973), 430–63; John Osborne, ‘The Last Mile’, The New Republic (Pre-1988) (Washington, 1980), 8–9.
  12. Donald F. Burnside, ‘Last-Mile Communications Alternatives’, Networking Management, 1 April 1988, 57.
  13. Mikko Punakivi, Hannu Yrjölä, and Jan Holmström, ‘Solving the Last Mile Issue: Reception Box or Delivery Box?’, International Journal of Physical Distribution and Logistics Management, 31.6 (2001), 427–39 <https://doi.org/10.1108/09600030110399423>.
  14. Gray and Suri, p. 12.
  15. Gray and Suri, p. 12.
  16. Matteo Pasquinelli and Vladan Joler, ‘The Nooscope Manifested: AI as Instrument of Knowledge Extractivism’, 2020, pp. 1–23 (p. 19) <https://doi.org/10.1007/s00146-020-01097-6>.
  17. Gray and Suri, pp. 12 & 71.
  18. Carpo, The Second Digital Turn: Design Beyond Intelligence, pp. 9, 18 & 68.
  19. Carpo, The Second Digital Turn: Design Beyond Intelligence, pp. 5, 18 & 68.
  20. James Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (London: Harvard University Press, 1986), p. 295.
  21. Hamid R. Ekbia and Bonnie Nardi, Heteromation, and Other Stories of Computing and Capitalism (Cambridge, Massachusetts: The MIT Press, 2017), p. 25.
  22. Ekbia and Nardi, pp. 25–26.
  23. Michael L. Dertouzos, ‘Individualized Automation’, in The Computer Age: A Twenty-Year View, ed. by Michael L. Dertouzos and Joel Moses, 4th edn (Cambridge, Massachusetts: The MIT Press, 1983), p. 52.
  24. Ekbia and Nardi, p. 26.
  25. Antoine Picon, Digital Culture in Architecture: An Introduction for the Design Professions (Basel: Birkhäuser, 2010), p. 16.
  26. Beniger, p. 433.
  27. Picon, Digital Culture in Architecture: An Introduction for the Design Professions, pp. 24–26.
  28. Nicholas Negroponte, Being Digital (New York: Vintage Books, 1995), pp. 11 & 16.
  29. Negroponte, pp. 163–64.
  30. Carpo, The Alphabet and the Algorithm, p. 12.
  31. Carpo, The Alphabet and the Algorithm, pp. 54–55.
  32. Carpo, The Alphabet and the Algorithm, p. 26.
  33. Leon Battista Alberti, On Painting, trans. by Rocco Sinisgalli (Cambridge: Cambridge University Press, 2011), p. 45.
  34. Alberti, On Painting, p. 23.
  35. Leon Battista Alberti, The Ten Books of Architecture (Toronto: Dover Publications, Inc, 1986), p. 1.
  36. Carpo, The Alphabet and the Algorithm, p. 27.
  37. ‘Architectural Intentions from Vitruvius to the Renaissance’ [online] <https://f12arch531project.files.wordpress.com/2012/10/xproulx-4.jpg>; ‘Alberti’s Diffinitore’ <http://www.thesculptorsfuneral.com/episode-04-alberti-and-de-statua/7zf3hfxtgyps12r9igveuqa788ptgj> [accessed 23 April 2021].
  38. Giorgio Vasari, The Lives of the Artists, trans. by Julia Conaway Bondanella and Peter Bondanella (Oxford: Oxford University Press, 1998), p. 182.
  39. Vasari, p. 181.
  40. Alberti, The Ten Books of Architecture, p. 22.
  41. Alberti, The Ten Books of Architecture, p. 22.
  42. Alberti, The Ten Books of Architecture, p. 22.
  43. Vasari, p. 183.
  44. Mary Hollingsworth, ‘The Architect in Fifteenth-Century Florence’, Art History, 7.4 (1984), 385–410 (p. 396).
  45. Adrian Forty, Words and Buildings: A Vocabulary of Modern Architecture (New York: Thames & Hudson, 2000), p. 138.
  46. Forty, p. 138.
  47. Forty, p. 137; Carpo, The Alphabet and the Algorithm, p. 78.
  48. Picon, Digital Culture in Architecture: An Introduction for the Design Professions, p. 20.
  49. Mario Carpo, ‘Myth of the Digital’, Gta Papers, 2019, 1–16 (p. 3).
  50. N. Katherine Hayles, ‘Cybernetics’, in Critical Terms for Media Studies, ed. by W.J.T. Mitchell and Mark B.N. Hansen (Chicago and London: The University of Chicago Press, 2010), p. 145.
  51. Hayles, p. 149.
  52. Hayles, pp. 149–50.
  53. Socrates Yiannoudes, Architecture and Adaptation: From Cybernetics to Tangible Computing (New York and London: Taylor & Francis, 2016), p. 11; Hayles, p. 150.
  54. Hayles, p. 150.
  55. Stephen Wolfram, A New Kind of Science (Champaign: Wolfram Media, Inc., 2002), pp. 1, 5 & 14.
  56. Arata Isozaki, ‘Erasing Architecture into the System’, in Re: CP, ed. by Cedric Price and Hans-Ulrich Obrist (Basel: Birkhäuser, 2003), pp. 25–47 (p. 35).
  57. Yiannoudes, p. 29.
  58. Yiannoudes, p. 14.
  59. Stanley Mathews, ‘The Fun Palace as Virtual Architecture: Cedric Price and the Practices of Indeterminacy’, Journal of Architectural Education, 59.3 (2006), 39–48 (p. 43); Yiannoudes, p. 26.
  60. Isozaki, p. 34; Yiannoudes, p. 50.
  61. Mathews, p. 47.
  62. Cedric Price and Joan Littlewood, ‘The Fun Palace’, The Drama Review, 12.3 (1968), 127–34 (p. 130).
  63. Price and Littlewood, p. 130.
  64. Forty, p. 148.
  65. Jonathan Hill, Actions of Architecture (London: Routledge, 2003), pp. 68–69.
  66. Isozaki, p. 34.
  67. Isozaki, p. 35.
  68. Reyner Banham, Megastructure: Urban Futures of the Recent Past (London: Thames and Hudson, 1976).
  69. Price and Littlewood, p. 133.
  70. Forty, pp. 142–48.
  71. Yiannoudes, p. 29.
  72. Yiannoudes, p. 31.
  73. Gray and Suri, pp. 33–34.
  74. Gray and Suri, p. 34.
  75. Cedric Price, Fun Palace Project (1961–1985) <https://www.cca.qc.ca/en/archives/380477/cedric-price-fonds/396839/projects/399301/fun-palace-project#fa-obj-309847> [accessed 25 April 2021].
  76. Pasquinelli and Joler, p. 19.
  77. Yiannoudes, p. 18; Carpo, ‘Myth of the Digital’, p. 11; Hayles, p. 145.
  78. Carpo, ‘Myth of the Digital’, pp. 11–13.
  79. Carpo, ‘Myth of the Digital’, p. 13.
  80. Mathews, p. 42.
  81. Yiannoudes, p. 33.
  82. Carpo, The Alphabet and the Algorithm, p. 99.
  83. Carpo, The Alphabet and the Algorithm, p. 99.
  84. Yiannoudes, p. 50.
  85. Yiannoudes, p. 30.
  86. Yiannoudes, p. 30.
  87. Yiannoudes, p. 30.
  88. Yiannoudes, p. 31.
  89. Yiannoudes, p. 31.
  90. Alastair Parvin, ‘Architecture (and the Other 99%): Open-Source Architecture and the Design Commons’, Architectural Design: The Architecture of Transgression, 226, 2013, 90–95 (p. 95).
  91. Open Systems Lab, ‘The DfMA Housing Manual’, 2019 <https://docs.google.com/document/d/1OiLXP7QJ2h4wMbdmypQByAi_fso7zWjLSdg8Lf4KvaY/edit#> [accessed 25 April 2021].
  92. Open Systems Lab.
  93. Open Systems Lab.
  94. Carlo Ratti and Matthew Claudel, ‘Open Source Gets Physical: How Digital Collaboration Technologies Became Tangible’, in Open Source Architecture (London: Thames and Hudson, 2015).
  95. Parvin.
  96. ‘An Introduction to WikiHouse Modelling’, dir. by James Hardiman, online film recording, YouTube, 5 June 2014, <https://www.youtube.com/watch?v=qB4rfM6krLc> [accessed 25 April 2021].
  97. Carlo Ratti and Matthew Claudel, ‘Building Harmonies: Toward a Choral Architect’, in Open Source Architecture (London: Thames and Hudson, 2015).
  98. Oliver David Krieg and Oliver Lang, ‘The Future of Wood: Parametric Building Platforms’, Wood Design & Building, 88 (2021), 41–44 (p. 44).
  99. Ratti and Claudel, ‘Building Harmonies: Toward a Choral Architect’.
  100. Carpo, The Second Digital Turn: Design Beyond Intelligence, p. 162.
  101. Robin Evans, ‘Towards “Anarchitecture”’, in Translations from Drawing to Building and Other Essays (从绘图到建筑物的翻译及其他文章), trans. by Liu Dongyang (Beijing: China Architecture & Building Press, 2018), p. 20.
  102. Walter Benjamin, Illuminations: Essays and Reflections (New York: Schocken Books, 2007), p. 12.

Open Seminar – Round Table Discussion
Algorithmic Form, Discussions & Conversations, Open Seminar, Round Table
alessandro bava, Provides Ng, Marco Vannucci, Philippe Morel, Roberto Bottazzi

thealessandrobava@gmail.com
Add to Issue
Read Article: 3239 Words

(This transcription has been edited) 

Presenters: Alessandro Bava (AB), Philippe Morel (PM), Marco Vannucci (MV), Roberto Bottazzi (RB), Provides Ng (PN).  

Venue: Zoom

Date: 8th December 2021

AB:  

What I’m interested in, in this discussion – and we saw it in all the presentations – is not exclusively work that has been done with a computer per se, or using proficiency in coding, but also how this can influence the practice of designing and making spaces. Going back to architecture: making spaces, constructing the human habitat. 

I think there are a number of strands we could pick up on, so I’m going to leave space for the speakers too, [but] I have a few questions and connections that I want to make. 

I think the video was amazing to end with, Philippe [Philippe Morel], because it also gave us a big platform to understand culturally how all these different things are laid out, because we really dwell in different timeframes – or timelines, one should say, it’s more fashionable today! 

I think there are these amazing overlaps and connections that allow us to expand on this, and I really want to stress our support for our guests. For the people listening, there is not so much work being done in the direction of understanding this cultural impact – I mean, Philippe mentioned quite a few moments and exhibitions that are in fact legendary, precisely because there are so few of them. 

So, there have been a few moments where, of course, the role of technology and computation has been understood in terms of its cultural implications. The other day, I was at another panel, another symposium, where we discussed algorithms and their impact on culture at large, and – in my view, as someone who does not consider themselves the most literate on the subject – I found a lot of illiteracy there, which leads to a lot of paranoia, and which actually doesn’t work. And this is something that I was surprised to find in Manfred Mohr, in the 60s: the idea that we need to push for literacy, because it is actually a tool that extends our ability, I think, especially for the purposes of architecture.  

Federico, speaking about the work of his studio today, really clarified that in a very direct and visible way; how we can use applications of computation within groups today, on the design side and the management side, and how these two things can be harmonised through technology. It’s an amazing development, and one that you know Manfred Mohr would be happy about, let’s say, as far as literacy on the subject goes. So, I’m very happy that today we are collectively contributing to this, adding to this history. 

I keep saying lately that we need new hermeneutic tools; tools for understanding computational design and computational tools, and how they can be integrated into established methodologies. How do we integrate new tools into existing methodologies? For example, in the work I did, I was really interested in seeing Moretti’s exhibition at the Triennale, where he actually proposed a few buildings. Analysing that exhibition alone, we can see how certain parametric tools were used for specific typologies of buildings. Moretti could have applied this to anything, but he chose to apply it to a certain large-scale urban infrastructure, such as a sports arena, or a cinema – things that we understand as “large objects”. Large single objects that can respond to one main parameter. And actually, towards the end of your presentation, Marco, you said we “could not compute” – we need to understand the scale of algorithms and how far they can go, where they can be applied to architecture in a meaningful way and where perhaps not at all!  

MV: 

I think, yes, in retrospect, Moretti focused on typologies that, if we fast-forward 50 years, are typically parametric now, they are more or less mono-functional. Nowadays, a stadium is no longer mono-functional, but it is actually designed [so that spectators are all] looking at the pitch, and therefore we developed it into the most parametric typology. I’m not sure how aware he was of that actually, also because I think at the time the stadium itself was a rather new typology, in a way. Sport, and the “massification” of sport, and so on.  

The other thing I want to say, regarding the discussion – and I’ll just throw it in there perhaps – is that we take it for granted that for many, many years, computational design, especially from the early 90s, never really confronted the past. As if it was developed in a vacuum, let’s say, as if it just came out of nowhere. Of course, this is understandable, because architects were all very excited; they wanted to experiment and bring this new technology to fruition, to start building. The economy at the time was better than today, so there were things that were converging, let’s say. But what I find particularly important is that at some point it is actually, really necessary to go back and see that there is a legacy there. There is a tradition – a very normal, traditional architecture, as we know it – and it’s not just a bunch of punks playing with computers, in terms of the cultural relevance of the discussion. 

And then, of course, we can say that we have always been parametric, or that architecture itself is a discipline that is about the idea of establishing algorithmic procedure to get something built. 

AB: 

I think the knowledge that we should perhaps understand, and I think Manfred Mohr’s work really helps us with that, is that it’s perhaps just the idea of encoding certain processes that have always been part of architecture. Coding them, and then potentially automating them or doing something else with them, is what machines allow us to do, but that doesn’t necessarily change how we think about it; it’s not the end.  

I want to stress the fact of what you say about the importance of history, or how we are trying to reconnect – or rebuild bridges, if you like. For some parts of the discourse on digital computation, it’s as if history started in the Bell laboratories, or something like that – it started in the US with the beginning of mass computers and so on. But I think Roberto, of course, has done a lot of work on building bridges, and on making us understand that the bridges go a lot further back in time, in fact.  

RB: 

I keep thinking about what Philippe said a second ago, and why computational logic keeps going metaphysical, and I think it’s a side note, but I can’t stop myself, I have to say it! 

There are two ways to look at it. One is that you’re totally right, Philippe: [Ramon] Llull is the point of reference in this conversation, and again, if we’re talking about bridges that were burned in history, there’s definitely only a vague understanding of the importance of Llullism. How could it be that a person who invents concentric wheels, who wants to basically convince Muslims that their religion is inferior, has a lasting effect throughout Europe for over 300 years? I mean, it’s not even explainable as a joke! I would say this is perhaps interesting – because it is a computation project, there is no doubt about that – it’s very interesting because computation sits at a moment in history where other notations emerge for non-visual, or non-mimetic, ways of articulating reality and knowledge. That was interesting in Philippe’s presentation – but his was just the last presentation we saw, and I tend to have a short-term memory! 

It was also interesting, for instance, for Manfred Mohr, this constant tension between the visual and the conceptual – and I think that is one of the interesting premises of computation, historically, over a very long period of time. A system to articulate something that lies between the intelligible and the sensible. Something that cannot quite be sensed, and yet needs to be very clear to the mind. This tension, the fact that computational logic always tends to be in that realm, is probably something that has to do with that. 

Obviously, you could also look at it a different way. You could say, well, computational logic is a simple mathematical process that could be grasped a lot earlier in history than other, more advanced, mathematical models; or you could relate it to the fact that, for some reason, the Christian tradition forgot the first commandment, because we should not really be able to draw God. But we decided to ignore it, for reasons that are not entirely clear to me, and the kabbalistic tradition did not ignore it: the kabbalistic tradition is a notational system for symbolic articulation of the world without generating images. So, all I want to say is that the short comment Philippe made in passing could be quite powerful. 

AB:  

I love that this took a theological dimension! I think it’s really crucial; this constant question on this idea of the visual and the conceptual, even in the work of Manfred Mohr – when you talked about this period when his work was purely code and, in fact, in the exhibition, there was a printing machine just printing whatever was coming out of the program. Then later on, in the 80s, with the development of the visual interfaces, his work became different – and in fact you connected it to the work of Peter Eisenman.  

So, it’s really a key question for me that today, of course, software is popularised, there is even visual computation, visual algorithms… this is possible through software such as Grasshopper. There are aids to an understanding of a visual means through code, let’s say, and I’m interested in this, because for a long time we have been discussing computation and architecture purely in terms of data – how do we get data, how do we structure data? But today, we’re in a different environment, where software is more developed and more accessible, and people don’t necessarily think about “what’s in the black box”; but nevertheless, what comes out for me, when I look at it again, I can only understand as computational. Even more so when it’s informed by the language and culture of the digital – by the culture of digital tools.  

I’m really curious to hear your position on this, whether you see where we are going in a sense? Is visual computation comparable to a purely algebraic or coded computation? Can we compare the two, can the two coexist? Philippe, I would love to hear your answer, but this question is extended to everyone. I think it touches everyone, pretty much.  

PM: 

I mean, first, just a very quick note on this metaphysical issue associated with combinatorics. My feeling is that at that moment in time – you know, in the 13th century, or 12th century – it was a bit extraordinary to be able to demonstrate that only a few numbers or parameters could lead to so many possibilities. So, I think for people who know nothing about mathematics there are some magic tricks, some magic associated with combinatorics – at least at that time in history. Of course, today we look at that as something which is pretty simple; we are not surprised anymore by anything to do with combinatorics, and we are probably more impressed by some other domains of mathematics that are more conceptual. But, I would say, even in the 20th century it was impressive; there was some magic to it. My feeling is that if it’s a bit associated with metaphysics, it’s also because there is some intrinsic magic in this combinatorial explosion at some point. It’s a very sketchy hypothesis! 

Regarding the question by Alessandro: no, I believe that visual programming is not like more standard programming where we use code and symbols. It creates the same effect, but I would say the intellectual operations are probably not exactly the same – also the feeling we have is not exactly the same, because in one case it’s a much more visual operation. When you do visual programming it’s a bit like putting some order in a PowerPoint presentation, you shift some slides until it’s made; but when you do programming by writing code, I think it’s a slightly more analytical approach, more textual, more text-based. 

AB:  

I agree. Then my question is to the end of making architecture – as of course I understand what you are saying, the two things are very different – but to the end of making architecture: what is useful for architecture? Because if I look, for example, at someone like Federico, they use computational tools, but the input is very much like a curve that is drawn, and they use this data to then do different kinds of processes. That one curve can start influencing other curves that are drawn, and things like that, but there is an input that is drawn. Whereas in a lot of computation, for the description of the visual design, there is always this question – even in the academic work at the Bartlett – of where the data comes from, and it’s almost like a theological question; it has to come from some God-given numerical formula. So I’m interested in this question, which, I think, is quite a central question, methodologically. 

PM:  

I would say, probably, we are entering an era in which the data is becoming more important than the algorithms. I don’t know if it’s true scientifically speaking, by the way, but at least the mindset is maybe in favour of a deeper influence of the data over the influence of the algorithms – but again, it’s definitely not a scientific statement. Probably because it’s much easier to associate the data with everything that is happening in society at large. 

For example, we know the data of Facebook, because we see it every day. Although we don’t see all of the data, we see how it works; but we don’t know the algorithms they are using. So, even if I believe that algorithmic science is more developed and more advanced than ever – the complexity of algorithmic science today is absolutely crazy – most people don’t have a grasp on it. So this is why, maybe, we can say that on an everyday basis the data seems more important in today’s society. 

AB:  

I agree with that. Perhaps it’s also because certain algorithmic blocks are more available. I can bring the example of my students last year: they would take existing machine-learning procedures, then completely change the data set to an architectural data set, for example on architectural typologies, and then they would tweak the machine learning “black box” to adjust the output to what they needed it to do. So, in a way, this is a different approach. I mean, scientifically it is not a purist approach to computation, but ultimately, at least what I’m interested in is, how can we use it, even if it’s about using blocks and bits, how can we then tweak them to be useful for us as designers? That is my point to you. 

Are there any more comments, or questions from the audience? We had a pretty amazing rate of people not dropping out.  

PN: 

Actually, when you were asking the question about visual computation versus algebraic code computation, I wasn’t exactly sure why it was asked as a question. Maybe it’s because it’s 1 am, but when you were asking, it actually reminded me of John Nash, the guy who got the Nobel Prize for game theory. When he was 25, before he developed mental illness, he was actually famous for the “embedding theorem”, looking at high-dimensional objects and whether you can actually embed them in any Euclidean space. We usually visualise this sort of embedding like a donut, with a lot of waves flowing through the donut, but actually, when they interviewed John Nash, everything in his brain was numbers; he was never really a visual person. He completely hated the movie A Beautiful Mind [a biopic of John Nash] because he didn’t see things [in the way it portrayed], like his schizophrenia was a miracle – I mean, that’s crazy to a very banal brain like mine. 

I don’t really see the visual and the algebraic as either/or – and also, if you look at Chinese mathematics, as Philippe also showed, the entire book of changes, the I Ching, with the hexagrams, was not visual. They literally document everything with Chinese characters – and it’s crazy when you have to read through that, because China is an agricultural nation, so we measure everything pragmatically. The mathematics is metaphysical, but in the I Ching we are measuring the depth of the soil, how much rain we need, and they would write down “12345” in those complex characters, and people would still manage to do the geometrical calculation in their mind, which is crazy. 

When talking about Facebook data, there is always this privacy/ethical question that I agree is becoming theological and inescapable – but maybe it’s just because of the mindset that we feel like we’re always dependent on a centralised platform. We’re actually making a sort of trade, where we surrender the data because they’re doing a social service for us. A computational service that would be hard to do as an individual. So maybe the mindset is, as opposed to passively surrendering data, is there a way to actively contribute data so that we get over the data privacy problem? 

AB: 

I was thinking about how, for example, architecture data is scarce. When we did this research on technologies, it was really hard to find this data. Where do you go? You need to go into the old registries of each city to find the undigitised maps, and try to redraw them and things like that, so we also live in that reality.  

Also when you mentioned the abstraction versus visual idea, I was reminded of my dad, who in his career was a computer programmer, and how he always says that he sees the numbers and not the visual things, so for me this is slightly triggering on some levels!  

Anyway, any more comments or questions? 

PN: 

It’s actually like CAPTCHA, right? What they really do is that they don’t hire an intern to label a dataset, but instead create an economy by distributing the labelling tasks to users, match-making two problems – problems in training machine vision and in validating humans – [to create a solution].  

AB: 

Yeah, we’re waiting for a start-up to deal with the architectural algorithm!  

PN: 

(Laughs) [Get people to label] doors and windows for BIM? 

AB: 

Exactly. That perhaps is a good implementation.  

All right, I’m thinking that I will close this amazing session here today, just because, again, we were meant to finish at five! 

I’m really grateful to all of you for your contributions; today was an amazing and stellar way to present the journal that will come out next year. So thank you so much for this discussion. It’s really precious; for me, a lot of ideas here were really fruitful in amplifying the conversation on computational design. As we have seen, augmenting the literacy, the discourse and its different threads, and even the historical grounding of this discourse, is fundamental.  

So, thank you so much. 

Figure 1 – Sea of Digital Models, FONDAMENTA
Fondamenta
architectural language, BIM, Building Information Modelling, construction, Fondamenta, Generalist Architect
Office Fondamenta

mail@fondamenta.archi
Add to Issue
Read Article: 2380 Words

The following piece is transcribed from Fondamenta’s talk at the B-pro Open Seminar, which took place at the Bartlett School of Architecture on 8th December 2021.

Figure 1 – Sea of Digital Models, FONDAMENTA

We are interested in the construction of spaces, with a strong belief in research and experimentation, where building is the end to which architecture must strive to become itself, and technology is the tool used to reach this result. We question conventions and support contradictions; fascination with structure and freedom from dogma are the premises of this research. Structure is the trace of space: it organises the program and generates the building. Governance through technology is the key to the creation of an architectural organism, and we see our projects as opportunities to conduct research on structural systems and the use of materials. We push materials to and against their limits, designing through a systematic approach relative to structures, without forgetting that the ultimate user of this organism is the human being. We are glad to have seen four very interesting presentations today. We connect a lot with the work of Luigi Moretti, whom we deeply admire as an architect: he was one of the first pioneers in understanding spaces as organisms, creating them with a scientific logic and developing four precise categories with which to design them.

What is technology for us? It is an instrument that we face daily, we use technology to follow our purpose, and to reaffirm the central role of the Architect in the building process. Technology drives efficiency, precision and control through the entire process, allowing governance of the economy of the project. The central issue of the use of technology is always about WHO is responsible for the governance of it. We believe the answer is that the Architect should be able to take this role.

Figure 2 – Scheme showing the impact of the technological Governance of the Project, Fondamenta

Today, we don’t want to talk about specific software and the use we make of it, but rather to point out the great opportunity that a specific use of technology could give Architects today. We were trained in a university founded on Vitruvian philosophy, in which Architects must take a holistic approach to Architecture, being as generalist as possible within the discipline. Over time, we have witnessed a dismantling of the so-called “Generalist Architect” in favour of over-specialisation in specific aspects of our discipline. The Architect has been relegated to one consultant among many who contribute to creating an architectural project. Instead, we believe the Architect must be the central figure, capable of managing the complexities of today’s world through the governance of many actors and aspects. This, in our opinion, is only possible with the aid of technology. Our last resort is to believe that a generalist Architect may still exist…

To achieve this, we use existing BIM (Building Information Modelling) technology, superimposed with our own customised system. For three years we have been testing a vocabulary of codes and protocols that are applied to BIM and become the common “language” inside the digital model that expresses the Architectural Project – a language all involved actors have to learn and share. We, as Architects, are responsible for the governance of this centralised model and system, being the ones who write the laws of this digitally organised government. We didn’t start our practice with this idea; it arose as a consequence of the first project we built and the impossibility we faced in holding a central role in the process. The consequence was losing power and responsibility over the process, with a negative impact on the projects. We are still working daily to improve the system; it is an ongoing process. The shift between the approach we had at the beginning and the approach we have now is expressed in this diagram [indicates screen].

The centralised system we are looking for allows different actors to interact inside a given structure, with a given language crafted by us.

Figure 3 – FONDAMENTA BIM Alphabet, Fondamenta

To get into more detail, the above charts depict specific aspects of our customised Mother model. The strength of BIM is that it enables all the consultants involved in the process to implement and add their knowledge and information inside a common, single-instance digital Model. Codes and rules were developed to share and communicate between the different disciplines, which belong to different worlds. The most important layer to be translated is that of economy: each aspect of the project relates to an economic parameter that controls the cost of the project. Starting from existing software, we added our customised logic and vocabulary.
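The idea of an economic layer attached to a shared code vocabulary can be sketched in a few lines. This is a minimal illustration only, not Fondamenta’s actual system: all codes, names and figures below are hypothetical, and the real workflow runs inside BIM software rather than standalone scripts.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """One coded element of the model; the code is the shared 'vocabulary'."""
    code: str         # e.g. "ST-W-01" (hypothetical code format)
    discipline: str   # structure, services, envelope, ...
    quantity: float   # quantity taken off the digital model
    unit: str         # m3, m, m2, ...
    unit_cost: float  # the economic parameter attached to the code

    @property
    def cost(self) -> float:
        return self.quantity * self.unit_cost

def project_cost(elements: list[Element]) -> float:
    """Aggregate the economic layer across all coded elements."""
    return sum(e.cost for e in elements)

# Two sample entries of a coded model
model = [
    Element("ST-W-01", "structure", 120.0, "m3", 310.0),  # concrete walls
    Element("SE-P-02", "services", 85.0, "m", 42.5),      # pipework
]
print(project_cost(model))
```

Because every element carries its code and cost parameter, any change to the model immediately updates the cost reports shared with the other actors.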

What we are seeing throughout our practice is that we can have control of the project from the very start. For the most part, BIM is generally introduced only after an execution plan is in place. Instead, we deal with these premises from day zero – from the concept phase – and this is what makes an enormous difference. Following this scheme, all actors begin to communicate at the very start, at the right time, without finding themselves in a position of compromise, but rather putting on the table all the topics that, worked out at the right time, can bring the project to more radical expressions. Hence, there are incredible possibilities to push projects to their limits, and to build without the project being jeopardised by an uncontrolled process.

We will show three different projects of ours. The first one, our first built project, is a winery in Piemonte (2018-2020).

Figure 5 – Winery Cantina dei 5 Sogni, Extract from Casabella 921 @Marco Cappelletti 
Figure 7 – Winery Cantina dei 5 Sogni, Executive drawings for Steel formwork and concrete shells geometry, FONDAMENTA and Matteo Clerici 

In this project, our awareness of technology and its potential was limited and not yet evident. That is why we ran this project without using BIM to solve design and governance issues. The winery project develops research into the pursuit of a seemingly impossible balance between different structural systems, which must coexist as one organism of concrete and steel. We designed and optimised the shell system together with our engineer, making it work as a structural truss to hold the concrete pitched roof while containing part of the programme. The double steel formwork of the shells, poured in one single day without pause, was directly designed, drawn and sent to the manufacturer.

After this experience, we realised that we needed more technological support to be able to control the construction process and push forward more projects – particularly in dealing with aspects such as cost and time, but also the sustainability of the process. This change of guard started with the series of projects we are building in Sicily, first among them the 18018EH houses near Noto. From this moment, we started governing the process with the aid of BIM – our instrument – from the beginning of conception.

Figure 8 – 18018EHSR Private House, External Rendering, DIMA 

This house is mostly underground, with only 30% of its surface exposed above ground. We are trying to develop a three-dimensional project in which the space develops along three axes, and all the load-bearing walls are made of local stone. The structural floor plan is created through a system of radii and circumferences. Through the use of software, we were able to optimise the construction lines, turning them from splines into arcs, working with the technical consultants to develop the BIM model. This is a snapshot showing the massive amount of information inside this model.

This is interesting because implementing information in a model is not enough to control it; there need to be instrumental rules in order to make an architecture real. This project will soon be delivered to a construction company. Costs, money and time are essential points in our profession: in order to have the possibility of realising our research, design cannot be divorced from them. We are connected to, and interested in, the economy of the project, which sustains architectural processes through awareness in governance and allows us to control our design according to cost.

Figure 10 – 18018EHSR Private House, Axonometry showing construction aspect and codes, FONDAMENTA 

It was incredible how we managed to control the project and design through our tools. For example, we like to show all these axonometric drawings – each code, of course, remains connected to a clear Excel chart that records the cost, quantities and all the details of a specific part of the model. Figuring out a way of communicating the mass of information we were implementing in the digital model was another interesting aspect. This is something we are still developing, to make it even more readable for the involved actors. Of course, there are just a couple of Excel spreadsheets connected to these axonometries!

Figure 11 – 18018EHSR Private House, Axonometry showing stone walls geometry and codes, FONDAMENTA 

In terms of design, we see the potential of technology as something that allows us to further push our research into space and structure. For example, here, all the other walls will be made of stone – blocks that are one metre long, 50 centimetres high and 30 centimetres deep. In Grasshopper, we customised each one, producing a sort of “abacus” of all the walls, with specifications and a numbering system, which was then delivered to a construction company.
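The logic of such an “abacus” – a numbered schedule of every block in every wall – can be sketched as follows. This is a plain-Python illustration, not the actual Grasshopper definition; wall IDs, the code format, and the simple rectangular coursing are all hypothetical assumptions.

```python
# Fixed block dimensions from the talk: 1.0 m long, 0.5 m high, 0.3 m deep
BLOCK_L, BLOCK_H, BLOCK_D = 1.0, 0.5, 0.3  # metres

def wall_abacus(wall_id: str, length_m: float, height_m: float):
    """Enumerate every block in a straight rectangular wall,
    giving each a unique code, its course, and its offset."""
    blocks = []
    courses = int(height_m / BLOCK_H)     # number of horizontal courses
    per_course = int(length_m / BLOCK_L)  # blocks per course
    n = 0
    for course in range(courses):
        for i in range(per_course):
            n += 1
            blocks.append({
                "code": f"{wall_id}-B{n:03d}",  # hypothetical numbering scheme
                "course": course + 1,
                "offset_m": i * BLOCK_L,
                "dims_m": (BLOCK_L, BLOCK_H, BLOCK_D),
            })
    return blocks

# A 4 m long, 2 m high wall: 4 blocks per course, 4 courses
schedule = wall_abacus("W01", length_m=4.0, height_m=2.0)
print(len(schedule), schedule[0]["code"])
```

In practice the real definition would also handle staggered joints, openings and non-straight walls, but the principle – turning geometry into a coded, countable schedule for the contractor – is the same.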

This technology enables us to build within a certain amount of time. Reflecting on past projects, time is the hardest variable to negotiate today, and technology gives us the ability to control time more than any other aspect. We love to go back to the models, because we think that this “ping-pong” between the digital tool and the making process gives us an awareness of reality. We don’t have to lose control of what we are thinking and designing.

Figure 12 – 20027F Private House Renovation, Axonometry showing the project strategy, FONDAMENTA

The last aspect that we are trying to show through this house – a project that has been under construction for four months – is that we reached a certain level of governance of the actors in the process from the beginning. This is a renovation: we stripped out the existing partition walls but kept working with the existing concrete cage. We kept the load-bearing concrete structure and inserted a new steel structure, changing its form while keeping the volume untouched.

Wanting it to be a precise case study, we sat with our consultants and engineers from the very beginning. All the possible actors were involved from the embryonic phase and we designed together, trying to understand immediately all the potential realistic approaches that could be achieved.

Figure 13 – 20027F Private House Renovation, Axonometry of the BIM Model, FONDAMENTA
Figure 14 – 20027F Private House Renovation, Rendering, DIMA

I’ll just show a couple of snapshots of the model that we delivered to the construction company, pointing out that it is the same model we had from the beginning. From structures to installations, every element was designed with the involved actors long before the building process started on site.

It’s really important for us to underline that Architects have to be able to see consultants and potential constraints as possibilities to further the design. This was not particularly easy for us to understand initially, because we were trained to see consultants and all the other actors as running in parallel to the project rather than as part of the architecture. Just like the scheme we showed, they are parallel lines that, at a certain point, intertwine. In that moment you have a connection, and this connection has to be constant. In the system we are developing, each actor involved in the process has to be aware of the language we share in order to achieve the project.

This is just a snapshot of the house at the moment; we’ve stripped out the partition walls and it’s just the concrete.

To conclude, BIM has a deep social impact, giving back to architecture and architects the power they should have in the process. It is then up to us to create forms of social resistance and new approaches to contemporary society.

Structures, Voids, and Nodes: Leonardo and Laura Mosso’s “Architettura Programmata”
Architettura Programmata, Laura Mosso, Leonardo Mosso, Nodes, structures, Voids
Roberto Bottazzi

roberto.bottazzi@ucl.ac.uk
Add to Issue
Read Article: 6944 Words

Introduction 

The work of Leonardo and Laura Mosso provides a very early and original application of computation to architectural, urban, and territorial design. Although computers were actually utilised to develop their ideas (a rare event in the 1960s in Italy), the work possessed conceptual and political ambitions that exceeded both the simple (or even fetishistic) fascination for a new technology and the functional approach that conceives computers as tools to efficiently complete tasks. Rather, the computer was part of a proto-ecological approach in which artificial and natural elements worked together towards the emancipation of the individual and their environment. At the centre of their research was “Architettura Programmata”, defined as a “theory of structural design” dedicated to the design of elements, their connections, as well as a higher, meta-system which we could call “structure” in the sense that Structuralism defined this word. Computers were involved in this project under both a design and an ethical agenda to understand and define “ecocybernetic dynamic as a structure for a self-evolved language of the environment and of the form at various levels of complexity, inserted in an unforeseen chain of self-evolved cybernetics: from political cybernetic to cybernetic of information, as integrated instruments of evolution in a condition of direct articulated democracy”.[1] 

This paper will discuss how computational thinking and computers were employed in the work and research of Leonardo and Laura Mosso, by analysing three paradigmatic projects which tackled the notion of structural design at different scales and contexts. The first project will be Cittá Programmata (1967-70), a theoretical proposal for a new type of city. The project represents the first actual use of computers in the work of Leonardo and Laura Mosso. The second example will concentrate on a piece of research on Piedmont territory – the place in which they operated throughout their academic and professional careers. Although computers were not directly employed to carry this research out, the approach to territorial analysis and planning employs a form of algorithmic thinking which impacts on both how the territory is read and how it could be re-imagined. Finally, the proposal for the restoration of block S.Ottavio in the historical centre of Turin shows a very innovative use of computers to intervene on historical artefacts of relevant cultural value, as well as the possibility to use computers to manage the future life of a building. 

Structuralism played an important part in the work of Laura and Leonardo Mosso, and it is an essential element in understanding their conceptualisation of structures and the role that design and computation had in it. A slightly left-field but very fruitful interpretation of Structuralism was produced by Gilles Deleuze in 1967, at the time Leonardo and Laura were intensifying their interest in computers.[2] Deleuze emphasised the role of emptiness, more precisely, of the “zero” sign as a mechanism for the transformation and articulation of structures. The notion of empty structure and zero offer a dynamic interpretation of Structuralism that is not only relevant to computational thinking, but can also clarify how the structures designed by Mosso can be understood as dynamic and adaptive.  

Early Experiments with Computers in 1960s Italy 

Before delving into the actual discussion, it will be useful to quickly sketch out some of the cultural trends operating in Italy in the 1960s to better contextualise how Leonardo and Laura Mosso arrived at their “Architettura Programmata”. 

“Architettura Programmata” directly refers to the exhibition “Arte programmata. Arte cinetica. Opere moltiplicate. Opera aperta” organised by Olivetti in 1962. The show was curated by Bruno Munari and Giorgio Soavi, with an accompanying catalogue edited by Umberto Eco. It displayed works by a series of artists, including Enzo Mari, who generated art procedurally, opening up a different mode of production and reception of works of art, also inspired by Eco’s Open Work.[3] In the same period, Nanni Balestrini was also experimenting with computers to generate poems.[4] These two examples are perhaps useful in helping to focus on some lesser-known aspects of Italian post-war culture, which is often mentioned for the work in cinema, architecture, art, but rarely for computation or scientific work in general. Along these lines, it is also worth mentioning the cybernetic group operating in Naples under the guidance of Prof. Eduardo Renato Caianiello, who maintained regular contact with MIT and Norbert Wiener. It is in this more international and open environment that we should position the research of Leonardo and Laura Mosso. 

Leonardo studied architecture in Turin, a very active city that led the Italian post-war economic boom, thanks to the presence of Fiat, the car manufacturer and one of the largest Italian factories. After graduating, Leonardo won a scholarship to study in Finland where, eventually, he started working in Alvar Aalto’s studio around 1958. From then on, he became the point of reference for most of the works that Aalto designed for Italy – such as the design of a residence for the Agnelli family (the owners of Fiat) and the Ferrero factory. A more international profile also characterised the figure of Giuseppe Ciribini, with whom both Laura and Leonardo also collaborated. Ciribini concentrated on the modernisation of the construction industry, focusing on prefabrication and modular design. His work was not limited to Italy and expanded to a European scale through his involvement with the European Coal and Steel Community (ECSC, or CECA in Italian, the precursor of the European Union) to devise international standards for prefabrication. Leonardo and Laura Mosso also established connections with Konrad Wachsmann, incidentally Giuseppe Ciribini’s predecessor at the Ulm School of Design, invited by Tomas Maldonado in 1958. Finally, Leonardo and Laura Mosso were also involved in the early experiments with computer art (which had developed in Croatia since the early 1960s) through the magazine New Tendencies.[5]

In all these experiences, computation played an increasingly central role. In the case of Balestrini, or of the scientific research developed in Naples, computers were actually utilised; in other cases, the work only consisted of speculation over what tasks and possibilities could be performed and unleashed. Leonardo and Laura Mosso are among the small group of architects and artists who did make use of computers in their work. With the help of Piero Sergio Rossatto and Arcangelo Compostella, two projects utilised computers to simulate and manage their transformations. Throughout almost two decades of using computers in their work, Leonardo and Laura Mosso developed an approach that was never guided by technocratic notions of efficiency. Rather, the philosophical implications of computing architecture, and the political role that information and computation could bring to a project and to society in general, constituted their main interest in this new technology. The computer as used in the Mossos’ work was in fact at the service of a larger cultural project that aimed at distributing, rather than concentrating, power. Computers were an instrument for change, whereas the values of efficiency and sheer industrialisation appeared to be ways to fundamentally preserve the status quo by simply making it run more smoothly. Rather than improving how architecture could better fulfil its role under the tenets of a capitalist, industrialised economy, Leonardo and Laura wanted to change the rules of the game itself; the computer, therefore, had to play an almost moral role in radically overturning the mechanisms regulating architecture and its use.

Central to their research was the close relationship between philosophical ideas (Structuralism), design language (which particularly concentrated on discrete elements connected through reconfigurable, dynamic nodes), and computation. Leonardo and Laura Mosso’s approach to Structuralism was already open to dynamic, cybernetic influences and, for this reason, it may be interesting to read it against the famous writing that Gilles Deleuze dedicated to the same philosophical movement. 

The Dynamics of Structural Form 

Culturally, the post-war years were characterised by the diffusion, particularly in Italy and France, of Structuralism; generally understood as a philosophy of structures rather than functions. Structures could be organised in more general systems – of which natural language represented the most complex, paradigmatic example. Linguistics was indeed the domain of Structuralism, and the source from which most of its fundamental ideas were derived. From Saussure’s Course – indicated as the first structuralist text – to Barthes, Eco, Levi-Strauss, the Bourbaki group, Althusser, and also Foucault and Lacan, structuralist thinking extended beyond the linguistic domain to provide a framework to re-conceptualise other disciplines such as anthropology, psychoanalysis, mathematics, history, or politics. 

Broadly speaking, the definition of a structure consisted of two steps: the determination of its constituent parts (taxonomy) and the definition of the mechanisms that would govern the relations between parts and their transformation (grammar). Critics of Structuralism often rebuked this particular approach to structures for its excessive formalisation and the strictness of its deductive logic. Such criticism tended to depict structuralism as a mechanical, overly linear theory of systems, resulting from the perhaps excessive importance attributed to linguistics. Perhaps such characterisation of structuralism paid too little attention to the more transformative aspects of the theory: the dynamics of change and transformation. These are present in all the major structuralist thinkers; however, Gilles Deleuze provided an original overview that concentrated on the open, topological, and playful aspects of structures which is useful to briefly summarise here. In Deleuze’s “How Do We Recognize Structuralism?”,[6] originally written in 1967, Structuralism was detectable through six different criteria: symbol, local/positional, differential/singular, differentiation/differentiator, serial, and the empty square. Throughout the analysis, the emphasis is on transformation rather than permanence, on the mechanisms that guarantee a structure can operate by straddling between the real and the imaginary in order to transform reality and be transformed by it. 

We will return to Deleuze, particularly his understanding of the notion of “zero”, which offers an interesting frame in which to conceptualise the role that structures played in the work of Leonardo and Laura Mosso – and, particularly, how physical construction nodes were instrumentalised to attain a structural language able to change and be appropriated (or “spoken”) by its users. Before dwelling further on this aspect of their work, it is important to point out that the work of Jean Piaget – an author often quoted in Leonardo’s and Laura’s writings – also offered a dynamic reading of structures and of Structuralism in general. Laura and Leonardo often made use of Piaget’s characterisation of structures as composed of three main characteristics: wholeness, transformation, and self-regulation.[7] In Piaget’s work, we also find an open, interactive, “proto-cybernetic”[8] reading of Structuralism, marked by a relational understanding of the connections between environment, cognition, and symbols. In particular, the notion of assimilation outlined by Piaget in The Construction of Reality in the Child[9] described a cognitive model based on continuous feedback between reality and the child’s development – an image that brought Structuralism much closer to cybernetics. An eco-cybernetic approach to planning was also often advocated by Laura and Leonardo. These initial definitions are helpful, not only in framing the work of the Mossos in relation to the cultural milieu in which they operated, but also in understanding how computation was conceptualised in their projects to translate notions of structure, node, and transformation.

As mentioned, Deleuze’s survey offers a particular vantage point from which to understand how Structuralism dealt with change and transformation, and how this can help to frame the role that structures and nodes have in the research of Laura and Leonardo Mosso. Deleuze dedicates particular attention to the notion of the “zero” sign in Structuralism; the “zero” sign is understood as an empty place in the structure, determined positionally rather than semantically, that allows transformations to occur. The empty place in a structure guarantees the possibility of its transformation, in a way analogous to the role of empty squares on a chess board. The structure is understood as a symbolic object. Symbols are here understood according to the definition provided by C.S. Peirce’s semiotics; that is, structures have an arbitrary character that does not attempt to find the essence of the object of investigation, but rather to construct it. In Deleuze’s words: “[the structure does not have] anything to do with an essence: it is more a combinatory formula [une combinatoire] supporting formal elements which by themselves have neither form, nor signification, nor representation, nor content, nor given empirical reality, nor hypothetical functional model, nor intelligibility behind appearances”.[10] The structure is always a third, encompassing element, beyond the real and the imaginary, that allows the structure “to circulate”. In other words, the elements of a structure can only be determined relationally, as “[they] have neither extrinsic designation, nor intrinsic signification”.[11] As the order of the structure is more important than its meaning, not only is space (or spatium, as Deleuze refers to it) a central medium for the articulation of relations and transformations, but it is best described topologically, in the sense that the function of such a spatium is to logically order elements so that specific, empirical objects can occupy the different squares of the structure.
The final element to note in Deleuze’s analysis is the “wholly paradoxical object or element”,[12] that is, the connective element that allows different structures or series to communicate with and orient each other in order to perform on different levels, beyond the purely symbolic one. Such an element is defined by Deleuze as the “object = x”; the “zero” sign par excellence; the “eminently symbolic” object that injects dynamic qualities into structures and therefore allows them to work.  

Leonardo and Laura Mosso dedicated large parts of their architectural research to the roles that connecting elements, or nodes, have in articulating structures. Such research produced four different types of nodes, which informed their work and can be seen at work in the three projects discussed in the second part of this paper. Deleuze’s considerations on structures help us frame the Mossos’ research as well. The node in a structure is the element that allows transformations to occur: pieces can be detached, substituted, or removed according to the possibilities and constraints set by the node connecting them. There is therefore an analogy between the physical nodes of a structure and the mechanisms of transformation at work in the philosophical concept of structure. Borrowing from Deleuze’s description, the physical node becomes the “object = x”, the “zero” sign par excellence; that is, not simply the element that makes change possible, but also the element that is syntactically operative and open in order for meaning to emerge. The analogy between the two manifestations of structure – physical and philosophical – is poignant for grasping the Mossos’ work: nodes are often literally “zero” signs, voids; the particular type of node developed for Cittá Programmata is literally organised around a void, an empty space. By straddling between its physical appearance and its philosophical interpretation, the node acts structurally; that is, beyond its purely empirical presence, the node is a device that orders physical elements logically. In both accounts of structures, the minimal unit is the phoneme – “the smallest linguistic unit capable of differentiating two words of diverse meaning”[13] – which Leonardo and Laura put at the centre of their approach to structures by speaking of “phonetic” and “programmed structures”.
This approach was already visible in the first example of “programmed architecture”, the Chapel for the Mass of the Artist in Turin (1961-63), in which a static node connected 5cm x 5cm wooden studs to produce a highly varied pattern for the interior of the Chapel. In successive projects, nodes quickly grew in complexity in order to achieve more articulate and varied configurations, as well as to allow end users and communities to adapt them for future uses. Such an architectural agenda demanded a new type of node, which began to be articulated as a void – an “empty square”, so to speak – around which the various elements aggregate (fig.). The morphology of this new type of node consisted of a virtual cube – a void – whose eight vertexes could be reconfigured around smaller voids, each able to link together four members. None of the members physically intersected (making the implementation of changes easier), and they were organised around a series of voids of different sizes. These physical and conceptual voids held some analogies with the “object = x” Deleuze spoke of in regard to Structuralism; the final configuration was dynamic, a sort of system to let the structure circulate, to make transformation possible. In other words, such an approach to structure transformed the spatial model of representation from a strictly geometrical system to a topological one, in which relations between objects took precedence over presupposed semantic qualities.

It is also along these lines that we can read the introduction of computation into the work of Leonardo and Laura Mosso. The computer became the perfect instrument to both manage the structural logic of the design and give it the political agency the two architects had been seeking through their notion of programmed architecture. The next section will analyse three paradigmatic projects in which the conceptual issues highlighted can be seen at work.  

Cittá Programmata, 1967-70. 

Cittá Programmata is one of the most iconic projects developed by Leonardo and Laura Mosso, a manifesto that encapsulates some of the key aspects of their work; that is, the potential for a structural approach to design to provide an environment for social and political self-determination. To implement this agenda of political and spatial self-determination, Leonardo and Laura introduced the computer, which represents the other radical aspect of the project. The computer played both an operational and a moral role in enabling the appropriation and transformation of the users’ habitat. Strictly speaking, the project consisted of a series of physical models and computer-generated drawings for an entire city and its possible transformations. The city was structured through a series of modules (or “voxels”) of 6m x 6m x 0.5m that could co-evolve with the life of the city and its inhabitants, resulting (as the models and drawings showed) in an interrupted field of variously extruded elements, each composed of structural elements variously transformed.
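The underlying logic – a field of modules in which the empty cell, the “zero” sign, is what makes reconfiguration possible – can be illustrated with a short sketch. This is purely an interpretive aid, not a reconstruction of the original 1960s programs (which the text does not reproduce); the grid size and toggle operation are assumptions for illustration.

```python
# Module dimensions given in the text: 6 m x 6 m x 0.5 m
MODULE = (6.0, 6.0, 0.5)

def make_field(nx: int, ny: int, nz: int):
    """A 3D field of cells: 0 = void (the 'zero' sign), 1 = built module.
    Initially all cells are empty, awaiting appropriation by inhabitants."""
    return [[[0 for _ in range(nz)] for _ in range(ny)] for _ in range(nx)]

def transform(field, x, y, z):
    """Inhabitants 'speak' the structure by toggling a module in or out;
    it is the presence of voids that allows the configuration to change."""
    field[x][y][z] = 1 - field[x][y][z]
    return field

field = make_field(4, 4, 8)
transform(field, 0, 0, 0)  # a module is added...
transform(field, 0, 0, 1)
transform(field, 0, 0, 1)  # ...and another is added and later removed
print(field[0][0][0], field[0][0][1])
```

Read this way, the drawings of variously extruded elements are snapshots of one possible state of such a field, with the computer managing the bookkeeping of transformations over time.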

The research for Cittá Programmata took place in a rich cultural environment, in which the work of Laura and Leonardo stood out for its original take on some of the topics that animated the architectural debate of the time. As mentioned, the post-war Italian scene was characterised by the growing importance of Structuralism in all aspects of culture. On the one hand, Structuralism guided the introduction of linguistics and semiotics as a general field of study, as well as their application to architectural and urban analysis. This line of inquiry sought to detect the underlying principles of architectural form, in itself and in its relation to its context. At the other end of the spectrum, a more pragmatic understanding of structural thinking was animating the debate on pre-fabrication and modular design, intended to renew the construction industry and fulfil the demand to modernise the Italian landscape. It is between these two main interpretations of the notion of structure in architecture that Cittá Programmata can be understood, as it proposes a different conception of language and structures.

Leonardo and Laura Mosso saw in the semiotic approach to architecture an excessive interest in meaning, both in its relation to the internal history of architecture and in context. Against the backdrop of semantic studies on architecture, Cittá Programmata proposed a more structural approach to language and its formalisation; a “phonological” system that would enable its users to ‘speak’ their collective mind through the groups of structures the architects provided. Pre-fabrication, on the other hand, was indeed a rich field of investigation – as mentioned, Leonardo and Laura Mosso had been in close contact with Giuseppe Ciribini. However, prefabrication was committed to a model of society that privileged economic values (through the minimisation of costs, for instance) over political, cultural and social ones. Indirectly, their critique of pre-fabrication was also a critique of the notion of programme (“programma edilizio”), understood as an excessively functional approach to design. The brief – the document through which to implement a building programme – fixed the use of structures or, at best, described a limited number of activities that a piece of architecture could house over a limited period of time. The formalisation of such an approach to programme usually resulted in a neutral outcome which favoured the design of a generic spatial container which, in principle, could adapt to future needs. Leonardo and Laura critiqued this view of design both on the basis of the vagueness of the mechanisms for programmatic determination (future activities may be impossible to predict in advance) as well as for the generic architectural response. In opposition to it they proposed a structural approach that did offer implementable choices (as opposed to programmatic vagueness) and therefore was not limited to regulating quantitative growth, but could also take into account the qualitative aspects of spatial structures. 
Finally, programme was also critiqued from a political point of view, as it was identified as the political instrument that guaranteed an asymmetrical distribution of power between users and designers.  

Cittá Programmata imagined an environment in which the relation between users and architects was not hierarchically organised, but rather more radically and horizontally distributed. Here, both the programmatic and semantic critique that animated the Mossos’ approach converged. The aim to generate an environment based on a horizontal distribution of power called into question the role that semiotics could play in designing structures. The analogy proposed is once again with language. As immaterial notions, language and architecture (understood as a body of knowledge) are inherently public; they exceed anyone’s ability to claim ownership of them or control them. Both the linguist and the architect can only play with the systems of signs constituting their disciplines in order to make them public and accessible. Contrary to the semiotic studies of architecture which concentrate on the internal mechanisms and references of architectural language, Leonardo and Laura Mosso proposed a rather more “extroverted” approach interested in opening architecture up and inviting users to participate in the creation of their own environment. The architect was “at the service” of architecture, rather than a custodian of the arcane mechanisms of architectural language. In a way, we can say that the position taken was reminiscent of Saussure’s distinction between langue and parole: whereas semiotic studies in architecture appear to privilege the importance of the langue, in Cittá Programmata, Leonardo and Laura Mosso worked to maintain a dynamic relation between the two terms of Saussurean categorisation: 

Architecture, understood in a traditional sense, cannot be a language; that is, it cannot speak by itself. Similarly, we cannot say that the work of a linguist on language is a language … Architecture is at [the] service of language … in the same sense that a language services the community of speakers when it is spoken; that is, when architecture becomes “a system of transformations” or possibilities, from which it is possible to generate infinite messages. 

Mosso and Mosso[14] 

It is in this context that the computer was introduced, both to support the management of the city and to simulate its future configurations. The actual machine utilised was a Univac 1108 owned by the Politecnico of Milan and programmed by Piero Sergio Rossatto – an engineer and programmer at Olivetti – with Arcangelo Compostella. The stunning drawings generated by the Univac (now part of the Centre Pompidou’s permanent collection) showed the possible growth patterns generated from an arbitrary string of signs placed at the centre of the drawing. Two parallel lines of pre-allocated units (*) and voids (-) constituted the starting input for the simulation, which could proceed either sequentially, on the basis of a probabilistic algorithm (fig.XX), or randomly (fig.XX). The process of algorithmic growth did not take place in a vacuum; rather, constraints could be programmed in, making growth sensitive to contextual information. 
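
No listing of the Univac program survives in the sources discussed here, so the following Python sketch is purely a hypothetical reconstruction of the kind of process described: a pattern of units (*) and voids (-) growing line by line, either probabilistically or at random. The function name `grow`, the seed string, and the persistence probability `p` are all assumptions introduced for illustration.

```python
import random

def grow(rows, steps, p=0.75, rnd=None):
    """Grow a pattern of units ('*') and voids ('-') line by line.

    A simple probabilistic rule: each new cell repeats the state of the
    cell above it with probability p, and flips it otherwise. Setting
    p = 0.5 makes the growth purely random.
    """
    rnd = rnd or random.Random(0)  # seeded for reproducibility
    for _ in range(steps):
        prev = rows[-1]
        rows.append("".join(
            c if rnd.random() < p else ("-" if c == "*" else "*")
            for c in prev
        ))
    return rows

# An arbitrary seed string, standing in for the signs placed at the
# centre of the drawing
pattern = grow(["--**--**--"], steps=5)
for line in pattern:
    print(line)
```

Contextual constraints of the kind the text mentions could be modelled by making `p` vary per cell; the sketch above deliberately leaves them out.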

Landscape, Structure and History (1980-1986)  

A second type of node Leonardo and Laura Mosso had been working on belonged to their kinetic, self-managed, elastic universal structures (Strutture autogestibili e complessizzabili a giunto universale elastico). Since the beginning of the 1970s, as part of their research on the use of different types of nodes to articulate transformations in physical structures, they had been testing this particular type of node at different scales and in different contexts. The research started with the academic work that Leonardo carried out with his students at the Politecnico in Turin, then continued through commissions such as the “Red Cloud” (Nuvola Rossa), an installation completed in Carignano Palace in 1975 in which these nodes found one of their most convincing and poetically powerful applications. This large piece consisted of a complex structure made up of individual elements connected through elastic joints, which allowed the architects to build an undulating mesh suspended between the visitors and the frescos of the palace. These elastic structures were tested at different scales: for instance, between the end of the 1970s and the beginning of the 1980s, Laura and Leonardo would put their kinetic quality to the test by using them as props accompanying the movement of the bodies of contemporary dancers, both in their work with the Conservatorio G. Verdi in Turin (1978) and the performance staged in Martina Franca (1980). It is, however, the territorial scale which is of particular interest in this discussion, since it highlights an original understanding of how structures can perform algorithmically and because of the unusually large scale of this research.  

Here, particular reference is made to the research carried out between 1980 and 1981 under the broad agenda of “methodological work aiming at devising a system of signs to program both at the level of the territory and the city”.[15] The results of this methodological analysis of territorial structures would also inform a subsequent research project and exhibition titled “Landscape, Structure, and History”,[16] which tested their structural approach to territory on the local landscape of Piedmont, its rural cultures, and their relation with their surroundings, with a view to devising a strategy for preservation. Perhaps it might appear unusual for avant-garde architects to dedicate their research to the rural, historically-layered territory of Piedmont. In fact, local forward-thinking architects and engineers had already focused on vernacular architectural expressions in the local countryside: Carlo Mollino extensively studied and recorded examples of Alpine vernacular architecture in Valle D’Aosta, and Giuseppe Ciribini – whose work on the industrialisation of construction has already been mentioned – also paid attention to the spontaneous architecture of Alpine and pre-Alpine territories. Some of these interests in rural and vernacular architecture were gathered together by another Torinese architect, Giuseppe Pagano, in his famous exhibition “Continuity – Modernity”, in 1936, for the 6th Triennale in Milan.  

The Mossos’ research on territorial structures consisted of both drawings and physical models of specific areas of Piedmont (Canavese and Carignanese). The work mapped and recorded the landscape of Piedmont by positioning a series of kinetic structures over a map of the existing territory. The structures consisted of a series of elements connected through elastic, kinetic nodes that allowed each element complete freedom of rotation around each vertex. The final configuration of each structure emerged from the mediation between their internal properties (length of the elements, arrangement, type of nodes) and the cartographic representation of the landscape. The drawings took this relationship to more radical conclusions: the landscape was further abstracted and re-coded through a structural approach which adapted to different contexts. Rather than an image of a superstructure, the re-codification of the landscape through models and drawings struck a complex balance between the algorithmic approach and the context.  

In this particular project, structures are understood as organisational principles rather than physical constructions. Earlier, we spoke of an algorithmic use of structural thinking, a quick definition that requires unpacking. An algorithm is a set of instructions that, once applied to a set of input data, will perform a finite number of operations to return an output. Regardless of the complexity of the operations performed, an algorithm recodes the input data into a new set of data. Chomsky’s generative grammar, for instance, could be seen as a recursive (continuous) series of algorithms that rewrites any given statement of a natural language to produce new linguistic statements. The superimposition of Laura and Leonardo’s structures on a map of the Piedmont countryside operated in a similar fashion and, therefore, could be interpreted as an algorithmic recoding of the territory. The input data was constituted by the information recorded in the cartographic representations of the landscape, whereas the kinetic structures acted as analogue algorithms that recoded the input data according to the vast (yet finite) number of configurations allowed by their physical characteristics (length and number of members, type of joints). In short, the physical structures deployed rewrote the landscape according to a precise set of rules; more poetically, we can say that the elastic node structure allowed the landscape to speak in the language of the structures superimposed onto it; an image that Laura Mosso also evoked when she wrote about developing methods to “make the structures whistle”. 

Contrary to stricter interpretations of Structuralism, the type of algorithmic approach proposed here was not merely deduced from internal, formal rules (that is, the physical constraints set by the elastic nodes); rather it emerged from a more iterative, open relationship with the context (abstracted through cartographic representations). The results of the process set up were particularly legible in the physical models: the kinetic structures made up of interconnected springs were laid out on the map to return a ‘structural re-reading’ of the landscape. A new, structural image of the territory emerged from the interaction between nodes and territory.  

The research on territories that Laura and Leonardo Mosso completed allows us to make a series of considerations on these algorithmic operations, their formal qualities, and the implications they give rise to. First, through a structural, algorithmic approach to territory, the research rejects distinctions between natural and artificial in favour of a more holistic approach to landscape – and yet, one describable through a set of finite operations. The constraints embodied in the physical structures do not decisively distinguish between artificial and natural, symbolic and productive, and thus support Leonardo and Laura Mosso’s call for the kind of expanded notion of ecology they had been advocating for, both in projects and publications (through, for instance, the publication titled La Nuova Ecologia). The structure is the symbolic device that catalogues and organises the whole of the territory (here understood as superseding dichotomies such as urban/rural, artificial/natural), establishing principles for its preservation and transformation. Similarly, algorithmic re-writing provides a diachronic reading of the territory that is re-organised along structural rather than chronological vectors. The different nodes of the elastic structures are positioned on the map to establish connections between artefacts built in different times in order to give rise to new relations between them. Finally, there is the function performed by the elastic structures as analogue algorithms. We have already seen how an algorithm can be understood as a form of rewriting and transformation of an existing condition (input data). The types of operations performed by an algorithm are always precise (determined by the rules programmed in the algorithm), executed in their entirety (the algorithm goes through all the steps scripted to return an output), and yet partial, as the algorithm can only survey a dataset according to the set of rules that form the algorithm itself. 
The constraints inbuilt in the elastic kinetic nodes allow them to only perform a vast, but finite set of movements; that is, only a subset of all the signs contained in the maps of Piedmont can be computed by the physical structure-algorithms. In short, an algorithm generates a specific representation of the object it is applied to.  

To better grasp this last point, we can draw an analogy between real objects (such as buildings) and their orthographic representation. For instance, a section through a building can only return a partial image of the object it investigates, and yet how a section is drawn follows precise and rigorous rules that determine what and how the building will be captured in the section. But the section is a sign-object, not a building; it elicits further manipulations by either applying different sets of criteria (e.g., by concentrating on the structural, programmatic, material qualities of the building) or by changing the very parameters that generated it (changing the position of the section plane or the conventions applied). The approach developed for the Piedmont territory by Leonardo and Laura Mosso makes aspects of this landscape intelligible through the production of new signs which, in turn, make it amenable to further manipulations. It is important to notice that all operations performed by Laura and Leonardo are performed on a cartographic representation of the territory; photographs and other cultural aspects of the areas such as the names of places are complementary, rather than primary, information. Cartography is itself a coded, notational (rather than mimetic) representation of the territory. As a medium it therefore lends itself to the operations of re-coding and re-writing, since it is already a semiotic system; on the other hand, it acts as a recipient of the new codification of the landscape generated through a structural reading. 

Finally, the structure-algorithm becomes a marker of change, as the instrument through which modifications, and, in general, any metamorphic transformation of the territory can be foregrounded, read, and made tractable in order to preserve it or alter it. The research developed by Laura and Leonardo Mosso shows that a structural approach through algorithmic thinking should not only be confined to new, pristine domains, but can also offer innovative ways to interpret and intervene in historical contexts. The last project discussed – the proposal for the S. Ottavio block in the historical centre of Turin – will further reinforce this point.  

S. Ottavio Block, Turin, 1980 

The commission for a study of the block located in the historic centre of Turin was received in 1978 and became an important, yet entirely forgotten chapter in the story of both Leonardo and Laura Mosso’s production and the integration of digital technologies in architecture. On the one hand, the brief for the project was a rather common one for Italian architects, whose practice often confronted (and still confronts) historical artefacts. Leonardo and Laura, however, saw in this commission an opportunity to advance their research on structures as well as on the use of computational tools. For purposes of simplicity, we can artificially divide the project between the proposed physical interventions and the immaterial, data-driven ones. 

The physical restoration of the block consisted of a series of more traditional interventions to reinforce the old brick walls, as well as the insertion of new levels to convert the existing spaces into inhabitable housing units. The new structures in steel and wood were elegantly laid out at a 45-degree angle, to mark a clear distinction between pre-existing and new elements. The type of node deployed in this instance was also a dynamic one; however, the only permissible movement was sliding along one of the orthogonal directions of the structure. Though the dynamics of the nodes were limited (in comparison to the conceptual experiments at territorial scale), they allowed users to alter and self-organise their habitat. By deploying the same type of node at different scales and through different materials (aluminium, wood, and plexiglass), users could appropriate the environment both at the architectural and interior scale.  

Perhaps the most radical proposal of this research was the organisation of the conceptual side of the project. A computerised system was to be set up to monitor and maintain the block. A proto-digital twin, the system would map all the elements of the project and generate a database in order for both individual users and the municipality to control, repair and maintain the whole block. For the programming of the whole system, Piero Sergio Rossatto – who had worked with Laura and Leonardo on the Cittá Programmata – was consulted. The spatial representation of the block in the digital model followed the logic of voxels: a three-dimensional grid of individual cubes that provided a system of coordinates to locate every element of the project, existing or proposed, architectural or infrastructural. In Rossatto’s scheme, the project would be surveyed starting from the ground level (z=0 in the digital model) and gradually moving towards the roof by increasing the z-value in the voxel grid. Every intersection between the voxel grid and an element of the project would be recorded.  
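
Rossatto's scheme can be sketched in Python as a minimal, purely illustrative reconstruction: a sweep through a voxel grid from the ground level upwards, recording every cell occupied by an element of the project. The grid dimensions, the element names, and the set-based occupancy test are all assumptions standing in for the real geometric survey, which is not documented in the sources.

```python
def survey(elements, nx, ny, nz):
    """Sweep an nx * ny * nz voxel grid from z = 0 (ground) towards
    the roof, recording every voxel occupied by an element.

    `elements` maps an element name to the set of (x, y, z) cells it
    occupies -- a stand-in for a real intersection test between the
    grid and the project's geometry.
    """
    record = {}
    for z in range(nz):            # start at ground level, move up
        for y in range(ny):
            for x in range(nx):
                for name, cells in elements.items():
                    if (x, y, z) in cells:
                        record.setdefault(name, []).append((x, y, z))
    return record

# Hypothetical elements: a wall occupying a column of cells, and a
# floor slab one level up
elements = {
    "wall": {(0, 0, z) for z in range(3)},
    "slab": {(x, y, 1) for x in range(2) for y in range(2)},
}
db = survey(elements, nx=2, ny=2, nz=3)
```

The resulting `db` is the kind of per-element database the text describes, from which both users and the municipality could locate any component of the block for repair and maintenance.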

Although the project was not well received by the local administration, which could not fully grasp the innovative approach and eventually shied away from a unique opportunity to radically rethink the relation between digital technologies and historical artefacts, it illustrated a different, complementary facet of Leonardo and Laura Mosso’s approach to algorithmic form. 

As mentioned, the project applied digital technologies to pre-existing architectural artefacts protected by preservation laws. Whereas digital technologies are invariably understood as the instrument to deliver the “new” or the “radically different”, or even to make a tabula rasa of pre-existing notions, this project showed a more nuanced, and yet still radical side of digital technologies, which could coexist with and complement the delicate pattern of a historical city.  

The structural approach, which continuously developed throughout several decades of research, here resulted in an abstract grid – a field of voxels, to be precise – that acted as a monitoring system allowing users to appropriate and control their own habitat. In the course of their research, Leonardo and Laura developed a physical model of the virtual voxel field that did not include any of the physical structures designed. The model possessed a very strong sculptural quality, but, most importantly, also showed the power of the algorithmic approach they had developed. On the one hand (and similarly to the experiments carried out in coding the Piedmont territory), the logic of the structure not only enabled its own transformation, but also determined its aesthetic qualities. The algorithmic logic guiding its own re-writing (in this case represented by the rhythm of the voxel field) returned a new type of form; an algorithmic form. As the model clearly showed, the logic of the voxel field implied a space without discontinuities or interruptions; saturated with data, the model was “all full” (as Andrea Branzi would have it), a solid block of data. As such, the research and proposal for the S. Ottavio block represents one of the earliest attempts to think of design straddling physical and digital environments – a concept that could only be implemented through a structural approach to design whose robustness would allow it to extend to immaterial representations of space.  

Conclusions 

The work of Leonardo and Laura Mosso not only constitutes an excellent example of very early work with computers in architecture, but also provides a rich framework through which to problematise the issue of algorithmic form. The close relationship between design, philosophy, technology and politics not only forms a complex and rich agenda, but also expands the use of computers in design well beyond a functional focus on increasing efficiency and profits. Perhaps, this is one of the aspects of their work that still resonates with contemporary research on algorithmic design: the complex relationship between ideas and techniques, and the use of computation as an instrument for change. Computation was more than a vehicle to implement their radical design agenda, it was also tasked with implementing specific ethical values by orchestrating the interaction between architects, users, and built environment. In many ways, computation, and the algorithmic forms it engendered, was utilised by the Mossos to perform one of its original and most enduring tasks: to logically order things and, therefore, to conjure up an image of a future society.  

In memory of Leonardo Mosso 1926-2020.  

References 

[1] L. Mosso & L. Mosso, (1972). “Self-generation of form and the new ecology”. In Ekistics – Urban Design: The people’s use of urban space, vol.34, no.204, pp.316-322. 

[2] Deleuze’s text on Structuralism, however, was only published in 1971, so the connection between the two architects and the French philosopher is coincidental.  

[3] U. Eco, The Open Work, Translated by A. Cancogni. 1st Italian edition published in 1962. (Cambridge, Mass: Harvard University Press, 1989). 

[4] R. Bottazzi, Digital Architecture Beyond Computers: Fragments of a Cultural History of Computational Design (London: Bloomsbury Visuals, 2018). 

[5] L. Mosso & L. Mosso, “Computers and Human Research: Programming and self-Management of Form”, A Little-Known Story about a Movement, a Magazine, and the Computer’s Arrival in Art: New Tendencies and Bit International 1961-1973, edited by M. Rosen. (Karlsruhe, Germany: ZKM/Center for Art and Media; Cambridge, MA: MIT Press, 2011) 427-431. 

[6] G. Deleuze, “How Do We Recognize Structuralism?”, Desert Islands and Other Texts 1953-1974, Ed. D. Lapoujade, transl. by M. Taormina. (Los Angeles, CA: Semiotexte, 2004). Originally published in F. Chatelet (ed.) Histoire de la philosophie vol. VIII: Le XXe Siècle. (Paris: Hachette, 1972), 299-335. 

[7] J. Piaget, Structuralism. Translated and edited by C. Maschler. (London: Routledge and Kegan Paul, 1971; 1st edition 1968). 

[8] E. von Glasersfeld, “The Cybernetic Insights of Jean Piaget”, Cybernetics & Systems, 30, 2 (1999) 105-112. 

[9] J. Piaget, The Construction of Reality in the Child (New York: Basic Books, 1954; 1st edition Neuchâtel, Switzerland: Delachaux et Niestlé, 1937). 

[10] G. Deleuze, “How Do We Recognize Structuralism?”, Desert Islands and Other Texts 1953-1974, Ed. D. Lapoujade, transl. by M. Taormina. (Los Angeles, CA: Semiotexte, 2004). Originally published in F. Chatelet (ed.) Histoire de la philosophie vol. VIII: Le XXe Siècle. (Paris: Hachette, 1972), 173. 

[11] Ibid., 173 

[12] Ibid., 184 

[13] Ibid., 176 

[14] L. Mosso & L. Mosso, “Architettura Programmata e Linguaggio”, La Sfida Elettronica: realtá e prospettive dell’uso del computer in architettura (Bologna: Fiere di Bologna, 1969) 130-137. 

[15] L. Baccaglioni, E. Del Canto & L. Mosso, Leonardo Mosso, architettura e pensiero logico. Catalogue to the exhibition held at Casa del Mantegna, Mantua (1981). 

[16] L. Castagno & L. Mosso, ed. Paesaggio, struttura e storia: itinerari dell’architettura e del paesaggio nei centri storici della Provincia di Torino Canavese e Carignanese. (Turin: Provincia di Torino, Assessorato alla Cultura, Turismo e Sport, 1986). 

disk turned steel. 1965
HANS ULRICH OBRIST Interview with GETULIO ALVIANI 
discovery of light, GETULIO ALVIANI, HANS ULRICH OBRIST, immersive, raisonnée, structures
Hans Ulrich Obrist

hans-ulrich.obrist@serpentinegalleries.org

10 April 2015, Milan, Miartalks

First edited transcription, Paola Nicolin 

Hans Ulrich Obrist: I would like to start right from the beginning. You told me about your uncle, but above all about the importance that Leonardo Da Vinci has always had in your work … 

Getulio Alviani: As a child, in my first years of school in Udine, the fair of Santa Caterina was held, where there were stalls with books and other things; here I came across two volumes, which I bought with the few cents I had then: one on Beato Angelico and one on Leonardo Da Vinci. I lived in the countryside back then and therefore I loved nature very much. I loved seeing birds, crickets, moles, foxes, and in this book by Leonardo there was the “bestiary.” For me, it was great, because I thought it was wonderful that a man knew all those things that I experienced daily, but that I knew absolutely nothing about. So, I fell in love with Leonardo Da Vinci, and studied his drawings in small format, because at the time there were no books with colour photographs or with enlargements. I remember a surprising thing that I always have in front of my eyes, which is how he had drawn the wind. For me, thinking that the wind could be drawn was incredible. 

From the early years of my life, I lived with two uncles, one of whom was of Austrian origin and the other born on the border with Yugoslavia. They were both over 50 years older than me, so I was always alone and surrounded only by everyday things, plants, and animals. There were those who worked as farmers, doctors, streetcleaners, carpenters … I saw them all and I wondered, for example, “who knows why someone is a carpenter?”. … I got to the point where I asked myself, “Why do I live? What am I capable of doing?” I realized then that I loved doing things with my hands, and I wanted to see. Then I began to get interested in this, and to discover, above all, that all I had in my mind were not images, but “impressions” (for example, I now look at all of you, I see you, but tomorrow I will probably not remember your faces; what I will remember is the feeling I felt, whether there was empathy or not).  

With my brain I see things; for this reason, I became interested in the world of seeing and doing, and I started by going to see, for example, how an old sculptor near my house made the plaster casts for the statues destined for the graves in the cemetery. For me, seeing was the fundamental thing: seeing and knowing – for example, that plaster becomes hot with water, that if clay dries up, it breaks – and so I began to understand what the world of doing is. I started living always like this – until I did not want to do anything anymore [he laughs], like today, where everything is distorted and exploited, because torturers and cops have taken power. 

HUO: This idea of making is very clear and we will return to it later, talking about your inventions with aluminium. But I wanted to start by imagining building your catalogue raisonné: looking, for example, at the publications of your work, you can see that they often start with the geometric line drawings of the 1950s, and you have mentioned before the constant presence of geometry in your work. Can you tell me about these early works, these drawings that arise from the curiosity of seeing? 

GA: Mine was a series of observations, in general, but always a bit shifted. As a boy, I spent a lot of time in the studio of artisans, and then of architects – much older than me – and I went to take measurements with them and did all those things that intrigued a boy. It sometimes happened that some of them went to paint in the countryside, and painted horses, for example – even if they were actually slightly futuristic horses, like those of Marcello d’Olivo; or of Mimmo Biasi, who instead had a strong interest in vegetables, plants, which then underwent a process of abstraction. 

I have to admit that I did not know what to do, because I did not want to paint what was already there and looked perfect as it was. I wanted to catch something like the threads of light in the sky; I thought that the energy was passing in there – and I wondered how it was able to pass, because I could not see it. Then, at the time, there were the first telephone lines, so I wondered “maybe that’s how rumour travels, will the message stay the same, or be changed, and in what way?” For me, there was mystery in all this: I liked that even more, the mystery, trying to understand these things. Then I became interested in these free geometries, compositions of threads of light that crossed, intersected, overlapped – there were dozens of images in the skies of the countryside.  

However, after doing some curious work on the matter, I quit, because I thought I had exhausted the subject. I have never done things out of duty; I have done them as a game, because I have always had the pleasure of doing, of discovering, of seeing. They were, therefore, limited drawings, since I was about twenty years old at the time and everything I did was for pure pleasure. For example, in that surface [he indicates a painting from the catalogue] there is a black, but when it is hit by the light it becomes white, whiter than any other white, and this was for the light. For me, these were discoveries, thinking that the white which comes out of black is whiter than “true white.” They were conversations with matter, simple non-transcendental questions… and slowly I began to live like this.  

Figure 1 – reflection relief with orthogonal incidence, steel. 1967, 5x480x960 cm, modules 5x80x80 cm

HUO: And after this phase come the “structures.” In this, we see a lot of the world of productive work, more than the world of art. Can you tell me about this epiphany that led you to build the structures, and how you discovered aluminium? 

GA: I had participated in a competition promoted by an electrical material company in Brescia (AVE – ed.) and I had designed a valve which, compared to the previous ones, was very innovative. The prize, announced by Domus, was awarded to the architecture studio, but they told me that whoever designed the valve could go to work for the company that organized the competition, to follow the production phase. So I went to Vestone (a town in the province of Brescia – ed.), where the factory was based, and there I discovered the world of more “committed” work. Because until then, for me, the world had been one of “craftsmanship”; there instead I learned a world of “doing”, with large machines, industrial materials, and many people involved. And there among the little things, I discovered new worlds, from melamine to silver contacts, from castings to presses – because I took care of both the execution of this first project of mine, and took on the role of graphic designer for the company’s product catalogues. In this context, I found myself for the first time handling aluminium pieces coloured green, red, and yellow – which were basically mirrors. Having seen these perfect mirrors in metal was a surprising innovation. I said to myself, “but how does this mirror work?” Of course, I knew why the mirror reflected, but never had I thought about the fact that a mirror might not be able to break, or even bend.  

Then, in one of these small workshops that I attended in the province of Udine, I went to dig with some cutters under this mirror, to see what was there. Initially it was all black, with a strong smell of sulphur, but I persisted again, and then a blinding light came out, stronger than sunlight! And from there, I understood how important light was, and that this material could accelerate light, just as a lens causes the sun’s rays to burn the ground.  

HUO: You always have a lens and a measuring tape with you, right? 

GA: I have two friends, who are the greatest friends I’ve ever had in life, I always have them with me, and they are the lens and the ruler. They have never betrayed me, they are always calm, safe and make no mistake.  

HUO: This is now where we can talk about the “discovery of light”. The interesting thing is that this research does not initially enter the world of art in Italy, but instead makes a first unexpected appearance passing through Ljubljana and Zagreb. I’m interested in this passage, because when I was a student I met Julije Knifer in Sète, France, where the artist had retired in the 90s, and he talked to me a lot about the Gorgona. You, Getulio Alviani, were there, at the moment of the birth of that movement, so I would like to understand how this meeting of extraordinary characters took place. 

GA: I was very attracted to Eastern [European] countries, because I have a mania for difficult things, those things that others don’t do. Everyone can do the easy things. Going to Paris, for example, was very simple, but going to Yugoslavia was quite another story. Everything was different there, even the smell of the air.  

My motivation was partly due to the fact that these countries were representatives of Central Europe, the land that my uncle, who was born in Austria, came from; on the other hand, I was fascinated by this completely different world, then beyond the “curtain” – for example, to get a visa took months, you had to have valid reasons (which in my case were linked to family reasons, since my mother and my aunt were born in places that became Yugoslavia). The roads were different, the people as well … in short, Yugoslavia at the time was another world. Furthermore, I must admit that unlike all other parts of the world, where there was a certain atmosphere of joy and lightness, Yugoslavia was a more introverted, more reflective, more intimate, and poorer land. I like poverty a lot, because in poverty many things can be solved; while in wealth nothing is ever solved – contrary to what today’s rulers think, who aim at riches, their riches, to pretend to solve problems. Problems are solved when there is simplicity and brains, and things are done for the sake of others; while today there is a lot of imbecility combined with wickedness that only causes abuse.  

So, I landed in Slovenia. I had made two small surfaces of milled aluminium, and placed them on a radiator in a small workshop, where they were noticed by Zoran Krzisnik, who came to this workshop to have furniture made. At the time, he was the director of the GAM in Ljubljana – which was very advanced by international standards; Ljubljana was the first city beyond the Iron Curtain to want to do innovative things, while elsewhere the situation was very stale. So Zoran Krzisnik saw these two little things, two small plates in fact, and asked me what they were. I wasn’t sure what to tell him, so I told him how I had made them. He asked me if it was possible to make some larger ones, about one metre by one metre, and said that if I could, he would hold a small exhibition in a small gallery he had in Ljubljana. It was called Mala Galerija, which means precisely that: small gallery. He invited me to visit it, and then organized an exhibition. And some time later, in 1961, I made this presentation, and then learned that in the meantime Krzisnik had curated exhibitions by Zoran Mušič, Giuseppe Santomaso, artists from the Ecole de Paris, and many others. Since then, these works of mine have allowed me to live in Eastern Europe for some time. 

I have continued to have a great love for crossing the border, going beyond: Slovakia, Poland, Lithuania, up to Russia. I learned from Krzisnik that at that time, in Zagreb, there were other young people exhibiting things similar to mine. So I went to Zagreb and set out to find out what was happening, and whether the work was like mine. But at the Gradska Galerija I found very different pieces; they had a spirit similar to mine, yet were completely different things, and so I saw the work of Almir Mavignier, Julio Le Parc, François Morellet, Marc Adrian, Ivan Picelj, and Julije Knifer. It was the “New Trends” exhibition, organized for the first time by an artist, Almir Mavignier. There, the whole world opened up for me. Krzisnik was organizing the Biennale of graphics at the time, which was at the forefront of the world of graphics, and therefore many scholars – such as Umbro Apollonio, Giulio Carlo Argan and many others – arrived in Ljubljana. In Udine that would never have happened – the director of the Tate, or of the Moscow museum, or Umberto Eco, arriving. Instead, I met everyone there, in Ljubljana, in a moment, and that world became my second home.  

It was in this context that a young person was listened to for what he was capable of doing, which I thought could never have happened in Italy. For example, the Studentski Centar in Zagreb [The Student Center] was a large experimental centre run by artists and critics, directed by Brano Horwett. There, they invited me to create silk screen works, and so I started to print them – not even knowing what they were exactly, but obtaining surprising results of crossed, overturned, superimposed, negativized, positivized lines. Then, when I came to Milan (where the headquarters of the factory I worked for were) I was able to show this kind of research to Lucio Fontana, and then to Paolo Scheggi, and they too began to work with this technique. Then Brano Horwett came to the Galleria del Deposito to develop all these graphic techniques, which in Italy had never even been thought to exist. What engaged us was the fact that serigraphy could be done in series, and everyone – Max Bill, Richard Paul Lohse, Konrad Wachsmann, Victor Vasarely – explored this field, which was born from [the East]. And this is interesting.  

Figure 2 – Cube with graphic texture, opalescent PVC sheets, silkscreen and light, 1964–69, 330 x 330 x 300 cm

HUO: One of the important aspects in interviews is that of “protesting the forgetfulness that exists in the world”, and there is a character who is rarely talked about today but who is very important: the person who set up the exhibition. The exhibition itself is often forgotten, there is an amnesia in the art world about it. I would like it if you told us a little about Edo Kovačević and what you learned from him. 

GA: I learned everything from him. He was a figurative painter who took care of the installations in the Gradska Galerija in Zagreb; before then I had never thought that my works could be exhibited like this, suspended, supported, and so on. I thought they were simply “squares”. In fact, when I then held an exhibition of mine at Gradska, my works were about twenty “little things”, but he turned them into an eight-room exhibition, making them extraordinary – not through “effects”, as might happen today by focusing lights on them, but simply by placing one work on a background, one on a base, one as a small backdrop: and so with three surfaces, a room was set up.  

Kovačević was very simple and creative, I learned a lot from him – and, in fact, I have never had a work hung on my walls at home. I keep them in the garage, because the works have to be exhibited for a short time, otherwise the eye gets used to them and you can’t see them anymore.  

I look at the works for a short time and then put them aside, to then retrieve them months later and try to understand if they are still valid or not. My impression is that the works must be done for exhibitions, so that they communicate with each other: one must see number one, number two, and understand what they mean as one line. This is what I still do now. On the other hand, I have set up more exhibitions of my colleagues’ work than of mine, because in this way I really discover the works, what they are and what they represent. 

I believe that the works must be kept in the head. I have a collection of works myself, but I never see them. I got them all by making exchanges: Fontana to Bill, Lohse, Albers, Mansurof, to Nelson, Kelly or Anuszkiewicz…  

The first exchange was in the early sixties, with Fontana: he asked me for something, I brought it to him and he said to me: “What do you want [for it]?” and I replied that I did not want anything, but timidly I proposed that he give me one of his works – and so it happened immediately. From then, I received everything through exchange. This then also enabled me to hold exhibitions of those artists, because I had so many works in hand: everything was possible because I had the works, avoiding transport and all the tasks required to make an exhibition that back then seemed insurmountable.  

HUO: All of this leads to your work as a curator. Andrea Bellini, who has been talking to me about your work for many years and is the origin of my research, was insistent that we talk about you as a curator. You are “the” curator of programmed art, and you have also written a lot about your colleagues, so it would be interesting if, after Ljubljana and Zagreb, we now arrive in Italy, with the N Group, and Programmed Art.  

GA: Immediately after the exhibition with Zoran Krzisnik in that small gallery, he asked me to curate a selection of works by our group of artists for the Ljubljana Biennale. So I began to collect works by those I esteemed – because otherwise I would not have had any interest: I wondered whether the artist should not exist at all, but only the work; whether it had, as it must have, a meaning and a dignity of its own to exist. And so I curated the Ljubljana Biennale. Later, I spent many years in Venezuela, directing the Jesús Soto Museum.  

HUO: Soto told me about this abandoned museum in Ciudad Bolivar and I would be interested in understanding how an artist experiences a museum in a curatorial sense. What is your vision of that today? 

GA: Exhibitions were held, and in this way I was able to see the cities and meet those who, perhaps because of their age, would not be able to do it in the future. There was always someone who hosted me. Jesús Rafael Soto was a close friend of mine, I often went to stay with him in Paris, or with his fellow Venezuelan, Otero. One day, he told me that he intended to build a large museum, and asked me to collaborate with him by gathering all the artist friends I could. So I did: from Sérgio de Camargo to Toni Costa, to Lucio Fontana, Gianni Colombo and many other good artists. 

I could not go to the inauguration, but then, after a few years, Soto called me and told me that his museum was in ruins: “se lo comiendo el diablo” [the devil is eating it], and asked me to go and see the situation, and give him a hand restoring it. So, during a Holy Week in the 1980s, I went there and saw this museum – designed by Carlos Raúl Villanueva, a good architect and friend of Le Corbusier. The museum consisted of a series of huge pavilions, located in the middle of the savannah. Unfortunately, the situation was terrible; there were bats and snakes inside, and the works had been ruined and were mouldy on the walls. There were about forty people who worked there: photographers, guides … and so it was that I lived in Venezuela for four or five years and worked to completely renovate it. 

HUO: Regarding Soto, and other Venezuelan artists who work a lot on the kinetic, there is one thing we haven’t talked about yet, and that is your surfaces. At a certain point, the series of “vibrating texture” surfaces begins. In a conversation with Giacinto di Pietrantonio, you said that it would be nicer to think that “neon has chosen Flavin, mirrors Pistoletto, and aluminium has chosen me”. Why did you switch from aluminium to vibrated surfaces? 

GA: Actually, after having been the art director of an aluminium factory, I had perfect, wonderful machinery at my disposal. I’ve never had a studio; I worked where they were: if, in a particular place, there was a nice factory that produced a nice material, I went there and did something. And so, being in the aluminium industry, I had these perfect tools at my disposal. That’s how it all started. I must admit that I have always done everything by myself, because at the time everything was possible: I was alone in a factory of thousands of square metres, I was alone and I was happy; I liked doing. Today, all of this would be impossible, but back then it was natural to do whatever your brain told you to do.  

HUO: In the book New trends: Notes and memories of kinetic art by a witness and protagonist, you write that the artist “is not the cult of personality, protagonism, commercialization, private galleries, elite art, fetishism, the unique work, the social purpose, the interpretation, the metaphor, the mystification, the strategy […]”. In another text I found you say that “to be called an artist is an offense, one could always speak of artifice, of something new, but I think it is more correct to speak of a plastic creator, a designer, a student of perceptual problems, an artist is synonymous of mystifier”. I would like you to tell me about your “expanded notion of the arts”… 

GA: Since I’m a physicist, I don’t like telling stories. [I don’t like] the word “creator” … lies are “created”; they are very easy to create. To be able to say things, they ought to be verifiable, tangible. If someone tells me “on your surface the light behaves like this”, you can go and see it, and you have the opportunity to see that it is true that it behaves like this. That’s not like someone who throws a stain on the ground, and then that becomes, say, “the intolerability of social life”. They say imagined things! 

Therefore, I love things, and I care that they have the dignity to exist; as for me, I have nothing to do with it; they must have the dignity of existing. Nobody knows who invented reinforced concrete, paper, the first bricks; nobody knows anything, but these objects exist and have been made. Everything has been done, things remain and, fortunately, people leave.  

One of my favourite things is to exhibit colleagues who are better than me; partly out of gratitude, because in this way I make them continue to live, and partly because in this way they have no other influences. For example, when I started collaborating with the museum in Bratislava, an exhibition relationship that lasted about ten years, I exhibited only artists who are gone: Sonia Delaunay, Josef Albers, Lucio Fontana, Bruno Munari, Olle Baertling, Max Bill, all of whom represented something fundamental in the art world through art, and not through words or stories. The stories may be right, but they weaken the function of the eye: we receive 90% of our information through the eye; if I had to put into words what I take in at a glance, I would spend years saying nothing, telling unlikely stories. On the contrary, in a split second, I see everything, and everything is verifiable. One of my passions is synthesis, so it is obvious that I love the eyes. For me the eyes are everything. 

Figure 3 – Disk, turned steel, 1965

HUO: This is beautiful and could already be a conclusion, but I still have some urgent questions. In fact, when you talk about the synthesis of art, you make me think of Max Bill… 

GA: Max Bill has been a lot, everything, to me. We often saw each other in Zurich or Zumikon or in other parts of the world. We didn’t talk [much], we communicated with synthetic words. But when we talked, the topics were quite another thing [compared to art]. We telephoned on Sundays. I always knew, ten minutes before our call, that I was dumber than I would be afterwards – with regards to everything we talked about, his turtles, the roads, the travels, everything. Because whatever Bill told me, he opened my brain, like Vicks VapoRub. He was my base, his was a total critical force, first of all towards himself: [he believed that] something that was not true had no right to exist.  

HUO: And like Max Bill, who was an artist, architect, and educator with the Ulm school, you too have continued to be a designer, architect… 

GA: Yes, but never as a profession. I have done sets, some residences, a boat, I have dealt with urban planning; but I am not a craftsman, much less able to reap any benefits that were not mental. 

HUO: You have also done graphic design, for example creating [work for] Flash Art. 

GA: [Giancarlo] Politi came to me and showed me a copy of Flash Art, which was innovative because at the time there was only Selearte, a magazine that devoted very little space to modern art, just a few quotes. Giancarlo, on the other hand, had made this magazine, which in the first issue had the title in “football pools” [font]; so, from the second issue, I redesigned the logo for him, all in lowercase Helvetica. Throughout my life, I have made many posters, layouts, catalogues, everything that had to do with graphics.  

HUO: You started making more “immersive” installations, such as those with mirrors, and many environments, so … in a certain sense architecture and setting are synthesised in your work.  

GA: Yes. For example, in this environment [he points to a photo from the book], you literally enter the middle of the colours, but in reality they are not there, the only colours are the fixed ones of the walls. By touching the metal plates that reflect the colours, yellow becomes black, red becomes yellow and everything is mixed and the resulting images are unrepeatable. There are no engines, because I’ve never loved engines. Instead, I love that the brain sets itself in motion. 

HUO: There is also the “tunnel” which is very nice, can you tell me about this job? 

GA: Do you know, I saw this work for the first time a couple of years ago, even though it was made about twenty years ago. I went to the place with Mario Pieroni and Giacinto Di Pietrantonio and they told me that they had a series of abandoned spaces. They asked me what I would do with them, and I replied that I would make lines. I made a drawing. They then had a guy make it, who was pretty good at it.  

HUO: You told me before the conference that it’s also important to have fun, and today many artists work on games. You invented a game, in 1964, using aluminium plates, didn’t you? 

GA: It’s a very simple thing. There are two aluminium plates that rest on a surface and then there are two discs which, by reflecting, multiply. Unpredictable images can be generated, but only with the hands. And we are always surprised by what we ourselves do.  

HUO: In my interviews, I often ask what the unrealized project is. There are many categories of unrealized projects, those that are too big, utopian, censored, too expensive… which one is yours? 

GA: I must admit that my restlessness is always animated by what surrounds me. I have never had a studio, much less an assistant, as Karl Gerstner or Enzo Mari or Victor Vasarely or Julio Le Parc or François Morellet may have … although very good, they all have had and have real businesses, but I did everything by myself – and above all, I did it … for years, and [I don’t do it] anymore because I no longer find pleasure in doing it. 

In 1970, I composed the Manifesto on the “Pneumatic” Space. You will understand that it is absurd that a bus always measures from 100 to 200 cubic metres, both when it is full of people and when it is empty, or that a car occupies 5 square metres both when it is stopped and when it is in movement. Absurd! It is a hallucinatory thing. Although I love the cars on the highways, seeing the city submerged by what I call obscene, ugly, frightening “bagnarole [bathtubs] of tin and stucco” is terrible. Cars must be in motion, because otherwise they wouldn’t be called cars, they’d be called something else. My concern, therefore, lies in trying to minimize the obstruction and presence of the cars when they are not working: this is the Pneumatic Space. I dream that spaces could be pneumatic, transformable, transportable from one place to another. It was the first impression I had from Konrad Wachsmann, whom I spent time with in Genoa when he was to design the port (a project that was then given to another person in his stead). Wachsmann had an idea to make the port of Genoa expandable and shrinkable: are the boats coming? It expands. It’s empty? It shrinks. Is there no longer any need for the port? I undo it and take it elsewhere. The pneumatic world, for Wachsmann, is still to come, and I took this position a little from him. I haven’t invented anything; I use things that were already there, and I always give credit to people before me. Bill, Albers, Wachsmann, Gropius; everyone who came before me. … In this way, it is a continuation, because no [new] thing is born without another [that goes before].  

So my future is Pneumatic Space, but to achieve it you need a common will; that is, that everyone is interested. I can make drawings, I have reduced very small spaces to a minimum; you can live in 9 square metres – I have designed a living room for two people which contains everything you need and which is transformable. I like this. In the 60s, I made tables that transform; back then, the table was the solution, but today we know we can remove gravity, so the table is no longer needed.  

HUO: Last question. Rainer Maria Rilke wrote that beautiful text in which he gave advice to a young poet. Today there are many young artists here with us. I am very curious to know what your advice is to a young artist in 2015.  

GA: Knowing everything that has been done. Develop intelligence, and try to do something that has the dignity of existing, or that is itself useful.  

She [the work] is the centre, you have to think about what she does: and she has her dignity only if she is not a copy, only if you have made sure that she is absolutely new. Not just for a small circle of people who may not know what is around and are amazed. Today there is a great, terrible crisis: ignorance. And here we are in the homeland of this ignorance … we buy obscene, false, ugly, stupid things. But in the end, even if this bothered me a few years ago, now it leaves me calm, because it means that the ignorance of those people receives what it deserves – and here I am thinking precisely of “art”, the kind I would never have wanted to know existed.1 

algorithmic form, 2021
Introduction to Issue 02: Algorithmic Form
Algorithmic Form, Architecture, Architecture Theory, curatorial note, Philosophy
alessandro bava

thealessandrobava@gmail.com

I was asked by Mollie Claypool to curate the second issue of Prospectives Journal as an ideal follow up to leading Research Cluster 0 at B-Pro in the academic year 2020/21. As such, this issue is a collection of positions that respond to my research interest during that year. 

In fact, my initial objective with RC0 was to research ways of applying computational tools to housing design for high-rise typologies: the aim was to update modernist housing standardisation derived from well-established rationalist design methodologies based on statistical reduction (such as in the work of Alexander Klein and Ernst Neufert), with the computational tools available to us now.

While the outcomes of this research were indeed interesting, I was left with a sense of dissatisfaction, because it was very difficult to achieve architectural quality using purely computational tools – in a sense I felt that this attempt at upgrading modernist standardisation via computation didn’t guarantee better quality results per se, beyond merely complexifying housing typology and offering a wider variety of spatial configurations. 

In an essay I published in 2019 (which in many ways inspired the curation of this Journal), I declared my interest to be in the use of computational tools not for the sake of complexity – formal or programmatic – but for increasing architectural quality, while decrying that the positions expressed by the so-called first and second digital revolutions, at the level of aesthetics at least, seemed too invested in their own self-proclaimed novelty. My interest was in rooting them in a historical continuum, with established architectural methodologies; seeing computational design as an evolution of rationalism. 

This is why I wanted this journal to be about architectural form, and not about technical aspects of computational design: there is an urgent need to discuss design traditions connected to computational design, as an inquiry on “best practices” – that is, historical cases of what an algorithmic form has been and can be. 

Any discussion on architecture implies a twin focus, on the one hand, on the technical aspects of construction and the tools of design, and on the other, on how these are interpreted and sublimated by the artistic sensibility of an author. Ultimately, what’s interesting about architecture as the discipline of constructing the human habitat is how it is capable of producing a beautiful outcome; and in architecture, perhaps more than any other practice, the definition of beauty is collective. To be able to establish what’s beautiful, we need to develop common hermeneutic tools, which – much like in art – must be rooted in history. 

In light of this, I’m delighted with the contributions to this Journal, which offer a concise array of historical and contemporary positions that can help construct such tools. Many of the essays presented here offer a much needed insight into overlooked pioneers of algorithmic form, while others help us root contemporary positions in an historical framework – thus doing that work necessary for any serious discipline, technical or artistic, of weaving the present with the past.

My hope is that those individuals or academic institutions who are interested in how we can use emerging computational tools for architecture can re-centre their work not just on tooling and technical research but on architectural form, as the result of good old composition and proportion. The time is ripe, in my view, for bridging the gap between computational fundamentalists who believe in the primacy of code, and those with more conservative positions who foreground good form as the result of the intuition and inclination of a human author, remembering that an architectural form is only interesting if it advances the quality of life of its inhabitants and continues to evolve our collective definitions of beauty.  

Collage of Isa Genzken’s work
The Algorithmic Form in Isa Genzken
Algorithmic Form, assemblage, attention economy, Collage, data architecture, hooks, Isa Genzken, montage, Social Architecture, social object, social science, surrealism
Provides Ng

provides.ng.19@ucl.ac.uk

What’s the Hook? Social Architecture? 

Isa Genzken’s work can be seen as a synthesis of the “social” and the “object” – a visual-sculptural art that reflects on the relationship between social happenings and the scale of architectural space. She was also one of the early explorers in the use of computation for art, collaborating with scientists in the generation of algorithmic forms in the 70s. But what is the social object? What can it mean for architecture? Just as Alessandro Bava, in his “Computational Tendencies”,[1] challenged the field to look at the rhythm of architecture and the sensibility of computation, Roberto Bottazzi’s “Digital Architecture Beyond Computers”[2] gave us a signpost: the urgency is no longer about how architectural space can be digitised, but ways in which the digital space can be architecturised. Perhaps this is a good moment for us to learn from art; in how it engages itself with the many manifestations of science, while maintaining its disciplinary structural integrity. 

Within the discipline of architecture, there is an increasing amount of research that emphasises social parameters, from the use of big data in algorithmic social sciences to agent-based parametric semiology in form-finding.[3] [4] The ever-mounting proposals that promise to apply neural networks and other algorithms to [insert promising architectural / urban problem here] are evidence of a pressure for social change, but also of the urge to make full use of the readily available technologies at hand. An algorithm is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer”.[5] It is a finite sequence, well-defined, with performance based on the length of code – how fast and how well can we describe the most. In 1975, Gregory Chaitin’s formulation of Algorithmic Information Theory (AIT) revealed that the algorithmic form is no longer what can be visualised on the front-end, but “the relationship between computation and information of computably generated objects, such as strings or any other data structure”.[6] In this respect, what stands at the convergence of computable form and the science of space is the algorithmic social object. 

Figure 1 – Algorithmic Social Science Research Unit (ASSRU) and Parametric Semiology – The Design of Information Rich Environments. Image source: ASSRU, Patrik Schumacher.  
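Chaitin’s idea – that the information content of an object is tied to the length of the shortest description that generates it – can be loosely illustrated with off-the-shelf compression, which gives a computable upper bound on that description length. The sketch below is purely illustrative and is not drawn from the essay or from AIT’s formal apparatus; the function name and the toy strings are my own.

```python
import random
import zlib

def description_length(s: str) -> int:
    """Length in bytes of a zlib-compressed encoding of s -- a rough,
    computable upper bound on its algorithmic information content."""
    return len(zlib.compress(s.encode("utf-8")))

# A highly regular string admits a very short description ("repeat 'ab' 500 times")...
regular = "ab" * 500

# ...while a typical "random-looking" string of the same length and alphabet does not.
rng = random.Random(0)
noisy = "".join(rng.choice("ab") for _ in range(1000))

print(description_length(regular) < description_length(noisy))  # the regular string compresses far better
```

True algorithmic (Kolmogorov–Chaitin) complexity is uncomputable; compression is only a practical stand-in, but it captures the intuition that “form” here is a property of the generating description, not of what appears on screen.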

Social science is the broad umbrella that encompasses disciplines from history and economics, to politics and geography; within which, sociology is a subset that studies the science of society.[7] The word ‘sociology’ is a hybrid, coined by French philosopher Isidore Auguste Comte in 1830 “from Latin socius ‘associate’ + Greek-derived suffix –logie”; more specifically, “social” as the adjective dates from the 1400s, meaning “devoted to or relating to home life”; and 1560s as “living with others”.[8] The term’s domestic connotation soon accelerated from the realm of the private to the public: “Social Contract” from translations of Rousseau in 1762; “Social Darwinism” and “Social Engineering” introduced by Fisher and Marken in 1877 and 1894; “Social Network” and “Social Media” by the late 20th century from Ted Nelson. Blooming during a high time of the Enlightenment and the rise of the positivist worldview, sociology naturally claims itself to be a science, of scientific methods and empirical investigations. The connotation of –logie has been brilliantly attested by Jonathan Culler:[9] 

“Traditionally, Western philosophy has distinguished ‘reality’ from ‘appearance’, things themselves from representations of them, and thought from signs that express it. Signs or representations, in this view, are but a way to get at reality, truth, or ideas, and they should be as transparent as possible; they should not get in the way, should not affect or infect the thought or truth they represent.” 

To claim a social study as a science puts forward the question of the relationship between the language that is used to empirically describe and analyse the subject and the subject matter itself. If it should be objectively and rationally portrayed, then the language of mathematics would seem perfect for the job. If we are able to describe the interaction between two or more people using mathematics as a language, then we may begin to write down a partial differential equation and map its variables.[10] Algorithms that are inductively trained on evidence-based data not only seem to capture the present state of such interaction, but also seem able to give critical information in describing the future evolution of the system. This raises the question of computability: what is the limit to social computation? If there is none, then we might as well be a simulation ourselves; so the logic goes that there must be one. To leave an algorithm running without questioning the limits to social computation is like having Borel’s monkey hitting keys at random on a typewriter, or to apply [insert promising algorithm here] arbitrarily for [insert ear-catching grand challenges here].   

Figure 2 – Borel’s infinite monkey theorem, 1913. Image source: Wikipedia. 
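The monkey-at-a-typewriter image can be made concrete with a toy calculation: over a 27-key alphabet, a given 3-letter word appears in a random block of 3 keystrokes with probability 1/27³, so on average 19,683 blocks are needed. The sketch below is an illustration of that arithmetic only, not anything proposed in the essay; the target word and simulation parameters are arbitrary.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # 26 letters plus space: 27 keys
TARGET = "ear"                            # an arbitrary 3-letter target, for illustration

# Chance that one block of len(TARGET) random keystrokes spells TARGET,
# and the mean number of blocks until it happens (geometric distribution).
p = 1 / len(ALPHABET) ** len(TARGET)
expected_attempts = len(ALPHABET) ** len(TARGET)  # 27**3 = 19,683

def attempts_until(target: str, rng: random.Random) -> int:
    """Simulate a 'monkey' typing blocks of len(target) random keys
    until one block matches; return the number of blocks typed."""
    n = 0
    while True:
        n += 1
        block = "".join(rng.choices(ALPHABET, k=len(target)))
        if block == target:
            return n

rng = random.Random(42)
trials = [attempts_until(TARGET, rng) for _ in range(20)]
print(f"theoretical mean: {expected_attempts}, simulated mean: {sum(trials) / len(trials):.0f}")
```

The point of the analogy survives the arithmetic: the expected time grows exponentially with the length of the target, which is why unconstrained brute-force search is no substitute for asking what is computable in the first place.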

What’s the hook? 

A hook “is a musical idea, often a short riff, passage, or phrase, that is used in popular music to make a song appealing and to catch the ear of the listener”.[11] It is a monumental part of Web 2.0 that takes user attention as a scarce resource and a valuable commodity – an attention economy. Music is an artform that takes time to comprehend; as it plays through time, it accrues value in your attention.  

Figure 3 – Drum beat to “Empire State of Mind”, Nick’s Drum Lessons, “‘Empire State of Mind’ Jay Z – Drum Lesson”, October 5, 2014. 

This is one of the most famous hooks of the late 2000s – “Empire State of Mind” arrived around the same time as the Web 2.0 boom, just after New York had recovered from the dotcom bubble. The song was like an acoustic montage of the “eight million stories, out there in the naked”, revealing an underlying urge for social change that was concealed by the boom; just as we see Jay-Z in Times Square, on stage under the “big lights that inspired” him, rapping: “City is a pity, half of y’all won’t make it”.[12] It was an epoch of R&B, of the rhythms of cities, of the urban sphere, of the high-tech low life. The first 15 seconds of Jay-Z’s beat are already enough to teleport a listener to Manhattan, with every bit of romanticism that comes with it. The Rhythms and the Blues constructed a virtual space of narrative and story-telling; this spatial quality taps into the affective experiences of the listener through the ear, revealing the urban condition through its lyrical expression. It is no accident that the 2000s were also the time when the artist and sculptor Isa Genzken began exploring the potential of audio in its visual-sculptural embodiment.  

“The ear is uncanny. Uncanny is what it is; double is what it can become; large [or] small is what it can make or let happen (as in laisser-faire, since the ear is the most [tender] and most open organ, the one that, as Freud reminds us, the infant cannot close); large or small as well the manner in which one may offer or lend an ear.” — Jacques Derrida.[13] 

Figure 4 – “Ohr”, Isa Genzken, since 2002, Innsbruck, City Hall facade, large-format print on flag fabric, 580 x 390 cm. Photograph: Galerie Buchholz. 

An image of a woman’s ear was placed on a facade by Genzken, personifying the building as a listener, hearing what the city has to say. At the same time, “The body is objectified and made into a machine that processes external information”.[14] The ear also symbolises the power of voice that could fill a place with a space: an acoustic space. As much as a place is a location, geographically tagged, affecting our identity and sense of belonging, a space can be virtual as much as it can be physical. Such a space of social interaction is now being visualised on a facade, and at the same time, it is being fragmented: “To look at a room or a landscape, I must move my eyes around from one part to another. When I hear, however, I gather sound simultaneously from all directions at once: I am at the centre of my auditory world, which envelops me. … You can immerse yourself in hearing, in sound. There is no way to immerse yourself similarly in sight”.[15] This is perhaps a prelude to augmented virtual reality.  

Figure 5 – The Surrealist doctrine of dislocation, the romantic encounter of urban objects is “as beautiful as the chance meeting of a sewing machine and an umbrella on an operating table.” – Lautréamont, Canto VI, Chapter 3. (a) The cover of the first edition of Rem Koolhaas’ book Delirious New York, designed by Madelon Vriesendorp. (b) A photograph of New York by Isa Genzken, New York, N.Y., 1998/2000, Courtesy Galerie Buchholz, Berlin/Cologne. (c) A photograph by Man Ray, 1935 © The Man Ray Trust / ADAGP, Paris and DACS, London 

As much as Genzken is interested in the “exploration of contradictions of urban life and its inherent potential for social change”, Rem Koolhaas shared a similar interest in his belief that it is not possible to live in this age if you don’t have a sense of many contradictory voices.[16] [17] What the two have in common is their continental European roots and a love for the Big Apple – Genzken titled her 1996 collage book “I Love New York, Crazy City”, and with it paid homage to her beloved city. Delirious New York was written at a time when New York was on the verge of bankruptcy, yet Koolhaas saw it as the Rosetta Stone, and analysed the city as if there had been a plan, with everything starting from a grid. It was Koolhaas’ conviction that the rigour of the grid enabled imagination, despite its authoritative nature: unlike Europe, which has many manifestos with no manifestation, New York was a city with a lot of manifestation without manifesto. 

Koolhaas’ book was written with a sense of “critical paranoia” – a surrealist approach that blends together pre-existing conditions and illusions to map the many blocks of Manhattan into a literary montage. The cover of the first edition of the book, designed by Madelon Vriesendorp, perfectly captures the surrealism of the city’s socio-economy at the time: the Art Deco Chrysler Building is in bed with the Empire State. Both structures were vying for distinction in the “Race into the Sky” of the 1920s, fueled by American optimism, a building boom, and speculative financing.[18] Just as the French writer Lautréamont wrote: “Beautiful as the accidental encounter, on a dissecting table, of a sewing machine and an umbrella”, surrealism is a paradigmatic shift towards “a new type of surprising imagery replete with disguised sexual symbolism”.[19] The architectural surrealism manifested in this delirious city is the chance encounter of capital, disguised as national symbolism – an architectural hook.  

Data Architecture 

Figure 6 – China Central Television Headquarters (CCTV) and Genzken’s Gate for Amsterdam Tor für Amsterdam, Außenprojekte, Galerie Buchholz, 1988.

Genzken’s sense of scale echoes Koolhaas’ piece on “bigness” in 1995. Her proposal for the Amsterdam City Gate frames and celebrates the empty space, and found manifestation in Koolhaas’ enormous China Central Television (CCTV) headquarters in Beijing – a building as a city, an edifice of endless air-conditioning and information circularity wrapped in a structured window skin, hugging itself in the air by its downsampled geometry of a Möbius loop. Just as Koolhaas pronounced, within a world that tends to the mega, “its subtext is f*** context”. One is strongly reminded of the big data approach to form-finding, perhaps also of the discrete spatial quality coming from Cellular Automata (CA), where the resolution of interconnections and information consensus fades into oblivion, turning data processing into an intelligent, ever-mounting aggregation. In the big data–infused era, the scale boundary between architecture and urban design becomes obscured. This highlights our contemporary understanding of complex systems science, where the building is not an individual object, but part of a complex fabric of socioeconomic exchanges. 
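The discrete, bottom-up aggregation that the text associates with Cellular Automata can be shown in miniature – a minimal sketch (the rule number and row are arbitrary choices, not drawn from any project mentioned here) of one elementary CA, where global pattern emerges from purely local update rules:

```python
def ca_step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton on a
    circular row; `rule` is the standard Wolfram rule number, whose bits
    encode the output for each 3-cell neighbourhood."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]   # a single live cell
history = [row]
for _ in range(3):
    row = ca_step(row)
    history.append(row)       # the pattern grows step by step
```

Each cell consults only its immediate neighbours, yet the aggregate develops structure no cell “knows” about – the discrete analogue of buildings read as parts of a larger socioeconomic fabric.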

Figure 7 – The Bartlett Prospective (B-pro) Show, 2017. 

As Carpo captured in his Second Digital Turn, we are no longer living in Shannon’s age, where compression and bandwidth are of the highest value: “As data storage, computational processing power, and retrieval costs diminish, many traditional technologies of data-compression are becoming obsolete … blunt information retrieval is increasingly, albeit often subliminally, replacing causality-driven, teleological historiography, and demoting all modern and traditional tools of story-building and story-telling. This major anthropological upheaval challenges our ancestral dependence on shared master-narratives of our cultures and histories”.[20] Although compression remains central to how machines learn data models – from autoencoders to convolutional neural networks – trends in edge AI and federated learning are displacing the value of bandwidth with promises of data privacy: we no longer surrender data to a central cloud; instead, everything is kept on our local devices, with only the learnt models synchronising. 
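The federated set-up described here – local data, shared models – can be sketched in a few lines. This is a toy federated-averaging step, not any particular framework’s API; the function name and flat weight vectors are illustrative:

```python
def federated_average(client_weights, client_sizes):
    """Pool locally trained model weights, weighted by each client's
    dataset size; the raw data never leaves each client's device."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two clients synchronise only their learnt weights, never their data:
# weighted mean of [1.0, 0.0] (size 1) and [3.0, 2.0] (size 3).
global_model = federated_average([[1.0, 0.0], [3.0, 2.0]], [1, 3])
```

Only the averaged weights circulate; the asymmetry between what is kept (data) and what is shared (models) is exactly the displacement of value the paragraph describes.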

This shift of belief from centralised provision to distributed ownership is reminiscent of the big data-driven objectivist approach to spatial design, which gradually displaces our faith in anything non-discursive, such as norms, cultures, and even religion. John Lagerwey defines religion in its broadest sense as the structuring of values.[21] What values are we circulating in a socio-economy of search engines and pay-per-clicks? Within trends of data distribution, are all modes of centrally-provisioned regulation and incentivisation an invasion of privacy? Genzken’s work in urbanity is like a mirror held up high for us to reflect on our urban beliefs.  

Figure 8 – “Untitled”, Isa Genzken, 2018, MDF, brass fixings, paper, textiles, leather, mirror foil, tape, acrylic paint, mannequin, 319.5 x 92.5 x 114 cm. David Zwirner, Hong Kong, 2021.

Genzken began constructing a series of “columns” around the same time as her publication of I Love New York, Crazy City. Evocative of skyscrapers and skylines that are out of scale, she named each column after one of her friends, and decorated them with individual designs – sometimes newspapers, artefacts, and ready-made items reflecting the happenings of the time. Walking amongst them reminds the audience of New York’s avenues and its urban strata, but at 1:500. Decorated with DIY store supplies, these uniform yet individuated structures seem to document a history of the future of mass customisation. Mass customisation is the use of “flexible computer-aided manufacturing systems to produce custom output. Such systems combine the low unit costs of mass production processes with the flexibility of individual customization”.[22] As Carpo argued, mass customisation technologies could make economies of scale and their marginal costs irrelevant and, subsequently, the division of labour unnecessary, as the chain of production would be greatly distributed.[23] The potential is to democratise the privilege of customised design – but how can we ensure that such technologies benefit social goals, and do not fall into the same traps of the attention economy and its consumerism?  

Refracted and reflected in Genzken’s “Social Facades” – taped with ready-made nationalistic palettes allusive of the semi-transparent curtain walls of corporate skyscrapers – one sees nothing but a distorted image of the mirrored self. As the observer begins to raise their phone to take a picture of Genzken’s work, the self suddenly becomes the anomaly in this warped virtual space of heterotopia.  

“Utopia is a place where everything is good; dystopia is a place where everything is bad; heterotopia is where things are different – that is, a collection whose members have few or no intelligible connections with one another.” — Walter Russell Mead [24] 

Genzken’s heterotopia delineates how the “other” is differentiated via the images that have been consumed – a post-Fordist subjectivity that fulfils itself through accelerated information consumption.  

Figure 9 – Attention economy and social strata as refracted and reflected in (a) “Soziale Fassade”, Isa Genzken, 2002, Courtesy Galerie Buchholz, Berlin/Cologne, and (b) “I shop therefore I am”, Barbara Kruger, 1987 

The Algorithmic Form 

Genzken’s engagement with and interest in architecture can be traced back to the 1970s, when she was in the middle of her dissertation at the academy.[25] She was interested in ellipses and hyperboloids, the latter of which she preferred to call “Hyperbolos”.[26] The 70s were a time when a computer was a machine that filled a whole room, and to which an ordinary person had no access. Genzken got in touch with the physicist and computer scientist Ralph Krotz, who in 1976 helped calculate the ellipse with a computer, and plotted the draft of a drawing with a drum plotter that prints on continuous paper.[27] Artists saw the meaning of such algorithmic form differently than scientists. For Krotz, ellipses are conic sections. Colloquially speaking, an egg comes pretty close to an ellipsoid: it is composed of a hemisphere and half an ellipsoid. If we generalise the concept of the conic section, hyperbolas also belong to it: if one rotates a hyperbola around an axis, a hyperboloid is formed. Here, the algorithmic form is rationalised to its computational production, irrespective of its semantics – that is, until it was physically produced and touched the ground of the cultural institution of a museum. 
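The geometry Krotz rationalised can be sketched computationally – a reconstruction in spirit only (the parametrisation and function names are my own, not Krotz’s actual 1976 program): the ellipse as a parametric polyline of the kind a drum plotter traces, and the hyperboloid as a hyperbola rotated about an axis.

```python
import math

def ellipse_point(a, b, t):
    """Point on an ellipse with semi-axes a and b - the kind of
    parametric polyline a drum plotter would trace point by point."""
    return (a * math.cos(t), b * math.sin(t))

def hyperboloid_point(a, c, u, theta):
    """Rotate the hyperbola x^2/a^2 - z^2/c^2 = 1 about the z-axis
    to obtain a hyperboloid of revolution (one sheet)."""
    r = a * math.cosh(u)               # radius of the circular section
    return (r * math.cos(theta), r * math.sin(theta), c * math.sinh(u))

# Each generated point satisfies the implicit surface equation
# (here with a = 1.0, c = 2.0), since cosh^2(u) - sinh^2(u) = 1.
x, y, z = hyperboloid_point(1.0, 2.0, 0.7, 1.3)
assert abs((x * x + y * y) - (z / 2.0) ** 2 - 1.0) < 1e-9
```

Sweeping `t`, `u` and `theta` over fine steps yields the coordinate lists a plotter (or a carpenter’s full-size template) needs – the form reduced entirely to its computational production.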

The ten-metre-long ellipse drawing was delivered full size, in one piece, as a template to a carpenter, who then converted it to his own template for craftsmanship. Thus, 50 years ago, Genzken’s work explored the two-tiered outsourcing structure symbolic of today’s digital architectural production. The output of such exploration is a visual-sculptural object of an algorithmic form, at such an elongated scale and extreme proportion that it undermines not only human agency in its conception, but also the sensorial perception of 2D-3D space.[28] When contemplating Genzken’s Hyperbolo, one is often reminded of the radical play with vanishing points in Hans Holbein’s “The Ambassadors”, where the anamorphic skull can only be viewed at an oblique angle – a metaphor for the way one can begin to appreciate the transience of life only with an acute change of perspective.  

Figure 10 – (a) “The Ambassadors”, Hans Holbein, 1533. (b) “Hyperbolos”, Genzken, 1970s. Image source: Andrea Albarelli, Mousse Magazine

When situated in a different context, next to Genzken’s aircraft windows (“Windows”), the Hyperbolo finds association with other streamlined objects, like missiles. Perhaps the question of life and death, paralleling scientific advancement, is a latent meaning and surrealist touch within Genzken’s work, revealing how the invention of the apparatus is, at the same time, the invention of its causal accidents. As the French cultural theorist and urbanist Paul Virilio puts it: the invention of the car is simultaneously the invention of the car crash.[29] We may be able to compute the car as a streamlined object, but we are not even close to being able to compute the car as a socio-cultural technology.  

Figure 11 – Genzken holding her “Hyperbolos” in 1982, and “Windows”. D. Eichler, “This Is Hardcore”, Frieze, 2014.

Social Architecture? 

Perhaps the problem is not so much whether the “social” is computable, but that we are trying to objectively rationalise something that is intrinsically social. This is not to say that scientific approaches to social architecture are in vain; rather the opposite: science and its language should act as socioeconomic drivers of change in architectural production. What is architecture? It can be described as what stands at the intersection of art and science – the art of the chief, ‘arkhi-’, and the science of craft, ‘tekton’ – but the chance encounter of the two gives birth to more than their bare sum. If architecture is neither art nor science but an emergent faculty of its own, it should be able to argue for itself academically as a discipline, with a language crafted as its own, and to debate itself on its own ground – beyond the commercial realm that touches base with ground constraints and the reality of physical manifestation, and in its own unique way of researching and speculating: not with its “head in the clouds”, but revealing pre-existing socioeconomic conditions.  

It is only through understanding ourselves as a discipline that we can begin to really grasp ways of contributing to social change, beyond endlessly feeding machines with data and hoping they will either validate or invalidate our ready-made and ear-catching hypotheses. As Carpo beautifully put it:  

“Reasoning works just fine in plenty of cases. Computational simulation and optimization (today often enacted via even more sophisticated devices, like cellular automata or agent-based systems) are powerful, effective, and perfectly functional tools. Predicated as they are on the inner workings and logic of today’s computation, which they exploit in full, they allow us to expand the ambit of the physical stuff we make in many new and exciting ways. But while computers do not need theories, we do. We should not try to imitate the iterative methods of the computational tools we use because we can never hope to replicate their speed. Hence the strategy I advocated in this book: each to its trade; let’s keep for us what we do best.” [30] 

References

1 A. Bava, “Computational Tendencies – Architecture – e-Flux”, Computational Tendencies, January 2020. https://www.e-flux.com/architecture/intelligence/310405/computational-tendencies/.

2 R. Bottazzi, Digital Architecture Beyond Computers: Fragments of a Cultural History of Computational Design (London: Bloomsbury Visual Arts, 2020).

3 ASSRU, Algorithmic Social Sciences, http://www.assru.org/index.html. (Accessed December 18, 2021)

4 P. Schumacher, Design of Information Rich Environments, 2012.
https://www.patrikschumacher.com/Texts/Design%20of%20Information%20Rich%20Environments.html.

5 Oxford, “The Home of Language Data” Oxford Languages, https://languages.oup.com/ (Accessed December 18, 2021).

6 Google, “Algorithmic Information Theory – Google Arts & Culture”, Google,
https://artsandculture.google.com/entity/algorithmic-information-theory/m085cq_?hl=en. (Accessed December 18, 2021).

7 Britannica, “Sociology”, Encyclopædia Britannica, inc. https://www.britannica.com/topic/sociology. (Accessed December 18, 2021).

8 Etymonline, “Etymonline – Online Etymology Dictionary”, Etymology dictionary: Definition, meaning and word origins, https://www.etymonline.com/, (Accessed December 18, 2021).

9 J. Culler, Literary Theory: A Very Short Introduction, (Oxford: Oxford University Press, 1997).

10 K. Friston, “The free-energy principle: a unified brain theory?”, Nature Reviews Neuroscience, 11 (2), 127–138 (2010).

11 J. Covach, “Form in Rock Music: A Primer” (2005), in D. Stein (ed.), Engaging Music: Essays in Music Analysis. (New York: Oxford University Press), 71.

12 Jay-Z, “Empire State of Mind”, (2009) Roc Nation, Atlantic.

13 J. Derrida, The Ear of the Other: Otobiography, Transference, Translation ; Texts and Discussions with Jacques Derrida. Otobiographies / Jacques Derrida, (Lincoln, Neb.: Univ. of Nebraska Pr., 1985).

14 Kunsthalle Wien, “Kunsthalle Wien #FemaleFool Booklet I’m Isa Genzken the …”, (2014). https://kunsthallewien.at/101/wp-content/uploads/2020/01/booklet_i-m-isa-genzken-the-only-female-fool.pdf?x90478.

15 W. Ong, Orality and Literacy: The Technologizing of the Word, (London: Methuen, 1982).

16 R. Koolhaas, New York délire: Un Manifeste rétroactif pour Manhattan, (Paris: Chêne, 1978).

17 Kunsthalle Wien, “Kunsthalle Wien #FemaleFool Booklet I’m Isa Genzken the …”, (2014). https://kunsthallewien.at/101/wp-content/uploads/2020/01/booklet_i-m-isa-genzken-the-only-female-fool.pdf?x90478.

18 J. Rasenberger, High Steel: The Daring Men Who Built the World’s Greatest Skyline, 1881 to the Present, (HarperCollins, 2009).

19 Tate, “‘L’Enigme D’Isidore Ducasse’, Man Ray, 1920, Remade 1972”, Tate, https://www.tate.org.uk/art/artworks/man-ray-lenigme-disidore-ducasse-t07957. (Accessed December 18, 2021).

20 M. Carpo, “Big Data and the End of History”, International Journal for Digital Art History, 3: Digital Space and Architecture, 21 (2018).

21 J. Lagerwey, Paradigm Shifts in Early and Modern Chinese Religion: A History, (Boston, Leiden: Brill, 2018).

22 Google, “Mass Customization – Google Arts & Culture”, Google, https://artsandculture.google.com/entity/mass-customization/m01k6c4?hl=en. (Accessed December 18, 2021).

23 M. Carpo, The Second Digital Turn: Design beyond Intelligence, (Cambridge: MIT, 2017).

24 W.R. Mead, “Trains, Planes, and Automobiles: The End of the Postmodern Moment”, World Policy Journal, 12 (4), 13–31 (Winter 1995–1996).

25 U. Loock, “Ellipsoide und Hyperboloide”, in Isa Genzken. Sesam, öffne dich!, exhibition cat. (Whitechapel Gallery, London, and Museum Ludwig, Cologne: Kasper, 2009).

26 S. Baier, “Out of sight”, in Isa Genzken – Works from 1973-1983, Kunstmuseum

27 R. Krotz, H. G. Bock, “Isa Genzken”, in exhibition cat. Documenta 7, Kassel 1982, vol. 1, 330–331, vol. 2, 128–129.

28 A. Farquharson, “What Architecture Isn’t”, in Alex Farquharson, Diedrich Diederichsen and Sabine Breitwieser, Isa Genzken (London, 2006), 33.

29 P. Virilio, Speed and Politics: An Essay on Dromology (New York: Columbia University, 1986).

Sebastiano Serlio, Livre Extraordinaire de Architecture […] (Lyon: Jean de Tournes, 1551), plate 18, detail
Citations, Method, and the Archaeology of Collage *
algorithm, alphabet, architectural language, Citations, Collage, Method, pomo, post modern, Renaissance, shape Grammar
Mario Carpo

m.carpo@ucl.ac.uk

But let us not have recourse to books for principles which may be found within ourselves. What have we to do with the idle disputes of philosophers concerning virtue and happiness? Let us rather employ that time in being virtuous and happy which others waste in fruitless enquiries after the means: let us rather imitate great examples, than busy ourselves with systems and opinions.  … For this reason, my lovely scholar, changing my precepts into examples, I shall give you no other definitions of virtue than the pictures of virtuous men; nor other rules for writing well, than books which are well written.  

Jean-Jacques Rousseau, Julie ou la Nouvelle Héloïse, Letter XII (William Kenrick transl., 1784)  

Children learn to speak their mother tongues through practice and observation. They don’t need grammar rules. Grammar comes later, when it is taught at school. This shows that we may know a language without knowing its grammar. Grammar is an artificial shortcut to fluency, replacing the lengthy process of learning from life. For a fifteen-year-old high school student struggling to learn German, grammar is indispensable. Yet plenty of native German speakers don’t know declensions by heart and still manage to get their word endings right – in speech as much as in writing.

At a higher level of linguistic practice, literary composition too used to have its own rules – rules that were taught at school. Until the end of the nineteenth century rhetoric was a compulsory subject in most European secondary schools. Rhetoric is the science of discourse. It teaches how to find the arguments of speech, how to arrange them in an orderly manner, and how to dress them with words. Rhetoric teaches how to be clear and persuasive. Seen in this light, rhetoric would seem to be a necessary discipline – indispensable, even. Instead, it no longer features in school and university curricula. France stopped teaching rhetoric in 1885, when French lycées replaced it with the history of classic and modern literature. Nineteenth-century educators seemed to have concluded that, when learning to write, we are better off in the company of literary masterpieces, rather than engaged in the normative study of classical (or modern) rhetoric. A century after Rousseau, Julie-Héloïse’s pedagogical programme quoted above became law.

In times gone by students would have learnt the art of discourse by systematically studying grammar and rhetoric – page after page of rules to be learnt by heart. Today high school students in all European countries are instead obliged to read the masterpieces of their respective national literatures, often ad nauseam. This evidently follows from the assumption that, by reading and re-reading these exemplary works, students will (at some point) learn to write as beautifully as these canonical authors once did. Never mind that nobody knows precisely how and when that almost magic transference, assimilation, and transmutation of talent might occur: grammar has almost completely disappeared from primary school teaching, and rhetoric barely features in higher education – now an intellectual fossil of sorts. Meanwhile, the old art of discourse tacitly lingers on, in business schools, in creative writing and marketing classes. Especially in the latter, the ancient forensic discipline is returned to one of its ancestral functions: that of persuading, even when in the wrong.

For the Humanists of the Quattrocento, the first language to learn was Latin. Not Medieval Latin of course – a corrupt and barbaric but still living language. Renaissance Humanists wanted to speak in the tongue of classical antiquity; they wanted to learn Cicero’s Latin. But Cicero’s Latin is, by definition, a dead language: quite literally so, since it died with Cicero. Cicero also wrote manuals on the art of rhetoric, but the Humanists believed that the best way to learn to write like Cicero was by imitating his way of writing. Well before the Romantics and the Moderns, they found learning from rules unappealing. They preferred to copy the style of Cicero from examples of his work.

The Humanists’ veneration of examples was not limited to languages. Their exemplarism was an épistémè – an intellectual, cultural and social paradigm, deeply inscribed within the spirit of their time. That was their rebellion against the world they grew up in. For centuries the Scholastic tradition had privileged formalism, deductive reasoning, and syllogistic demonstration. The Humanists rejected this “barbarous”, “Gothic” tradition of logic, in favour of their new way of “learning from examples”. The dry and abstract rules of medieval Scholasticism were difficult to handle. Examples, on the other hand, were concrete and tangible. Imitating an example was easier, more pleasurable, and allowed more room for creativity than merely applying rules. This is how, at the dawn of modernity, antiquity was turned from a rule book into an art gallery.

*** *** ***

Like the arts of discourse, the arts of building require schooling. At the height of the Middle Ages, when both Gothic architecture and Scholasticism were at their peak, architectural lore was the preserve of guilds, and its mostly oral transmission was regulated by secretive initiation practices. By contrast, the Humanists pursued a more open strategy – reviving the ancient custom of writing books on building. The first modern treatise, Alberti’s De Re Aedificatoria, deals with the architecture of antiquity, but the structure of Alberti’s discourse was still medieval and Scholastic. Alberti advocates classical architecture as a paragon for all modern building, but Alberti’s antiquity was an abstract model, devoid of any material, visible incarnation. Rather than an atlas of classical buildings, Alberti’s book offers a set of classical design rules – rules for building in the classical way. To put it in more contemporary terms, Alberti formalized classical architecture. Alberti’s rules replace the need to see – let alone imitate – the monuments of classical antiquity. To avoid all misunderstanding, Alberti’s book did not describe any actual ancient monument, either in writing or visually: Alberti’s De Re Aedificatoria originally did not include any illustrations, and Alberti explained that he wanted it that way.

As a commercial venture, Alberti’s De Re Aedificatoria was not a success. Renaissance architects found it easier to skip Alberti’s writings altogether, and go see, touch and learn from the extant magnificence of Roman ruins in person. Moreover, and crucially, as of the early sixteenth century drawings of ancient monuments started to be sold and circulated throughout Europe. Survey drawings in particular, for the first time made available through print, made the laborious ekphrastic and normative mediation of Alberti’s writings all but unnecessary. But models, if beautiful to behold, are not always easy to imitate. Copies will inevitably be more or less successful, depending on the individual talent of each practitioner. By the second or third decade of the sixteenth century imitation itself had become a pedagogic and didactic conundrum.

Not just architectural imitation: writers had the same problem. After all, imitating Cicero is easier said than done. Many rhetoricians in the sixteenth century will strive to transform the practice, skills, and tacit knowledge of literary imitation into a rational, transmissible technique. The modern notion of “method” was born out of sixteenth century rhetoric, but sixteenth century authors were not trying to develop a (scientific) method for making new discoveries; they were trying to develop a (pedagogic) method to better organise and teach what they already knew. Their post-Scholastic, pre-scientific method was essentially a diairetic method – a method of division: all knowledge, they argued, can be partitioned into smaller and smaller units, easier to learn, remember and work with. For sixteenth century scholars, “method” still meant “short cut” – a short cut to knowledge.

Discourse itself can be divided into modular parts: prefaces, arguments, conclusions, formulas and figures, idioms or turns of phrase, sentences, syntagms, words and letters. Sixteenth-century rhetoricians used this divisive technique to invent a new method for literary imitation. On the face of it, Cicero’s style may appear as an ineffable quintessence, but at the end of the day all writing is text, and every text can be broken down into a linear sequence of alphabetical units. Of course, breaking up a text is not a straightforward operation: the parts of speech are held together by syntactic, semantic, and functional relationships. Some of these links can be uncoupled. Others can’t. A text is a heteroclitic, variable cohesion aggregate of parts. Its segments differ in both extension and complexity. Yet even the most sophisticated literary monument can be subdivided into fragments; and once a fragment has been set apart from its compositional context, it can also be reused, reassembled, or recomposed into another text.

In reducing the art of discourse to a citationist technique – by turning ancient texts into a repository of infinitely repeatable citations – sixteenth century rhetoricians invented a new rhetoric. Ancient and modern texts came to be seen as mechanical assemblages of parts. Ancient works could be decomposed into segments, and these segments could then be reassembled to form new works. The smaller the segments, the more fluid or freer the outcome. Ciceronian Latin was an extraordinarily sophisticated and effective instrument of communication, but some modern ideas fundamentally differed from those of Cicero. The citationist method of imitation allowed Renaissance authors to use an old language to express new ideas.

Renaissance architects also needed a rational method for producing modern buildings while imitating classical examples. The greatest structures of antiquity – temples, amphitheatres, thermal baths – were of no use to modernity. Temples, in particular, while representing the pinnacle of classical architecture, had been built to house rituals and represent heathen gods whose worship had long ceased. The entire language of classical architecture had to be adapted for typologies and functions that had no precedents in antiquity. The image of antiquity itself as a building that can be endlessly dismantled and reassembled was a commonplace in the Renaissance. It was also a common practice on many building sites. Architect Sebastiano Serlio would turn this practice into a design theory.

That was no accident. Giulio Camillo, one of the main theorists of the sixteenth century citationist method, had an interest in architecture. He was also a friend of Serlio. The two were supported by the same patrons, and moved in the same circles of Evangelical (and perhaps Nicodemite) inclination. The method of Giulio Camillo’s Neoplatonist rhetoric is well known:

1. Appropriate ancient examples (literary or otherwise) must be selected. The criteria for this selection were a much-disputed matter at the time, and one on which Camillo himself did not dwell.

2. The resulting corpus of integral textual sources must be segmented or divided into parts according to functional or syntactical criteria.

3. This catalogue of dissolved fragments must be sorted, so new users know where to look for the fragments they need.

4. A modern writer (a composer, but also in a sense a compositor: an ideal type-setter) will pick, reassemble and merge, somehow, any number of chosen textual fragments.

Thus new ideas could be expressed through ancient words and phrases – fragments severed from their original context, yet validated by prior use by a recognised “authority”. In Camillo’s view, this compositional technique constituted the inner workings and the secret formula of all processes of imitation. Furthermore, this was a compositional method that could be taught and learnt.

One essential tool in implementing this pedagogical programme was Camillo’s notorious Memory Theatre, a walk-in filing cabinet where all the textual sources (and possibly some of the fragments deriving from them) would have been sorted following Camillo’s own classification system. The whole machine, which included an ingenious information retrieval device, would have been in the shape of an ancient theatre – and it appears that Camillo built at least a wooden model or mock-up of it, in the hope (soon dashed) of selling his precociously cybernetic technology to King Francis I of France.

In a long-lost manuscript (found and published only in 1983) Camillo also explains how the same principles can inform a new method for architectural design. In Camillo’s Neoplatonic hierarchy of ideas, the heavenly logos descends into reality following seven steps or degrees of ideality. Individuals inhabit the seventh (lowest, sublunar) step; their ascent and crossing of the lunar sky occurs by dint of their separation from the accidents of space and time. In the case of architecture, actual buildings as they exist on earth must be separated from their site to become ideas of the lowest (sixth) grade. This separation of the real from its worldly context results in something similar to what we would today call “building types” – which are buildings in full, except they do not inhabit any given place. These abstract types are then further subdivided into columns and orders (of the five kinds then known: Tuscan, Doric, Ionic, Corinthian, and Composite). The five orders are then broken down into regular geometric volumes, then surfaces, all the way to Euclidean points and lines. On each grade or step, a catalogue of ready-made parts would offer any designer all the components needed to assemble a new building. Thus Camillo’s design method doubles as a shortcut to architectural imitation, and as a universal assembly kit.

A more scholarly trained Neoplatonist philosopher (and a few existed in Camillo’s time) would have objected to some of Camillo’s brutal simplifications, and could have pointed out that his theory had severe epistemic flaws. All the same, Camillo’s architectural method (which its first editor, Lina Bolzoni, dated to around 1530) is almost identical to the plan laid out by Serlio in the introduction to the first instalment of his architectural treatise, published in Venice in 1537. Some of Serlio’s seven grades did not correspond to Camillo’s order: most notably, his atlas of archaeological evidence, the base and foundation of Camillo’s Neoplatonic scaffolding, should have been on the lowest step, but was instead printed as Serlio’s Third Book (likely for commercial reasons). Additionally, one of the seven books in Serlio’s original plan, his revolutionary Sixth Book, on Dwellings for all Grades of Men, was written but never published – at least, not until 1966. Serlio also wrote an additional, Extraordinary Book (literally, a book out of the original order) – a cruel, sombre joke disguised as a book, which Serlio bequeathed to posterity shortly before dying, poor and dejected in his self-imposed French exile.

Regardless of some factual discrepancies, Serlio’s compositional method is ostensibly the same as Camillo’s. Architecture’s exemplary models are selected, and then fragmented. These fragments are sorted and classified at different levels or grades of dissolution. Instructions for their reassembly are then provided, together with examples of successful new compositions. The pivot of the whole system was the book on the five architectural orders, which Serlio published first (albeit titled Fourth Book to comply with the general plan): a catalogue of stand-alone constructive parts (columns, capitals, bases, entablatures and mouldings), destined for identical reproduction in print, in scaled drawings, and in buildings of any type. In Serlio’s method, this was the main offspring of architectural “dissolution” (or disassembling), and the basic ingredient of architectural design, i.e. re-composition. Pagan idols had to be broken down; only their fragments could be used, purified ingredients in the building of a new Christian architecture.

Throughout, Serlio was aware of, and attuned to, the purpose and limits of his architectural method. Serlio turned architectural design into an assemblage of ready-made modular components. These were not actual spolia, but compositional design units, part of a universal combinatory grammar and destined for identical replication. Giulio Camillo’s rhetoric reduced the imitation of Cicero’s style, hence all literary composition, to a cut-and-paste method of collage and citation. Serlio’s treatise did the same for architecture. His theory of the orders was the keystone of the entire process. Serlio couldn’t standardise the building site (that would have made no sense in the sixteenth century), but he could standardise architectural drawings and design.

Serlio knew full well that his simplified, almost mechanical approach to design would entail a decline in the general quality of architecture. Many critics across the centuries have indeed frowned at the models and projects shown in his Seven Books. Serlio’s designs have often been seen as repetitive, banal, ungainly or chunky; lacking in inspiration and genius. But Serlio did not write for geniuses. His treatise was a pedagogical work, not an architectural one. As Serlio tirelessly reminds the reader, his method is tailored to “every mediocre”: to the “mediocre architect” – the average, middling designer. Today we might say that Serlio’s treatise aimed at creating an intermediate class of building professionals. Michelangelo and Raphael had no need for “a brief and easy method” that turned architectural invention into cut-and-paste, collage and citation.

Knowledge can be taught, not genius. Serlio’s pedagogical structure and design method were parts of an overarching ideological project. Serlio’s method promises uniform and predictable architectural standards. These are perhaps banal, or monotonous, but that’s the price one pays to make “architecture easy for everyone”. And it is a price Serlio was willing to pay. Serlio’s concern was the average quality of building, not the artistic value of a few outstanding monuments. This was a most unusual choice for an artist of the Italian Renaissance – an iconoclastic, almost revolutionary stance. Serlio’s worldview was not one in which the misery of the many was contrasted by the magnificence of a few. Serlio pursued the uniform, slightly boring repetitiveness of a productive, “mediocre” multitude. This was an ideological project, but also a social project, ripened in the cultural context of the early protestant Reformation. It is a position that evokes and prefigures well-known categories of modernity.

Sebastiano Serlio, Livre Extraordinaire de Architecture […] (Lyon: Jean de Tournes, 1551), plate 18.

* Footnote to this translation

This is a translation of the introduction to my book Metodo e Ordini nella Teoria Architettonica dei Primi Moderni (Geneva: Droz, Travaux d’Humanisme et Renaissance, 1993), edited, abridged, and adapted for clarity, but not updated. That book in turn derived from my PhD dissertation, supervised by Joseph Rykwert, researched and written between 1984 and 1989, and defended in the spring of 1990. Heavily influenced by Françoise Choay’s La Règle et le Modèle and by works of literary criticism by Terence Cave (The Cornucopian Text), Antoine Compagnon (La seconde main ou le travail de la citation), and Marc Fumaroli (L’âge de l’éloquence), all published between 1979 and 1980, my enquiry on the use of visual citations in Renaissance architectural design was evidently in the spirit of the time: post-modern architects in the 80s were passionate about citations (or the recycling of precedent, otherwise known as reference, allusion, collage and cut-and-paste); they were equally devoted to architectural history, and particularly to the history of Renaissance classicism. My aim then was to bridge the gap between those two sources of PoMo inspiration, showing that Renaissance architecture was itself, quintessentially, citationist. How could it have been otherwise, since the main purpose of Renaissance architects was to revive, literally, the buildings of classical antiquity – piece by piece? Thanks to the first studies of Lina Bolzoni on the sulphurous Renaissance philosopher and magician Giulio Camillo, and to my then girlfriend, who was studying Renaissance Neoplatonism (and is today a known specialist of that arcane science), I soon found evidence of an extraordinary link – biographical, ideological, and theoretical – between Giulio Camillo and Sebastiano Serlio, and I wrote a PhD dissertation to explain the transference of the citationist method from Bembo’s Prose to Camillo’s Theatre to Serlio’s Seven Books – and ultimately to Serlio’s architecture.

Unfortunately, in the process, I also found out that the citationist method in the 16th century was a tool and vector of modernity. It was a mechanical method, made to measure for the new technology of printing; it was also in many ways a harbinger of the scientific revolution that would soon follow. Besides, the citationist method was more frequently adopted by Evangelical and Protestant thinkers (particularly Calvinist), and it was condemned by the Counter-Reformation. None of this would have pleased the PoMo architects and theoreticians who were then my main interlocutors.

Fortunately for me, they never found out. When my book was published, in 1993, the tide of PoMo citationism was already receding. Investigating the sources of citationism was no longer an urgent matter for architects and designers. My book was published in Italian, in an austere collection of Renaissance studies – few architects would have known about it, let alone read it. It received some brutally disparaging reviews, as was to be expected, from some of Tafuri’s acolytes, who thought – without reading my book, or misreading it – that I was bringing water to the PoMo mill. I wasn’t. But at that point that was irrelevant. We had all already moved on.

I was pleasantly surprised when, a few years ago, Jack Self commissioned this translation for publication in Real Review (the translation, by Fabrizio Ballabio, was soon thereafter partially republished in Scroope, the journal of the Cambridge School of Architecture, at the request of Yasmina Chami and Savia Palate); and I was of course more than happy when my colleague Alessandro Bava asked me to review it for publication in the B-Pro journal of the Bartlett School of Architecture. As we all know, collage and citation are becoming trendy again in some architectural circles – for reasons quite different from those of the late structuralists and early PoMos who were my mentors when I was a student. I have somewhat mixed feelings about the current, post-digital revival of collaging, but I would be happy to restart a discussion we briefly adjourned a generation ago.

Mario Carpo (March 2022)

Publication history:

Metodo e Ordini nella Teoria Architettonica dei Primi Moderni. Alberti, Raffaello, Serlio e Camillo (Geneva: Droz, 1993). 226 pages. Travaux d’Humanisme et Renaissance, 271

“Citations, Method, and the Archaeology of Collage”. Real Review, 7 (2018): 22-30, transl. by Fabrizio Ballabio and by the author; partly republished in Scroope, Cambridge Architectural Journal, 28 (2019): 112-119

Figure 5 – Exhibition ‘Architettura Parametrica e di Ricerca Matematica e Operativa nell’Urbanistica’, Milano, XII Triennale, 1960. View of the exhibition space (Archivio Moretti Magnifico)
Luigi Moretti: The Unity of Algorithmic Language
26/04/2022
algorithmic fitness, Algorithmic Language, critique to empiricism, generative algorithms, Luigi Moretti, parameters, probabilistic outcomes, search space
Marco Vanucci

marco@opensystems-a.com

“The new art must be based upon science, in particular, upon mathematics, as the most exact, logical, and graphically constructive of the sciences.” – Albrecht Dürer

In the newfound spirit that emerged at the end of the Second World War, Rome became the epicentre of a cultural renaissance. Besides the swinging high life impeccably captured by Fellini in La Dolce Vita, the Eternal City shone as a cultural hub, not just attracting actors and filmmakers to Cinecittà but, rather, gathering artists, scientists, philosophers, architects and engineers.

The Valadieresque Piazza del Popolo was one of the epicentres of the city’s cultural life. At number 18, next to Antonio Canova’s studio and in front of Caffe Rosati, home to the literati, were the headquarters of Civiltà delle Macchine, a magazine directed by Leonardo Sinisgalli and house organ of Finmeccanica (an Italian company specialising in aerospace, defence and security), promoting the new technological and scientific zeitgeist. Nearby, in via Sistina, L’Obelisco gallery hosted Giorgio Morandi and Alberto Burri’s shows as well as the first exhibitions in Italy of René Magritte and Robert Rauschenberg. The second wave of La Scuola Romana (or Scuola di via Cavour) was also in full swing: the Caffè Aragno, on via del Corso, and the art gallery Cometa hosted discussions and exhibitions that challenged classicism in favour of new art forms, such as expressionism. The Italian “economic miracle” was thriving under the pressure of industrial development and the prosperous growth of the real estate market. The development of new infrastructure went hand-in-hand with the expansion of the cities through the construction of entire new neighbourhoods for the affluent working class. The deployment of a new apartment block typology, la palazzina [1], stretched far and wide in many parts of Rome and, beyond, across the country. Many notable examples were designed by the protagonists of a new generation of architects and engineers who, while promoting the ideas of modernism, were keen to establish a link between the new city and its architectural history. In the work of Ugo and Amedeo Luccichenti, Vincenzo Monaco, Pier Luigi Nervi, Mario Ridolfi and Luigi Moretti, the formal principles of Mannerism and the Baroque evolved using reinforced concrete.
They experimented with a new formal approach and often expressed new structural possibilities: the autonomous articulation of the façade, its depth, the expressive qualities of exposed concrete, as well as the daring structural solutions, were some of the characteristics of this new repertoire.

It is within this context, characterised by the productive tension between the innovative language of the modern avant-garde and the tradition of humanism, that Luigi Moretti became a central figure in the cultural landscape of the Italian post-war period, certainly one of its brightest interpreters.

Besides its lively cultural scene, Rome remained a place filled with traditional values, rituals, and multiple contradictions. The Italian novelist and Federico Fellini’s long-time screenwriter, Ennio Flaiano, described Italy as “the country where the shortest line between two points is an arabesque”. The paradoxes and inconsistencies of the Italian bureaucracy proved daunting to foreigners and newcomers; to Roman citizens, however, they were daily routine. Moretti navigated this intricate context with pleasure and ease. Many traits of his persona reflected this contradictory environment: he was physically imposing but elegant and refined; eloquent, charismatic and capable of attracting strong feelings of love and hatred; extroverted yet reserved; egocentric but generous with youngsters; an artist with a passion for science, coherent and multifaceted; a keen student of human nature with a strong temperament, which made discussions with him difficult and intimidating.

Moretti, however, had an eccentric side to his character. He rode with his chauffeur through the narrow street of Rome in a black and white convertible Chevrolet with bright red upholstery. One of his collaborators recalled that “he would enter the Roman trattoria like a Renaissance prince, … give precise instructions to waiter and chef…[and] unilaterally decide the menu for all”. [2]

Luigi Walter Moretti was born in via Napoleone III, on the Esquiline Hill, in the same apartment where he lived almost his entire life. He was the son of Luigi Rolland (1852-1921), an architect and engineer of Belgian origin. Having graduated in 1929 from the Royal School of Architecture in Rome, while assisting Professor Vincenzo Fasolo at the chair of restoration, Moretti won a scholarship for Roman Studies. He then worked with archaeologist and art historian Corrado Ricci at the Trajan’s Market, not far from via Panisperna, in Rione Monti, where he later established his first studio. Born one year before the publication of Marinetti’s Manifesto of Futurism, [3] Moretti absorbed the futurists’ conviction in the “magnificent and progressive fate” of technological innovation and translated it into his own theory and practice. His intellectual approach reflected the profile of a nineteenth-century polymath, a mixture of positivistic idealism and passion for the opportunities offered by the new technologies. He paired creativity with methodological rigour; he rooted his knowledge in the humanistic tradition, drawing inspiration from the late Renaissance and the Baroque, while cultivating a sensibility for mathematics and science. [4] For him, mathematics was the field of “purest contemplations” and “applicative wonders”, [5] so art was “to make humans rise to contemplation, to a sort of vivid bewilderment”. [6]

A New Humanism

Unlike the Futurists, who saw history as too heavy a burden to carry, Moretti considered the history of art and architecture as primary sustenance. He understood history as a continuum and Modernism as part of this long narrative. Luigi Moretti thought of himself as the epigone of the ‘mathematical humanism’ that flourished between Urbino and Florence in the quattrocento. [7] Seminal figures such as Luca Pacioli and Piero Della Francesca were from San Sepolcro, halfway between the Medici court and the Montefeltro, and each authored treatises on mathematics. Pacioli studied mathematical and artistic proportion, the golden ratio and its application to architecture. He taught mathematics to Leonardo da Vinci who, in turn, drew the illustrations of the regular solids in De Divina Proportione. [8] History has it that Pacioli also introduced Albrecht Dürer to the study of the human body which, in the 20th century, inspired D’Arcy Thompson’s series on the morphogenesis of forms. Piero Della Francesca, for his part, was trained in mathematics and wrote three treatises [9], covering subjects such as arithmetic, algebra, geometry, solid geometry and perspective. As a young scholar, Piero visited Florence to study Masaccio’s crucifixion in Santa Maria Novella, where Brunelleschi drew the perspective. This collaboration possibly inspired his work on the Madonna di Senigallia, for which he sought the collaboration of Bramante to help with the perspective. It is no accident that Piero Della Francesca’s Flagellation of Christ is considered the first ‘scientific’ perspective ever realised. It was likewise in Urbino that Francesco di Giorgio Martini mastered the art and science of fortifications, designed following the ballistic trajectories of the new firearms technology [10]. In Rome, this tradition spanned from Apollodorus of Damascus to Michelangelo, all the way to Borromini’s divine geometry, where the influence of mathematicians such as Kepler and Leibniz cannot be confirmed but is likely to have played a role. Moretti considered himself to be the incarnation of the baroque spirit. His passion for and study of the Baroque was deeply rooted in the cultural climate of Rome following the First World War, which was the result of a broader re-discovery of baroque architecture, especially by German and Austrian historians [11]. He also had the chance to study with Fasolo and Giovannoni, who were renowned scholars of the Baroque. Moretti considered Michelangelo Buonarroti his spiritual father.
Particularly interesting are Moretti’s studies of one of Michelangelo’s lesser-known but most emblematic works: the Sforza chapel in Santa Maria Maggiore, which, according to Moretti, was configured as “the fullest expression of [his] architectural genius”, a “living archetype of architecture [in which] the constructive feeling is one with the construction [and where] the material, in every aspect of its nature, is … folded, transformed into a work of art, since … it is ‘felt’ by the architect as something of his own blood”.[12]

In 1964, at the 25th edition of the Venice Film Festival, while Deserto Rosso [13] won the Golden Lion as best movie, the Art Film section (boasting a jury presided over by Giulio Carlo Argan and including Gio Ponti) awarded the 50-minute long Michelangelo [14], directed by Charles Conrad and Luigi Moretti. In the movie, the work of Michelangelo is analysed through a series of unusual shots and points of view on his art and buildings. Moretti explained that “the first purpose [of] it is the right figurative reading of the work, above all to shake from the eyes those thin, abstract and now worn images of Michelangelo’s masterpieces; images [which are] already false in themselves, since photographs [taken] with wide angle [lens … present] images that are almost always impossible in real life. The second purpose … is that of reading according to a true order that illuminates the compositional spirit of the works … [This] is of course the most arduous, and the commentary of the film [is to] try to facilitate it”.[15] In the documentary, Moretti made use of dramatic lighting, in the manner of Caravaggio’s paintings, to accentuate the theatrical atmosphere, and avoided symmetric shots to render the work from an unusual angle. Particularly interesting is his reading of the Cappella Medicea in Florence, where he placed the camera on the ceiling, offering the opportunity to view the compressed interior spaces. Here, the director seems to be influenced by his professor Vincenzo Fasolo, who used to work through axonometric sectional views to unveil the tectonic character and planimetric sequences of space. A similar critical approach would be used by Bruno Zevi, a few years later, to produce the models and the drawings that punctuated Michelangelo’s exhibition at Palazzo delle Esposizioni [16].

Figure 1 – Study on visibility – Studies on visibility for the football stadium (Archivio Moretti Magnifico)
Figure 2 – Study on visibility – Studies on visibility for the tennis stadium (Archivio Moretti Magnifico)

The New Century of Science

Moretti’s work and approach can be understood by examining the cultural context within which he operated and where a new alliance between art and science was being defined.

At the turn of the century, the proliferation of new scientific theories challenged the axioms of modern physics and introduced ideas of complexity and chaos. Babbage’s first programmable calculator, Ada Lovelace’s first computer programs, and Boole’s binary code, together with the dissemination of Hollerith’s punched card tabulating machine, marked the beginning of the new era of mechanised binary code and semiautomatic data processing systems. In 1936, Alan Turing published On Computable Numbers,[17] describing what would become known as the Turing machine; his later focus on neurology and physiology would eventually pave the way for artificial intelligence. On the back of this experimentation with the first computational machines, multiple applications became possible: fractals, theory of complexity, chaos theory, thermodynamics, neural networks, generative algorithms, etc.

Moretti was also aware of the evolutionary theory of Charles Darwin and, on the pages of the USL Paris Review [18], among a collage of images of Antonelli, Guarini and Botticelli, he laid out images of the morphological evolution of biological specimens taken from D’Arcy Thompson’s On Growth and Form.[19] Moretti’s fascination with biology and natural systems supported his idea that form could be mathematically described and computed, which became a founding principle in his further search for a new aesthetic in architecture and the arts. These scientific breakthroughs deeply influenced Moretti, who was searching for a more objective approach to the problem of architecture and city planning in the context of the post-war reconstruction.

In 1951, in the pages of Civiltà delle Macchine, Sinisgalli synthesised the new spirit [20]:

“Today, science comes to draw the skeleton of a crystal and to identify the weak points of a beam … These surveys beyond the visible, these searches for comparative phenomena in tools and materials, they allowed us to clarify the meaning of certain provisions which only seemed owned [by] the spirit, and are instead virtues of matter. Art must retain control of the truth, and the truth of our times is of a subtle quality, it is a truth that is of an elusive nature, probable more than certain, a truth “on the edge” which borders on the ultimate reasons … Science and Poetry cannot walk on divergent roads. Poets must not have [a] suspicion of contamination. Lucretius, Dante and Goethe drew abundantly [on] the scientific and philosophical culture of their times without clouding their vein. Piero della Francesca, Leonardo and Dürer, Cardano and della Porta and Galilei always … benefited from a very fruitful symbiosis between logic and fantasy.”

Moretti shared the futurists’ political views, which were aligned with fascist ideology. At the end of his university career, in 1932, he met Renato Ricci, then the president of the Opera Nazionale Balilla [21] (ONB), who appointed him ONB’s technical director, succeeding architect Enrico Del Debbio. In this role, Moretti designed several youth centres in Piacenza, Rome (Trastevere), Trecate, and Urbino. In 1937, he took over the design and masterplan for the Foro Mussolini (now renamed Foro Italico), where he created one of his masterpieces, the Casa delle Armi (1933), a rationalist structure subverted by the elegant use of curved lines and the masterful control of natural light. In 1938, Moretti participated in the design of the EUR (Esposizione Universale Romana), a planned (but never completed) development in the southern part of the city, intended to host Rome’s world fair.

In 1942, Moretti disappeared from public life. Once he reappeared, he was briefly imprisoned in 1945 for his collaboration with the regime. In the prison of San Vittore, in Milan, he met Alfonso Fossataro, an entrepreneur and builder with whom he partnered on several developments right after the war. Fossataro and Moretti established the development company Cofimprese, under which Moretti worked on a series of hotel buildings and realised the Corso Italia complex in Milan. The Il Girasole house, in the Parioli neighbourhood of Rome, belongs to this period (1949) and is considered an early example of postmodern architecture. [22] The Roman palazzina captured the attention of Robert Venturi, who included it in Complexity and Contradiction as an example of ambiguous architecture, halfway between tradition and innovation. In turn, years later, the Swiss architectural theorist Stanislaus von Moos argued that the broken pediment of the Vanna Venturi House is a clear reference to Moretti’s project. [23] In the same period, Moretti designed some villas along the Tyrrhenian coastline: the most famous of these, La Saracena and the nearby La Califfa, are fine examples of mid-century modernism.

During those years, Moretti entertained a relationship with the Roman aristocracy, the cultural elite, and the Vatican. Studio Moretti was in Palazzo Colonna, in Piazza Santi Apostoli, a regal palace in the heart of Rome which housed the famous Galleria Colonna. Prince Colonna occupied the most important secular position in the Vatican, and he constantly received important visitors: from monarchs to cardinals to prime ministers. Moretti’s office overlooked the main cortile of the palace, so that he and his staff (mostly architects and geometri) would enjoy a daily parade of celebrities and authorities, some of whom would become clients.

Figure 3 – Architettura parametrica 1960. Football stadium: Diagrams of the curves identifying optimal lines of sight (Archivio Moretti Magnifico)
Figure 4 – Architettura parametrica 1960. Cinema hall: Diagrams of the curves identifying optimal lines of sight (Archivio Moretti Magnifico)

Spazio

The post-war period was the golden age for Moretti: his architectural production blossomed in the context of a thriving economy which propelled real estate development across the country. This was also the period of his intellectual maturity, when Moretti developed his sharpest and most relevant reflections on architectural theory.

Moretti’s reputation with the Roman intelligentsia was compromised by his right-wing political views. Bruno Zevi was probably the one who best understood his talent, but he was also his harshest critic. The world of architecture in Rome was dominated by these two figures, so distant and yet so very close. On the one hand, Zevi: a Jew and a socialist, exiled during the war by Mussolini; an academic historian, an acute scholar and supporter of the Modern Movement with a predilection for Frank Lloyd Wright and Alvar Aalto. On the other hand, Moretti: a conservative catholic, a supporter of the Fascist regime and an active practitioner banned from academia. They each edited an architectural journal which they used as a means to trumpet their architectural ideas. Zevi was, at one time, Moretti’s best interlocutor and strongest enemy. Despite their rivalry, their relationship could be, at times, relaxed and even civil. What is certain is that they probably shared more than they were ready to publicly admit: Zevi secretly hoped that Moretti would join the Association for Organic Architecture (APAO), a movement founded in 1945 by Zevi himself, Luigi Piccinato, Mario Ridolfi, Pier Luigi Nervi and others, aiming at creating a new school of thought, in open opposition to the reactionary model of the Faculty of Architecture of Rome. Legend has it that Zevi tried to convince Moretti to join APAO, promising to make him the greatest Italian living architect. Moretti refused and was for many years condemned to oblivion by the cultural elite. Despite the antagonism of his many detractors, in 1950, Moretti founded the magazine Spazio, [24] with a clear mission to find connections between different forms of art: from painting to architecture, from sculpture to film and theatre. Spazio burned bright in the Roman intellectual scene and, despite the stigma surrounding Moretti, became a beacon for the visual culture of the time, an elegant cultural project that nobody could dare ignore.

Spazio represents one of the most important moments in Luigi Moretti’s theoretical output. Although only seven issues were published (the magazine ceased publication in 1953), the writings Moretti published there represent his most relevant critical framework and constitute the heart of his theoretical production and cultural legacy.

Moretti was editor, editorial director and writer of most of the articles in the magazine. The opening editorial of the first issue, titled “Eclettismo e unità di linguaggio”[25] (eclecticism and unity of language), can be considered Moretti’s programmatic manifesto. The “Unity of Language” was not intended as a fusion of different artistic languages but rather their consonance: Moretti was aware of the differences between artistic languages, and he knew that, despite some emerging points of contact, they remained separate due to their “algorithmic and closed” nature. He used the term algorithmic to describe the tendency of different systems to form the general structure of a building or piece of art. The way a particular building deals, for instance, with the modulation of light, the organisation of space and its bearing was considered by Moretti the algorithmic DNA of that structure. In other words, he conceived of architecture as a “reality of pure interrelations”.[26] Moretti believed that the algorithmic nature of the various artistic languages could finally converge and speak in unison.

“There are some periods of civilization that take shape and character from the splendour of a single language; others, very rare, in which the various expressive languages find harmony (…) and together they reach a dense maturity; they are the happy times of Pericles or of the early Renaissance or of the extraordinary seventeenth century. A unitary language is born, formal process of sorting and classification of the infinite parameters of reality and their relationships. Space thus becomes unitary, resolvable, and expressible in every point, and [a] mirror of a new balanced unity of mankind”. [27] [28]

Then, in “Genesi di Forme dalla Figura Umana”,[29] in Spazio’s second issue, Moretti described the role of the human figure in the history of art. While these first two articles for Spazio were concerned with general topics, from the third issue onwards he started to explore more specific aspects that would unveil his operational approach to architecture. In “Forme Astratte nella Scultura Barocca”,[30] Moretti discussed how the non-figurative elements of Baroque sculptures present a formal richness that could be subtracted from the composition and yet retain their autonomous aesthetic value as abstract forms. Analysing Baroque sculptures, he noted that “they reveal some areas of their plastic application resolved in purely formal terms, far from any pre-eminent reference to an objective reality, so that it does not seem arbitrary to state that they belong to the abstract formal world”. A case in point is the sculptural palimpsest accompanying the four figures in Bernini’s Fontana dei Fiumi in Piazza Navona, where the landscape surrounding the human figures retains an autonomous aesthetic value.

The contemporaneity of historical art forms and the relevance of history in the world of today were often questioned and studied by Luigi Moretti. In “Trasfigurazioni di strutture murarie”[31] and “Valori della modanatura”[32] he presented a “close reading” of architectural elements: in the first article he tackled the figurative abstraction of mouldings in Romanesque architecture, which he considered the most abstract in their pictorial simplicity, and yet very concrete in their constructive logic. Moretti juxtaposed on the same page images of the Duomo di Pisa and Mondrian’s paintings. Signs, traces and geometric textures used in pictorial compositions become, therefore, precious matrices for composing architectural plans, sections and elevations. In the second article, Moretti questioned how cornices and profiles could be considered not as decorative elements but as pure form, as the only non-figurative elements of architecture that determine its plasticity and volumetric articulation. In “Discontinuità dello Spazio in Caravaggio”[33] and “Spazi-Luce nell’Architettura Religiosa” he continued to explore the role of light in the dynamic articulation of space. He argued that Caravaggio’s figures are always portrayed from the side, never frontal nor symmetrical, deconstructing mass and space through the interplay of light and shadow, with dynamic results. Here, Moretti made a subtle reference to his project for Corso Italia in Milan, where he grafted a cantilevering mass protruding sideways from the urban street front.

Perhaps it is with “Strutture e Sequenze di Spazi”[34] that Moretti produced one of the most relevant critical studies for the culture of his time. In it, Moretti delved into the problem of reading and describing space. If the focus in considering Caravaggio was on perceptive glimpses of space, here the aim was to investigate precisely the relationship between the parts and the whole by studying sequences of rooms articulated through the compressions and dilations of space. He systematically studied and analysed these aspects through a series of historical examples: Villa Adriana, Guarino Guarini’s church of San Filippo Neri in Casale Monferrato, Laurana’s Palazzo Ducale in Urbino, and many others. For each of these projects, Moretti produced a series of models where the interior space is represented as a volumetric extrusion. With these, he developed an autonomous spatial reading of architecture not dissimilar to what Eisenman developed in the 1960s and 1970s with the study of forms as pure architectural syntax. Alongside the models are a series of drawings and diagrams describing the density of the different spaces. Here, the form, the structure and the space itself are represented as a dynamic tension between the immaterial nature of space and its material representation.

It is, however, in “Struttura come Forma”[35] that Moretti elaborated the relationship between structure and form (critiquing the approach that prioritises form over structure) and, for the first time, talked about parametric architecture. Starting from the Vitruvian triad (stability, utility, beauty), Moretti argued that, historically, architecture oscillated between prioritising structure (Brunelleschi, Gothic and Roman architecture) or form (Baroque, Renaissance and 19th-century architecture). He then reflected on the direction function>form pursued by the Rationalists and the Bauhaus. He considered the “function” as parameters determining the space and its concatenation. Either these parameters are very limited, so that space can be easily deduced with scientific rigour, leading to the realm of pure technique (an extreme case of what he called parametric architecture); or they are multiple and not clearly definable, so that the function is necessarily approximate, and only further articulation of the structure can define it more precisely. Here we return to the structure>form approach, where structure is, once again, understood as a complex set of relationships. The text is accompanied by illustrations by a young architect, Guido Figus, who worked on an iterative series of roof structures articulated through folded plates resembling origami. Figus’ drawings are fascinating: rather than proposing an optimum solution, they explore a series of possible (parametric) permutations of the structure.

Figure 5 - Exhibition 'Architettura Parametrica e di Ricerca Matematica e Operativa nell’Urbanistica', Milano, XII Triennale, 1960. View of the exhibition space (Archivio Moretti Magnifico)
Figure 6 - Exhibition 'Architettura Parametrica e di Ricerca Matematica e Operativa nell’Urbanistica', Milano, XII Triennale, 1960. View of the exhibition space (Archivio Moretti Magnifico)

An Other Art

The movement initiated with Spazio continued after the magazine ceased publication. On 26 June 1954, in via Cadone, Rome, Galleria Spazio opened its doors with its first exhibition, titled Caratteri della Pittura d’Oggi (Characters of Today’s Painting). The gallery was established through a collaboration between Luigi Moretti and the French art critic Michel Tapié de Celeyran. Jazz musician, curator, art critic and all-round cultural agitator, Tapié maintained close relationships with art galleries across Europe and North America that allowed him to promote and showcase his roster of artists. He was also the author of Un Art Autre,[36] a compendium about a “new art” of signs and matter, in which he promoted and gave wide currency to the French style of abstract painting popular in the 1940s and 1950s called Tachisme. This movement developed as a reaction to Cubism and was characterised by informality and an absence of premeditated structure, conception or approach (sans cérémonie).

The turning point in Tapié’s career was his friendship with the artist Georges Mathieu. This would soon lead to his meeting with Moretti, through the Roman artist Giuseppe Capogrossi, whose large canvases filled with cryptic glyphs and dynamic forms were displayed throughout Moretti’s studio and acted as an inspiration for his architecture.[37]

Moretti was seduced by Tapié; he recognised his great potential and, with him, seized the opportunity to promote contemporary art, pursuing the unity of languages and his eclectic vision. Under Moretti’s directorship, the art critic became “artistic consultant” of the Spazio gallery. Among the large group of artists selected for the gallery’s first exhibition were some on the brink of becoming internationally acclaimed: Pollock, Francis and Tobey from the States; Capogrossi and Dova from Italy; Appel and Jorn of the CoBrA Group, together with Wols; and Mathieu and Riopelle from France. In the catalogue of the exhibition Moretti wrote:[38] “The intensity, the splendour, the explosion of signs given to the surfaces, the brightness and power of relations, the pure relations these signs compose, are its justification”. He also wrote of “The dramatic beauty, the desperate egoism of these adventurous facts that today occur in art”.

Here, Moretti claimed that painting was of importance only to itself, “only tied to the personal algorithm, to the personality of the artist”. The joint venture between Moretti and Tapié, together with artists such as Mathieu and Capogrossi, represented a clear attempt to find new aesthetic and philosophical ways to make art and science converge.

In 1954, in the pages of the US Lines Paris Review,[39] Tapié claimed:

It is time to reconsider the notion of rhythm, no longer by way of the only possible system of whole numbers, but rather by way of real and hypercomplex numbers; the notion of structure, no longer bound irrevocably to the ruler and compass, but to the richer and more general notions of continuity and contingency of present topology, within which classical geometry is now only an extremely specialised little chapter; the notion of content, no longer as a more or less theatrical subject-pretext, but as complying with the norms of scientific psychoanalysis; the notion of space and composition, no longer tied to a static formalistic logic and to an “equilibrium” of the same order, but rather to Galois’ Theory of Groups, to Cantor’s Theory of Wholes, to the present metalogic and to Lupasco’s dynamic logic of the contradictory.

Moretti and Tapié would often wander through the streets of Rome searching for artists and “new voices”. Among them was the artist Carla Accardi who, years later, recalled visiting Villa Saracena in Santa Marinella with Moretti, Tapié and the American artist Claire Falkenstein, who had been commissioned to design the villa’s gate.

The Roman architect and the French critic shared a common vision and a commitment to evolving the artistic language. After Spazio, they continued to collaborate for many years, far beyond the closure of the gallery, each following his own artistic language but sharing a precise vision: the critic called it Morfologie Autre, while the architect referred to Strutture di Insiemi, a term Moretti borrowed from the study of Galois’ theory of groups.[40] In 1960, they co-founded the International Centre of Aesthetic Research in Turin, Italy, a facility for the study and exhibition of art, as well as for the publication and dissemination of critical, investigative, or theoretical works on art.

In 1965, they co-authored the book Le Baroque Generalisé: Manifeste du Baroque Ensembliste,[41] a beautiful and rare publication in which the language of the Baroque is articulated through mathematical formulas. This book synthesises Moretti’s fascination with a more scientific approach to architecture and his love for art, the Baroque and the unity of language.

However, Moretti continued to foster collaboration and intellectual exchange. One such association was with the French poet Pierre Pascal, son of the chemist Paul Pascal; an anti-Gaullist and collaborator with the Vichy government, he had been sentenced in absentia to life imprisonment. Pascal left France in 1944 and took refuge in Italy, where Mussolini initially offered him hospitality at the Vittoriale on Lake Garda before he later moved to Rome. There, he found accommodation at Palazzo Caetani, which became the seat of the Éditions du Cœur Fidèle, a publishing company that Pascal co-founded with Moretti. The Cœur Fidèle would publish a forest of hendecasyllabic and alexandrine verse and rhythmic prose: from the Persian quatrains of Omar Khayyam to Poe’s Le Corbeau (deciphered in its arithmetic, geometric and gematric keys), from the Livre de Job to the Apocalypse of St. John.[42] The last is certainly the most significant: it is an interpretation in French alexandrines, with sixteen prints of Albrecht Dürer’s Apocalypsis cum figuris[43] taken from the original woodcuts used for the prints of 1498 and 1511. The book is of exquisite quality and represents the apex of Moretti’s erudition, which borders on esotericism: a testament to his belief that his intellectual work was rooted in the line drawn by the great masters of the past.

Ricerca Operativa

Moretti’s passion for science and mathematics led to a friendship with the engineer and mathematician Bruno De Finetti. They may have first met in Via Panisperna, in Rome, where Moretti, as a young graduate of the school of architecture, opened his studio, and where De Finetti, a child prodigy and graduate in applied mathematics from the University of Milan, attended the seminars at the Institute of Statistics. At the time, Enrico Fermi was there leading the ‘Panisperna boys’: Edoardo Amaldi, Ettore Majorana, Bruno Pontecorvo, Franco Rasetti and Emilio Segrè,[44] a group of bright young scientists who opened the door to nuclear reactions and, later, to the atomic bomb.

Before collaborating with Moretti, De Finetti had been involved in studies on the economic viability of construction. In the magazine La Città,[45] the architect Giuseppe De Finetti (Bruno’s cousin) invited him to develop a mathematical approach in which, thanks to a series of formulas establishing a relationship between land value, cost of construction and rental value, they could calculate the optimum composition of a building. Such an approach would be further investigated by De Finetti in his collaboration with Moretti. Having spent many years at the University of Trieste, De Finetti arrived in Rome in 1954 as professor of Mathematics at the Faculty of Economics. He was one of the first scholars to lecture on Ricerca Operativa[46] (operational research), a branch of applied mathematics which was then making its way into Italian academia and intellectual life. It consisted of analysing and resolving complex decisional problems through the development of mathematical models and quantitative methods (simulation, optimisation, etc.) to provide supporting insights in the decision-making process. It is worth noting that, around the same period and with different purposes, Bruno Zevi was elaborating his theory of Critica Operativa,[47] a pedagogic and cultural enterprise which aimed to create a bridge between history and modern architecture. Zevi advocated the actualisation of the immutable characteristics of historical architecture, read and reinterpreted in a contemporary key.[48]
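The source does not reproduce De Finetti’s formulas, but the logic described (relating land value, construction cost and rental value to find the optimum composition of a building) can be sketched as a toy optimisation. All figures and the cost function below are invented for illustration; they are not De Finetti’s model:

```python
# Hypothetical sketch of an "economic viability" calculation in the spirit
# described in the text: relate land value, construction cost and rental
# value, then pick the building composition (here, number of floors) that
# maximises profit. All figures and the cost model are invented.

def profit(floors, land_cost, base_cost, height_penalty, value_per_floor):
    """Profit of a building with the given number of floors.

    Construction cost grows superlinearly with height (taller buildings
    need heavier structure and services), while rental value is assumed
    proportional to floor area.
    """
    construction = base_cost * floors * (1 + height_penalty * floors)
    return value_per_floor * floors - land_cost - construction

def optimal_floors(land_cost, base_cost, height_penalty, value_per_floor, n_max=50):
    """Exhaustively search 1..n_max floors for the most profitable composition."""
    return max(range(1, n_max + 1),
               key=lambda n: profit(n, land_cost, base_cost,
                                    height_penalty, value_per_floor))
```

With value_per_floor = 120, base_cost = 100 and height_penalty = 0.02, marginal cost overtakes marginal value after five floors, so the search settles on a five-storey composition regardless of the (fixed) land cost.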

The problem of establishing a link between theory and practice, between thinking and making, was clearly a defining trait of the Italian culture in the post-war era.

During those years, Moretti was developing his studies on parametric architecture, an approach consisting of the application of mathematical theory to architecture and urbanism. Moretti wanted to go beyond the declaration of theoretical principles, and asked De Finetti to bring his collaboration to this new field of research: in 1957 they became, respectively, president and vice-president of the newly founded Institute of Mathematical and Operations Research for Urbanism (IRMOU). With them were a group of young mathematicians, architects and engineers: Anna Cuzzer (then married to Paolo Portoghesi), Giovanni Cordella, and Cristoforo Sergio Bertuglia. Moretti’s idea was to apply a more scientific approach to the challenges of post-war reconstruction in Italy. IRMOU, in turn, aimed at employing mathematical and statistical methodologies to provide solutions that were quantitatively and qualitatively more effective for a truly modern country. Bruno De Finetti played a particularly important role, not just as a prestigious scholar but also because he introduced the Institute to the use of computational machines, such as the IBM 610, a fixed-point decimal electronic calculator used for probabilistic computation. De Finetti purchased the machine for the University and installed it in via Ripetta, establishing the institution’s first computing centre.

At the time, Moretti was involved in some of the most important commissions of his career. In 1958 he led the team charged with creating the new Olympic Village for the XVII Olympics in Rome (1960).[49] Between 1960 and 1966, following up on the masterplan developed for the Olympics, Moretti, together with Cafiero, Guidi and Libera, designed and built the housing project Quartiere INCIS Decima, where the buildings were arranged following the Roman castrum.

Abroad, Moretti built the Watergate Complex in Washington (which would become infamous in the wake of the 1972 political scandal) and Montreal’s Stock Exchange Tower, both commissioned by the real-estate company Società Generale Immobiliare.

In 1968, he was commissioned to design a sanctuary at Tabgha, on Lake Tiberias in Israel. The project was approved by the Vatican but was never built due to the outbreak of war between Israel and Palestine. Moretti also had commissions in Kuwait (including the headquarters of the Bedouin Engineers’ Club and the Bedouin Houses) and in Algeria (the Hotel El Aurassi, the Club des Pins and a series of schools and residential projects).

Moretti was also involved in the new masterplan for the city of Rome and, with IRMOU, carried out studies to analyse and alleviate traffic in the capital. These projects led to the plan for the new subway branch Termini-Risorgimento, which culminated in the realisation of the Pietro Nenni bridge over the Tiber, as well as the new car park under Villa Borghese, which opened in 1973. Around the same period, he also realised the project for the Thermal Baths in Fiuggi, where he mastered the use of reinforced concrete.

Figure 7 - Study on Borromini: Sant’Ivo alla Sapienza Roma, 1967. Spatial interpretation: Juxtaposition interior and exterior space. (Archivio Moretti Magnifico)
Figure 8 - Study on Borromini: San Carlino alle Quattro Fontane Roma, 1967. Spatial interpretation: Juxtaposition interior and exterior space. (Archivio Moretti Magnifico)

Architettura Parametrica

Having spent some twenty years searching for a new relationship between architecture and mathematics, in 1960 Luigi Moretti was invited to the Milan Triennale to present the work and studies carried out with IRMOU on Parametric Architecture. While IRMOU’s work mostly focused on urbanism (urban planning, urban flows, etc.), for the exhibition at the Triennale Moretti developed parametric studies of sport and leisure facilities: a football stadium, an aquatic centre, a tennis arena and a cinema. At the time, football stadiums and sports arenas in general were relatively new typologies. In addition, unlike many of today’s venues, they were mono-functional. For this reason, stadia were the perfect typology for establishing parametric relationships between different components: the position of the spectators in relation to the goals, the sightlines between every seat and the different areas of the pitch, etc. Moretti and his collaborators elaborated mathematical formulas to describe these dependencies. The mathematical models produced data points representing the optimum viewing areas of the stadium. The data points were elaborated using an IBM 610 Auto-Point computer.
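Moretti’s own formulas are not reproduced in the source, but the kind of dependency he parametrised can be illustrated with the standard sightline (“C-value”) calculation used in modern stadium design: the clearance each spectator’s line of sight has over the head of the spectator in front. Treating the riser height as the free parameter, much as Figus explored permutations of a roof, gives a minimal sketch (the formula is the modern standard one, not Moretti’s, and all dimensions are illustrative):

```python
# Illustrative sightline ("C-value") model for a stadium tier, in the spirit
# of Moretti's parametric studies. The formula is the standard modern one,
# not Moretti's own; all dimensions below are invented for the example.

def sightline_clearance(d, t, r, n):
    """Clearance C of a spectator's sightline over the spectator in front.

    d: horizontal distance from the front spectator's eye to the focal point (m)
    t: tread depth of each seating row (m)
    r: height of the front spectator's eye above the focal point (m)
    n: riser height between consecutive rows (m)

    The rear eye sits at (d + t, r + n); its line to the focal point passes
    the front spectator at height d * (r + n) / (d + t), so the clearance
    is that height minus r.
    """
    return d * (r + n) / (d + t) - r

def min_riser(d, t, r, c_target):
    """Smallest riser height giving at least the target clearance (inverts C)."""
    return (c_target + r) * (d + t) / d - r
```

For a row 10 m from the focal point (tread 0.8 m, eye 1.2 m above the pitch), a 0.35 m riser yields a clearance of about 0.235 m, comfortably above the 0.09–0.12 m usually considered adequate; sweeping such parameters row by row produces exactly the kind of data points that a machine like the IBM 610 could tabulate.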

Moretti explained the “necessity to formulate new logical chains aimed at identifying new architectural forms and their concatenation, dependent on various and complex functions”.[50] For Moretti, “each logical area that makes up the sequence of this new formulation of architectural thought must be the receptor and projective of mathematical thought, that is to say, it needs to be quantifiable … The solution is based on the determination of the elements conditioning the forms as a consequence of the functions that are required of it. That is to say: solutions based on qualifiable parameters, parameters that, one by one and in their quantifiable interrelation, fix the limits within which we identify and draw the forms that fulfil those functions”. And again, “the definition of the parameters must be called upon to assist the techniques and instruments of the most current scientific thought; mathematical logic, operational research and computers. To the study of this approach and to the new method and theory specified in its schemes and verified by the first exciting results, I gave the name of Parametric Architecture”. Moretti elaborated his parametric manifesto on the pages of Moebius magazine, in an axiomatic text which established the heuristic principles of parametric architecture.[51]

Bruno Zevi was intrigued by this new approach. However, confirming his opposition to Moretti, he was far from convinced. Following the opening of the exhibition, Zevi wrote a sceptical review on the pages of L’Architettura Cronache e Storia:

“Everything that serves to give us distance from empiricism and rationalism in design should be applauded. Especially in a moment like the current one in which the characteristic of the [working method] of most Italian architects is careless … A parametric method encompasses the tools, procedures, and objectives, but to what end? For these questions, electronic brains are barely useful, brains are needed. If parametric architecture is not to remain a brilliant intellectual exercise, it is indispensable that research is sustained by a high moral inspiration. For now, the idea surprises and fascinates us; tomorrow, it may convince”. [52]

Here, Zevi aired a certain dissatisfaction with the unfulfilled promises of parametric architecture, a scepticism that, despite the great advances in parametric and algorithmic design, many still share today.

However, Luigi Moretti was aware of the “high moral inspiration” required to pursue the new course of architecture. In a lecture at the Accademia Nazionale di San Luca in 1964, he claimed that “the new basic meaning” of making architecture must be identified with the “genius of a new morality, of an interior commitment to working in accordance with justice, in a superior economy, for our fellow men. This imposes a dedication, a seriousness in research and investigations and, above all, an underlying humility”.[53]

Figure 9 - Spazio, n. 7 Rome, December 1952 – April 1953 - Michelangelo. Model of the church of S. Giovanni dei Fiorentini in Rome. Representation of the internal volumes (Archivio Moretti Magnifico)
Figure 10 - Spazio, n. 7 Roma, December 1952 – April 1953 from 'Strutture e sequenze di spazi', article by Luigi Moretti. Model of Guarino Guarini's church of S. Filippo Neri in Casale Monferrato. Representation of the internal volumes (Archivio Moretti Magnifico)

Epilogue

Moretti passed away suddenly in 1973. In his obituary, Zevi didn’t spare words of either admiration or criticism for his beloved enemy: “He possessed an authentic artistic temperament integrated with a notable if non-methodical culture and an extraordinary professional capacity. He could have assumed a determining role in the depressed Italian atmosphere; but a spasmodic desire for individual affirmation associated with an intellectualism like that of D’Annunzio, greedy for refinements and luxuries, reduced his creativity to insufferable conventionality. A waste in civil and human terms”.[54]

Moretti remained a controversial figure for many years after his passing. His legacy was long ignored or undervalued. However, much of the research and many of the questions raised by Moretti during his architectural life remained relevant and some still haunt architects today. What is the role of history in designing the city of today? What is the relationship between architects and technology? Is technology merely a tool to make or also a tool to think?

Moretti was aware of the necessity of not parametrising all things. He warned against “the dictatorship of the algorithm”. The Roman architect knew that his research was still far from governing complex phenomena with suitably complex algorithms. He knew that architects “will have to educate the mind to scientific rigor knowing how to leave [their] imagination and expressive freedom intact, since free formal expression, personal lyricism, will always find a place in the spaces that the parametric functions will leave free”.[55]

One year before his death, Luigi Moretti offered an interesting insight. In this brief excerpt from a conference titled “Technology and the ecological problem”,[56] he warned against the uncritical endorsement of new technologies, exposing the limits of his own thinking. While he seemed to have no doubt regarding the computational turn in architecture, he distanced himself from any technocratic orthodoxy.

The authentic humanism in ancient civilization … was indeed a synthesis and integral consciousness of abstract thought … It is with the Enlightenment that an approximate rationality has entered, the production of algorithmic thought as something absolutely proper, acceptable, indeed dutiful and characteristic of man. … The whole critical situation of today’s world, from ecology to ethics, economy, politics, religion and spirituality is the result of two errors … Precisely:

1) the logic of algorithmic developments without limits;

2) [the validity of] this logic …, whatever the dimensions of the empirical field on which it operates.

Technologies produce mechanisms [that are] expressions of particular logical chains, dependent [on] or aroused by other logical chains. … Everyone now feels that it is not possible to continue with them indefinitely. This is obvious; … in the laws of technological development there is a need for a limit. … There is an asymptotic point for any technology beyond which it is in vain, it is foolish to proceed. … The limit of a technology is always inherent in it; it is equivalent to its death and death is an inseparable moment of the vital process in every organism …: we take logic and its algorithmic developments as valid whatever the dimensions of the empirical field on which they operate. This is false: the logical structures are NOT valid for each dimension of the field on which they are affected.

When I was preparing the exhibition of parametric architecture, which had this statement as a conducting background, Prof. De Finetti, one of the most acute intellects in today’s world, suggested to me as a slogan and introduction a stupendous step by Galileo, which roughly says: “if you want to make an animal fifty times bigger you will not have to enlarge the bones and structures fifty times, you will have to change material and study another completely different structure, otherwise you will make a fantasy monster” …
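The passage De Finetti suggested rests on Galileo’s square-cube law: under a linear scale factor $k$, weight grows with volume while the load-bearing strength of bones grows only with cross-sectional area, so the stress in the material rises with scale:

```latex
% Square-cube law behind Galileo's remark: weight (volume) and strength
% (cross-section) grow at different rates under a linear scale factor k.
W \propto L^{3} \;\Rightarrow\; W' = k^{3} W, \qquad
A \propto L^{2} \;\Rightarrow\; A' = k^{2} A, \qquad
\sigma' = \frac{W'}{A'} = k\,\frac{W}{A} = k\,\sigma
```

At $k = 50$ the stress on the bones is fifty times the original, hence the “fantasy monster”: the same logic that works at one dimension of the empirical field fails at another, which is precisely Moretti’s point.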

Now, in today’s world, the dimensions are enormously changed; … we continue to use concepts and logic, in the empirical life of our global community … and mustn’t the exceptional dimension of our empirical world lead to a completely new formation of knowledge (of thought)? How can we have logical chains that conclude with certainty as a good old syllogism? As we know, they will be only probable conclusions and consequent statistically verifiable situations. This concept of truth according to probability and statistics has for some time now come alive in every beat of our thought. [57]

On the one hand, he warned against the application of algorithmic processes to all the dimensions of knowledge, establishing boundaries between what can be known through algorithms and what should be left in the hands of the architect. On the other hand, the critique of empiricism led Moretti to affirm a new form of scientific thought that advances by probabilistic attempts rather than by absolute truths. Thus, not unlike the logic of generative algorithms, Moretti understood that, in the new world, the algorithmic fitness of different parameters is to be found within the boundaries of a “search space” where truth constantly fluctuates and, far from being univocal, has multiple probabilistic outcomes.

References

1 “Palazzina. This term, which came into use in the Renaissance as a term of endearment for palazzo, originally designated small buildings located within parks and gardens intended to offer asylum during parties and hunting parties … La Palazzina … thus began its disruptive parable towards the city in the 1920s, replacing the continuous fabric typical of the ancient city [with] a discontinuous fabric in which the building volumes are placed side by side without any formal relationship connecting them, divided only by a thin strip of green, usually divided by the high walls erected on the boundaries of the lots.” (P. Portoghesi, The Angel of History, [Bari: Laterza, 1982])

2 Adrian Sheppard, “Luigi Moretti: a testimony” (Montreal: 2008)

3 Marinetti wrote the manifesto in the autumn of 1908 and it first appeared as a preface to a volume of his poems, published in Milan in January 1909. It was published in the Italian newspaper Gazzetta dell’Emilia in Bologna on 5 February 1909, then in French as Manifeste du futurisme (Manifesto of Futurism) in the newspaper Le Figaro on 20 February 1909. Luigi Moretti was born in Rome on 2 January 1907.

4 “To develop a complete mind: Study the science of art; Study the art of science. Learn how to see. Realize that everything connects to everything else.” Leonardo Da Vinci

5 B. Baldi, Le Vite de’ Matematici, 1587–1595, cit. in F. Abbri, E. Bellone, W. Bernardi, U. Bottazzini, P. Rossi (eds), Storia della Scienza Moderna e Contemporanea. Dalla Rivoluzione Scientifica all’età dei Lumi 1, 136, TEA, 2000.

6 L. Moretti, “Forme Astratte nella Scultura Barocca”, Spazio 3 (1950), 20.

7 Andre Chastel introduced the notion of “mathematical humanism” in his book Centri del Rinascimento: Arte italiana 1460-1500 (Milan: Feltrinelli, 1965). Chastel identifies three strands of humanism and specifies that the mathematical one “finds its most important base in Urbino” (41), noting that “the case of Luca Pacioli is not isolated: on the contrary, it well represents the intellectual environment of the quattrocento, an environment in which theory and practice walk hand in hand without, however, adapting themselves to one another perfectly” (47, 49).

8 Luca Pacioli, De Divina Proportione, Aboca Museum, San Sepolcro, 2009

9 Trattato d’Abaco (Abacus Treatise), De quinque corporibus regularibus (On the Five Regular Solids) and De Prospectiva pingendi (On Perspective in painting).

10 Scaglia, Gustina, Francesco Di Giorgio: Checklist and History of Manuscripts and Drawings in Autographs and Copies from Ca. 1470 to 1687 and Renewed Copies, Lehigh Univ Pr, 1992

11 Literary works of architectural history such as Der Cicero by Jacob Burckhardt (1855), Studien zur Architektur geschichte des 17. und 18. Jahrhunderts by Robert Dohme (1878), Renaissance and Baroque by Heinrich Wölfflin (1888), and Barock und Rococo by Auguste Schmarsow (1897), prepare the ground; added to them at the beginning of the twentieth century were Michelangelo als Architekt by Heinrich von Geymüller (1904) and Die Entstehung der Ba rokkunst in Rome by Alois Riegl (1908). In the aftermath of the Great War, came Michelangelo-Studien by Dagobert Frey (1920) and the volume on Borromini by Eberhard Hempel (1924).

12 L. Moretti, op. cit. in Casabella LXX (2006), 78–79.

13 Red Desert, director M. Antonioni, written by M. Antonioni and T. Guerra, starring M. Vitti, R. Harris, C. Chionetti, Italy, 1964.

14 Michelangelo: The Man with Four Souls, directors: L. Moretti, C. Conrad, Italy, 1964.

15 L. Moretti and Charles Conrad, presentation at the premiere of the movie ’Michelangelo‘ at Circolo del P Greco, Rome, Hotel Hilton, 14 July 1964 (Archivio Moretti Magnifico).

16 P. Portoghesi, B. Zevi (eds.), Michelangiolo architetto (Torino: Einaudi, 1964), with Giulio Carlo Argan, Franco Barbieri, Aldo Bertini, Sergio Bettini, Renato Bonelli, Decio Gioseffi, Roberto Pane, Paolo Portoghesi, Bruno Zevi, and Lionello Puppi.

17 A. M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, 1937.

18 A. Imperiale, “An ‘Other’ aesthetic: Moretti’s Parametric Architecture”, Log 44 (2018).

19 D’Arcy Thompson, On Growth and Form, Cambridge University Press, 1917

20 L. Sinisgalli, “Natura, Calcolo, Fantasia”, Pirelli 3 (1951), 54–55.

21 Opera Nazionale Balilla (ONB) was an Italian Fascist youth organization functioning between 1926 and 1937, when it was absorbed into the Gioventù Italiana del Littorio (GIL), a youth section of the National Fascist Party.

22 Robert Venturi, Complexity and Contradiction in Architecture, The Museum of Modern Art, New York, 1966.

23 Stanislaus von Moos, Venturi, Rauch & Scott Brown: Buildings and Projects (New York: Rizzoli, 1987), 244–246.

24 Spazio made its debut in July 1950 as a grandiose project, combining typographic and contributor quality, investment (editorial staff in Milan, Rome, and later Florence and Paris), and international reach (abstracts in English, French and Castilian). The director’s writings are numerous and of absolute importance. The editor-in-chief, Agnoldomenico Pica, is the author of several texts and is flanked by recurring collaborators: Umberto Bernasconi, Angelo Canevari, Gino Severini, Sisto Villa, Ugo Diamare. Over the course of seven issues the magazine promoted artists and architects such as Carlo Mollino, Giuseppe Capogrossi, Alberto Burri, Renzo Zanella, Antonio Gaudí, Adalberto Libera, Ugo Carrà, Vico Magistretti, Carlo De Carli, Ettore Sottsass, Atanasio Soldati, Gianni Monnet, Vittoriano Viganò, Franco Albini, Carlo Pagani, and Luciano Baldessari. The layout was masterful, governed with skilful technique, taste and originality by the director himself.

25 L. Moretti, “Eclettismo e Unità dei Linguaggi”, Spazio 1 (1950).

26 “For me personally, the search for this secret fabric as a link between the various elements of a work, which renders, or attempts to render, the single forms as parts interrelated with the others, in a consciously inseparable fabric, is the habitual way of considering a work, descending above all from the eighteen pages of Galois that opened the new objective world to us as a reality of pure interrelations.” “Ultime Testimonianze di Giuseppe Vaccaro”, L’Architettura Cronaca e Storia, 201 (1972).

28 L. Moretti, “Eclettismo e Unità dei Linguaggi”, Spazio 1 (1950).

29 L. Moretti, “Genesi di Forme dalla Figura Umana”, Spazio 2 (1950).

30 L. Moretti, “Forme Astratte nella Scultura Barocca”, Spazio 3 (1950).

31 L. Moretti, “Trasfigurazioni di strutture murarie”, Spazio 4 (1951).

32 L. Moretti, “Valori della modanatura”, Spazio 6 (1952).

33 L. Moretti, “Discontinuità dello Spazio in Caravaggio”, Spazio 5 (1951).

34 L. Moretti, “Strutture e sequenze di spazi”, Spazio 7 (1953).

35 L. Moretti, “Struttura come Forma”, Spazio 6 (1952).

36 Un Art Autre ou il s’agit de nouveaux dévidages du réel (Paris: 1952).

37 In the article “Structure comme forme”, published in the United States Lines Paris Review, Moretti defines the mathematical equivalent of what he sees in Capogrossi’s paintings as the theory of differences, which he develops into a method to design dynamic architectural forms.

42 Pierre Pascal (ed.), Apokalypsis Ioannoy, ou la Révélation de Notre Seigneur Jésus-Christ à Saint Jean, more often titled Apocalypsis Iesu Xristi, for the first time paraphrased in Alexandrian verse by Pierre Pascal (À l’enseigne du Coeur Fidèle, Rome, 1963).

43 The Apocalypse (Latin: Apocalipsis cum figuris) is a series of fifteen woodcuts by Albrecht Dürer published in 1498, depicting various scenes from the Book of Revelation, which rapidly brought him fame across Europe.

44 The Via Panisperna boys (Italian: I ragazzi di Via Panisperna) were a group of young scientists led by physicist Enrico Fermi. In Rome, in 1934, they made the famous discovery of slow neutrons, which later made possible the nuclear reactor and subsequently the construction of the first atomic bomb.

45 The magazine La Città: Architettura e Politica was founded and directed by Giuseppe De Finetti in 1945. Only four issues were published between 1945 and 1946. The aim was to discuss “the study of the future city”. The magazine mainly discussed the problems of reconstruction, the fate of the cities destroyed by the two wars, and the problems of traffic; “the task of rebuilding the city, of giving it back its usefulness and beauty”.

46 B. De Finetti, “Gli strumenti calcolatori nella Ricerca Operativa”, Civiltà delle Macchine, 5, 1 (1957), 18–21.

47 B. Zevi expounded his ideas regarding the relationship between architectural history and contemporary design in the opening lecture of the academic year, held in the Aula Magna of the Rectorate of the University of Rome on 18 December 1963.

48 In addition to Moretti, the team for the new Olympic Village in Rome was formed by Vittorio Cafiero, Adalberto Libera, Amedeo Luccichenti and Vincenzo Monaco.

49 L. Moretti, “Ricerca Matematica in Architettura e Urbanistica”, letter to Giulio Roisecco, director of Moebius magazine.

50 L. Moretti, Moebius, IV, 1 (1971), 30–53.

51 B. Zevi, “Cervelli Elettronici? No Macchine Calcolatrici”, in L’architettura Cronaca e Storia VI, 62 (1960), 508-509, (translation A. Imperiale)

52 L. Moretti, “Significato attuale della dizione Architettura”, Spazio, Fascicoli (1964). See also: Luigi Moretti, “L’Applicazione dei metodi della Ricerca Operativa nel campo dell’urbanistica”, Spazio, Fascicoli (1960); Luigi Moretti, “Strumentazione scientifica per l’urbanistica”, in Cultura e realizzazioni urbanistiche, Convergenze e divergenze, conference proceedings, held at Fondazione Aldo Della Rocca, Campidoglio, Consiglio Nazionale delle Ricerche (Rome: 1965).

53 B. Zevi, “Computer inceppato dal dannunzianesimo”, L’Espresso (July 29, 1973), reprinted in Cronache di Architettura 2, 982 (Bari: Laterza, 1979), 145.

54 L. Moretti, “Architecture 1965: Évolution ou Révolution”, L’Architecture d’Aujourd’hui, 119 (1965), 48.

55 “Tecnologia e problema ecologico”, round table with the participation of V. Bettini, S. Lombardini, L. Moretti and P. Prini, Civiltà delle Macchine 3–4 (1972).

56 Ibidem

Daniel Koehler, 2020.
Editorial Note
25/11/2020
Editorial Note, Mereologies, Mereology, The Bartlett
Mollie Claypool
University College London
mollie.claypool@gmail.com

Welcome to Prospectives.

Prospectives is an open-access online journal published by B–Pro at The Bartlett School of Architecture, UCL, dedicated to the promotion of innovative historical, theoretical and design research around architectural computation, automation and fabrication technologies. It brings the most exciting, cutting-edge exploration and research in this area onto a global stage. It also aims to generate cross-industry and cross-disciplinary dialogue, exchange and debate about the future of computational architectural design and theoretical research, linking academic research with practice and industry.

Featuring emerging talent and established scholars, as well as making all content free to read online, with very low and accessible prices for purchasing issues, Prospectives aims to unravel the traditional hierarchies and boundaries of architectural publishing. The Bartlett supports a rich stream of theoretical and applied research in computational design, theory and fabrication. We are proud to be leading this initiative via an innovative, flexible and agile website. Computation has changed the way we practice, and the theoretical constructs we use – as well as the way we publish.

Prospectives has been designed to be a part-automated, part-human, multiplicitous platform. You may come across things when using it that do not feel, well, quite human. You may not realise at first that you are looking at something produced by automation. And because every issue is unique yet sits within a generative framework, you may see the automation behind Prospectives do things that humans would not do.

Furthermore, how you engage with Prospectives is largely left up to you, the reader. You can read our guest-curated issue, use the tags to generate your own unique issue – an ‘issue within an issue’ – or read individual articles. You can also suggest new tags to be adopted by articles. We hope this provokes new ways of thinking about the role that participation, digitisation and automation can play in architectural publishing. Prospectives is a work-in-progress, and its launch is the first step towards fulfilling a vision for new kinds of publishing platforms for architecture that play with, and provoke, the discourse on computation and automation in architectural design and theory research.

Issue 01: Mereologies

“Mereologies”, or the plural form of being ‘partly’, drives the explorations bundled in the first issue of Prospectives, guest-curated by Daniel Koehler, Assistant Professor at the University of Texas at Austin and previously a Teaching Fellow at The Bartlett School of Architecture from 2016 to 2019.

Today, thanks to increased computational power, architects can design directly with the plurality of parts that a building is made of. What are the opportunities when built space is computed part-to-part? Partly philosophy, computation, sociology and ecology, and partly architecture, each text – or “mereology” – contributes a particular insight on part relations, linking mereology to peer-to-peer approaches in computation, cultural thought, and built space. First substantiated in his PhD at the University of Innsbruck, published in 2016 as The Mereological City: A Reading of the Works of Ludwig Hilberseimer (transcript), Daniel’s work on mereology and part-hood – as a nuanced interplay and blurring between theory and design – has been pivotal in preparing the ground for an emerging generation of architects interested in pursuing a new ethical and social project for the digital in architecture. The collection of writings curated here includes contributions from postgraduate architecture and urban design students (both his own, and others), architecture theorists, designers, philosophers, computer scientists and sociologists. The interdisciplinary nature of this issue demonstrates how mereology as a subject area can further broaden the field of architecture’s boundaries. It also serves as a means of encapsulating a contemporary cultural moment by embedding that expanding field in core disciplinary concerns.

The contributions were informed by research and discussions in the Bartlett Prospectives (B-Pro) at The Bartlett School of Architecture, UCL London, from 2016 to 2019, culminating in an Open Seminar on mereologies, which took place on 24 April 2019 as part of the Prospectives Lecture Series in B-Pro. Contributors to this issue include: Jordi Vivaldi, Daniel Koehler, Giorgio Lando, Herman Hertzberger, Anna Galika, Hao Chen Huang, Sheghaf Abo Saleh, David Rozas, Anthony Alvidrez, Shivang Bansal and Ziming He.

Acknowledgements

Prospectives has been a work-in-progress for almost 10 years. It was the dream of Professor Frédéric Migayrou (Chair of School and Director of B–Pro at The Bartlett School of Architecture) when he arrived at The Bartlett in 2011; I became involved in the project when I joined the School a year later. It has been a labour of love and perseverance since. It is due to the fervent and ardent support of Frédéric, Professor Bob Sheil (Director of School), and Andrew Porter (Deputy Director of B–Pro) that this project later received funding in 2018 to formalise the development of Prospectives. To the B–Pro Programme Directors Professor Mario Carpo, Professor Marcos Cruz, Roberto Bottazzi, Gilles Retsin and Manuel Jimenez: I am thankful for your guidance, advice and friendship, which has been paramount to this project. Colleagues such as Barbara Penner, Yeoryia Manolopoulou, Barbara Campbell-Lange, Matthew Butcher, Jane Rendell, Claire McAndrew, Clara Jaschke and Sara Shafei have all given me an ear (or a talking to!) at various stages when this project most needed it.

Finally, it is important to say that schools of architecture like the Bartlett have cross-departmental and cross-faculty teams who are often the ones who prepare the ground for projects such as Prospectives. The research, expertise and support of Laura Cherry, Ruth Evison, Therese Johns, Professor Penelope Haralambidou, Manpreet Dhesi, Professor Laura Allen, Andy O’Reilly, Gill Peacock, Sian Lunt and Emer Girling has been vital – thank you.

Mereologies Open Seminar: Round Table Discussion
25/10/2020
Architecture, Composition, Discussions & Conversations, Mereologies, Mereology, Open Seminars
Daniel Koehler, Mario Carpo, Emmanuelle Chiappone-Piriou, Giorgio Lando, Philippe Morel, Casey Rehm, David Rozas, Jose Sanchez, Jordi Vivaldi
University of Texas at Austin
daniel.koehler@utexas.edu

Participants: Emmanuelle Chiappone-Piriou, Jose Sanchez, Casey Rehm, Jordi Vivaldi, David Rozas, Giorgio Lando, Daniel Koehler with questions from the audience including Mario Carpo and Philippe Morel.

Daniel Koehler: The talks of the symposium were diverse and rich, but also abstract and intentionally external to architecture. At such a point it can be asked whether, how, and what role mereology can play in architecture. For the discussion we are joined by additional architects with unique angles on composition and part-thinking in their work: Casey Rehm, a computational designer, Jose Sanchez, who is working actively with digital models of participation, and Emmanuelle Chiappone-Piriou, an ecological thinker, experienced in the history of architecture. 

José Sanchez: My first reaction to the presentations is controversial. I think it presents well much of the work that is happening in architecture at the moment showing an interest in Mereology and discrete architecture. However, looking at the issue of parts is fundamentally a project where the idea of composition and the idea of structure is relevant as well. Patterns organised by parts can potentially deal with different forms of value. So, in a way, I find a surprising rejection in some of the ideas. 
Mereology seems to be giving us a framework for many different positions to coexist, and I think that we saw an excellent presentation of a much clearer advocacy for a form of relations that we might desire, one that has to do with pre-production – more like an agnostic framework that gives us a vocabulary. Are we interested in having advocacy, in having that intentionality, or are we more interested in what the ontology should be, or the framework that we are going to work in?

Daniel: I have learnt something from Giorgio’s book: when we define Mereology, it comes in different notions and ranges. On the one hand, you can see it as a distinct theory, as a specific project that has its own agenda. But also, and more crucially in the first place, you can take Mereology as a larger framework to talk about the relations of parts to wholes – simply, compositions. OK, but you might ask: why don’t we use the term composition directly? Because composition has a specific connotation in architecture and refers to the École des Beaux-Arts and its classical means of relating objects. It was rejected by the Bauhaus, which promoted a different form of composition with modern means. We could continue this through the history of architecture. In architecture, composition is a specific style but not a history. How could we compare those different modes of architectural composition? Can we think of something parallel to morphology or typology which would allow us to compare a plurality of relations between parts and wholes without defending a certain style? When the formal readings of parts turn into their own project, it might be quite valuable that one can figure a figuration without predefining its value by imposing a structure. That might be Mereology as a project. But first of all, the question is: how can we intentionally speak about parts? That would be Mereology as a methodology.

Giorgio Lando: I agree with Daniel that it is very important to distinguish various ways in which the word “Mereology” can be legitimately meant. In particular, the word “Mereology” stands in some cases for a specific theory of parthood and composition, and this theory may be such that structure has a role in it, or such that structure has no role in it. A historically important kind of mereological theory, Classical Mereology, is of the latter kind: it is deliberately blind to structure in providing existence and identity conditions for complex entities. In other cases, however, the word “Mereology” stands for an entire field of research, within which competing theories disagree about the role which structure should – or should not – play. If Mereology is seen as a field of research, then it is misleading to say that structure plays no role in it. This equivocation may explain some of José’s perplexities. 
However, some other perplexities are likely to persist even once we disambiguate the word “Mereology”, and we focus on Classical Mereology. Classical Mereology indeed includes some highly counterintuitive principles, and the usual reaction of the layman to these principles is to dismiss them rather quickly. For example, it might seem prima facie incredible that the order of the parts of something does not matter for the identity conditions of complex entities. However, this quick dismissal is usually determined by an equivocation: what is actually incredible is that the order of the parts of a building, or of a village, or of a car does not matter for its nature, for what that building, that village or that car is. However, this is not what Classical Mereology claims. What Classical Mereology claims is weaker and more reasonable: it says that the order of the parts does not matter for the identity conditions of complex entities, such as buildings, villages and cars. 
According to Classical Mereology, it never happens that there are two distinct entities which only differ because of their structure. Classical Mereology is not committed to the frankly incredible claim that structure has no impact on the nature of complex entities, but only to the more reasonable claim that complex entities are never distinct only in virtue of their structure. 
Moreover, this claim of Classical Mereology is restricted to single concrete entities. This might make the confrontation between Classical Mereology and other disciplines, such as architecture, troublesome, inasmuch as these disciplines are more interested in abstract types than in concrete tokens, more interested in repeatable entities than in their single, concrete instantiations. As far as I understand, when architects speak about the parts of a building or of a city, in most cases they are not speaking about a single piece of material and the way in which it is composed, but about a type of building and the fact that there are different types of buildings which result from the combination of the same types of architectural elements, differently combined. 
Once you move from this level of types and abstract entities to the level of concrete entities, the claim of Classical Mereology that structure has no role in the identity conditions of complex entities is much less incredible: consider a single, concrete building (not a type of a building) in a certain moment in time. In that moment, its parts are structured only in one way: the parts of a single, concrete building cannot be structured in two different ways at the same time.
Architects might legitimately retort that architecture is about repeatable types of buildings, about projects which can be applied several times. Given this approach, Classical Mereology is probably not the best tool for modelling repeatable types, and it is indeed desirable to look at different theories, which are not deliberately blind to structure. Mathematics is full of tools which can be employed to this purpose, including set theory and various kinds of algebras. Architects may legitimately wonder why philosophers focus on Classical Mereology instead, which is a serious candidate for the role of sound and exhaustive theory of parthood and composition for single concrete entities, but not for abstract types. The reason is probably a sort of deep-seated philosophical skepticism towards abstract entities, and the idea that fundamental reality consists of concrete entities, while abstract entities are less fundamental, or even a mere construct of the human mind, due to nominalist or minimalistic inclinations.
However, it is not the case that all the philosophers working on Mereology endorse the claims of Classical Mereology. In particular, in the literature of the last ten years, many prominent philosophers (such as Karen Bennett, Aaron Cotnoir and Katherine Hawley) have by contrast argued that Classical Mereology is completely misguided, and that we should also pay attention to structure within the realm of concrete entities. In my book I have defended the claim that, by contrast, Classical Mereology is a perfectly adequate theory of parthood and composition for concrete entities, but many other mereologists disagree with me. More in general, there is virtually no claim about parthood and philosophy about which every philosopher agrees! 

Mario Carpo: Giorgio, you have said that at some point Mereology merges with set theory. What exactly is the overlap or intersection here between Mereology and set theory? And conversely, where does Mereology separate itself from set theory, and what are the core differences?

Emmanuelle Chiappone-Piriou: Is there any way that relates Mereology to category theory?

Giorgio: For what concerns the relation between set theory and Classical Mereology (which, as we have seen, is a specific theory, which is mainly designed to characterise the realm of concrete entities and the way in which they are part one of another), the deepest difference consists in the transitivity of the relation: the relation of parthood in Classical Mereology is transitive, while the relation of elementhood in set theory is not transitive. Thus, if a first entity is part of a second entity and the second entity is part of a third entity, then – according to Classical Mereology – the first entity is part of the third entity. By contrast, it can happen that something is an element of a set, which in turn is an element of a second set, while that something is not an element of the second set. Sets are stratified: you have typically sets of sets of sets. In Classical Mereology, as a consequence of the transitivity of parthood, there are no stratified complex entities. 
While there are many interesting ties between set theory and Mereology, I am unaware of any connection between Mereology and category theory.

Mario: Can you give us maybe an example, like three inclusions in set theory and three inclusions in Mereology?

Giorgio: Consider the set of Italians. I am a member of this set. The set of Italians is also a member of the set of European people. However, I am not a member of this latter set, inasmuch as I am not a European people (I am not a people at all!). We thereby obtain a failure of transitivity of elementhood among sets. Nothing similar is admitted by Classical Mereology: I am part of the fusion of Italians, the fusion of Italians is part of the fusion of Europeans, and I am part of the fusion of Europeans as well.
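Giorgio’s example can be restated as a few lines of code. This is only an illustrative sketch, not something discussed in the seminar: sets stand in for peoples, and a mereological fusion is modelled, for the purpose of the example alone, as the set of its atomic parts, with parthood as the subset relation.

```python
# Set-theoretic elementhood is NOT transitive: an element of an
# element need not be an element of the outer set.
giorgio = "Giorgio"
italians = frozenset({giorgio})           # the Italian people, a set of persons
european_peoples = frozenset({italians})  # a set of peoples, not of persons

assert giorgio in italians                # Giorgio is an Italian
assert italians in european_peoples       # the Italians are a European people
assert giorgio not in european_peoples    # but Giorgio is not a people

# Mereological parthood, by contrast, IS transitive. Minimal model:
# represent each fusion by the set of its atoms, and define parthood
# as the subset relation between atom-sets.
def part_of(x, y):
    return x <= y  # subset test on atom-sets

italian_fusion = frozenset({"Giorgio", "Maria"})
european_fusion = italian_fusion | frozenset({"Pierre"})

assert part_of(frozenset({"Giorgio"}), italian_fusion)
assert part_of(italian_fusion, european_fusion)
assert part_of(frozenset({"Giorgio"}), european_fusion)  # transitivity holds
```

The asymmetry is visible in the two relations used: `in` (elementhood) adds a level of nesting at each step, while `<=` (the stand-in for parthood) flattens everything onto one level of atoms.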

Mario: So, in set theory, these don’t happen?

Giorgio: It does not happen in the sense that it does not always happen. There are indeed cases in which the same elements appear at different levels of the set-theoretical hierarchy, but this does not happen in general, and it is not warranted by any principle of set theory. There are actually many varieties of set theory, but in no variety of set theory is elementhood transitive.

Philippe Morel: My feeling is that Mereology is a matter of “technicalities” about a relationship that exists in set theory. If you look at inclusion as the property you are also looking for in Mereology, I don’t really get what Mereology brings on top of the purely mathematical “canonical” set theory. It gives me the feeling that Mereology is foremost a way (or a “trick”) for philosophers to take control of a theory that escapes them because it is a fully mathematical theory… So, this is why I have a bit of a problem with this notion, because, again, technically speaking, I still can’t make a clear distinction between the philosophical property and the mathematical property. It is like a layer of metaphysics brought on top of the mathematical theory, and of course I can’t consider this a great addition. My second issue is more of a general remark. Why don’t you speak about relational databases like SQL databases? At some point, to my understanding, they are a very practical implementation of what Mereology describes, because it is all about belonging, etc.
Though, I find the mereological approach interesting, especially if it prevents a reintroduction of composition, as I see a danger of bringing back this concept of composition in architectural discourse.

Giorgio: You are right: set-theoretical inclusion (i.e., the relation of being a subset) has precisely the same formal features as mereological parthood. However, set-theoretical inclusion is not the fundamental relation of set theory: it is definable in terms of set-theoretical elementhood, while set-theoretical elementhood is not definable in terms of set-theoretical inclusion. Thus, the fundamental relation of set theory is elementhood and is not transitive, while the fundamental relation of Classical Mereology is parthood, which is transitive. 
There have been several attempts (for example in Parts of Classes, a book by David Lewis) to exploit the formal analogy between mereological parthood and set-theoretical inclusion in order to reduce set theory to Classical Mereology. The biggest obstacle for this project is posed by set-theoretical singletons, i.e. sets with a single element. The relation between these single elements and their singletons is not easily reducible to Mereology: it is a kind of brute stratification (a form of structure), which has no place in Classical Mereology.
I agree with Philippe’s remark that Classical Mereology is nowadays a mathematically uninteresting theory, in spite of the fact that it has been originally elaborated by great mathematicians such as Stanisław Leśniewski and Alfred Tarski: it is simply a complete algebra without a zero object. The reason why philosophers discuss Classical Mereology does not depend on its alleged mathematical originality: some philosophers (including me) think that this very simple and unoriginal mathematical theory is the sound and complete theory of parthood and composition, at least in the realm of concrete entities. Thus, the reason to be interested in Classical Mereology is not its mathematical originality, but its plausible correspondence with the way in which parthood and composition really work.
As far as datasets are concerned, I think that it is prima facie preferable to construe them as sets rather than as mereological wholes. Indeed, the distinction between inclusion and elementhood is pivotal for datasets. This distinction characterises set theory, while there is no analogous distinction in Classical Mereology.
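For readers who want the formal backdrop to these remarks, here is a compact sketch of Classical Mereology as it is standardly axiomatised in textbook presentations. The notation is ours, not the seminar’s: \(x \leq y\) reads “x is part of y”, and \(x \circ y\) (“x overlaps y”) abbreviates \(\exists w\,(w \leq x \wedge w \leq y)\).

```latex
\begin{align*}
& x \leq x && \text{(reflexivity)}\\
& (x \leq y \wedge y \leq x) \rightarrow x = y && \text{(antisymmetry)}\\
& (x \leq y \wedge y \leq z) \rightarrow x \leq z && \text{(transitivity)}\\
& y \not\leq x \rightarrow \exists z\,(z \leq y \wedge \neg(z \circ x)) && \text{(strong supplementation)}\\
& \exists x\,\varphi(x) \rightarrow \exists y\,\forall z\,\bigl(z \circ y \leftrightarrow \exists x\,(\varphi(x) \wedge z \circ x)\bigr) && \text{(unrestricted fusion)}
\end{align*}
```

Models of these axioms are, up to isomorphism, complete Boolean algebras with the zero element deleted – the precise sense of the “complete algebra without a zero object” mentioned above: there is no null individual, no empty whole.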

Daniel: I would like to extend on Giorgio’s point that Mereology offers mathematically an algebra without a zero object. Mereology starts with individuals without defining a set in the first place. In Mereology, you can’t have an empty set, a null set, a zero object. You can’t have a building without building parts. You need parts for thinking a building. This will become more dominant in future because with higher computing capabilities we are able to compute more and more without the need of abstract models. Take as an example the Internet of Things: a building environment where every building part has sensors and is connected. That means that very literally building parts can talk with each other. Such a building environment also participates, and will offer its own economy. Here, value begins with a building part as an active participant in the market. Already in daily BIM practice it is impossible to think of a building without its parts. So, we should also stop thinking of buildings as predefined sets.
To my understanding, a database is constructed on a very specific ontological worldview. Today’s databases take Composition-as-Identity. This principle says that everything is included in the distribution of data points. Nothing above the distribution of atoms exists, not any compound meaning. Compounds, however, are fundamental to architecture. Just think of a typology; you can’t reduce a façade to windows. What does a courtyard actually consist of? This of course does not relate to math but to philosophy. It is controversial, otherwise it would not be philosophical. Every building is controversy, or call it multiplicitous, because architecture is pre-logical in a sense. We can’t reduce architecture to math. It is also the point where the discussions on beauty depart in architecture. With ease you can describe a building in the first instance through the distribution of its cells. You can describe a housing project through the part-relation of a shared wall between two flats alone. But how do you describe the mountain which Moshe Safdie designed by stitching together the shared walls of flats in such a way that their roofs turn into terraces? Architecture starts where it exceeds simplicity. Yes, we can design buildings with the use of databases with ease. We are able today to compute buildings without structures. But where are their compound meanings? It will be fundamental to find a way to compute what is common, what is collective between the parts. Therefore, I think we should be suspicious of databases or any kind of structural models which were thought without any compound meaning, so to say, without architecture in the first place.

Jose: I’ll bring back some of the points that Jordi made to the conversation. Jordi, you brought up Graham Harman’s concept of a radical present. I find it somewhat controversial, in that it seems to eradicate a form of speculation, a form of potential, a form of endless abstractness. If we’re moving from classical Mereology towards a more abstract sense, I think that a lot of the architectural production we discuss – especially with discrete projects that have to do with parts – has to do with potential encounters of entities, and is not purely defined by the actual instantiation, the actual encounter, of entities. So we evaluate and design also thinking of encounters that might never happen. Under the umbrella of that radical present, I wonder: what do you see in them?

Jordi Vivaldi Piera: I would say that the term “potential” is misleading. It generally refers to an object’s capacity to produce other realities, but at the same time it undermines the possibility of novelty, because it assumes that the object already contains what it will become. In this sense, I emphasise radical presence in order to understand which of an object’s “actualities” permit the production of novelty, rather than asking which hypothetical novelties it contains and thereby undermines. In this sense, I interpret potentiality as a particular type of actuality.

Casey: I was interested in Daniel’s point; it reminds me of a recent article by Luciana Parisi called “Reprogramming Decisionism”, where she talks about machine learning and neural networks, and how these technologies in essence assemble. With them, facts accumulate which say that something is probably something else. I’m interested in this relative to Mereology, and also in the statement that a human deals with abstraction but a machine deals with simple facts. How does the mereological project deal with probability – that something is probably something, rather than not? The models you have shown us seem to rely on clear logic, but how does Mereology deal with improbability? I think this is also something the design profession is going to face in relation to the kinds of machines that deal with things.

Giorgio: As far as probability is concerned, I do not envisage any specific, direct problem stemming from the interaction of probability and Mereology. A mereological claim can have a certain degree of probability, and the probability at stake can be either objective/statistical or subjective. In neither case are there specific problems: mereological claims are, from this viewpoint, on a par with other claims. 
While probability is not directly troublesome, there are some potential problems in the vicinity: Classical Mereology does not countenance the hypothesis that an entity is part of another only to a certain degree. Consider a cloud in the sky: the water molecules in the centre of the cloud are definitely parts of the cloud, and the molecules far away from the cloud definitely are not parts of the cloud. However, there seems to be a grey zone of molecules, which are neither definitely within the cloud nor definitely out of it. 
These scenarios can be treated in various ways, and the approach depends on the adoption of a certain theory of vagueness. According to the so-called epistemic theory of vagueness (set forth for example by Timothy Williamson), the fact that we are unable to identify the boundaries of a cloud depends on our epistemic limitations (we are unable to identify the boundaries of the cloud, but this does not show that the cloud has in itself no definite boundaries). According to the semantic theory of vagueness (in the version adopted for example by David Lewis), there are actually myriads of clouds and each cloud has precise boundaries; however, our discourses about the cloud are semantically underdetermined, inasmuch as we have not decided which among the myriads of clouds in the sky we are speaking about. Both the epistemic theory of vagueness and the semantic theory of vagueness are perfectly compatible with Classical Mereology, because they locate vagueness in our language or in our epistemic practices and not in reality: in reality, given two entities, either the former definitely is part of the latter, or the former definitely is not part of the latter.  
However, the so-called ontological theory of vagueness (Michael Tye is one of its most ardent advocates) has recently gained some traction. According to the ontological theory of vagueness, vagueness is in reality, and this happens also in the mereological case of the cloud: the molecules at the periphery of the cloud are neither definitely parts nor definitely non-parts of the cloud. The adoption of the ontological theory of vagueness indeed requires a revision of Classical Mereology. According to Classical Mereology, for example, two complex entities are identical if and only if they have the same proper parts (the proper parts of something are those parts of it which are not identical to it): but this principle is not applicable to entities which have no definite domain of proper parts. According to the ontological theory of vagueness, this is what happens in the case of the clouds and in similar cases. To sum up: probability and various theories of vagueness (such as the epistemic theory and the semantic theory) do not require any departure from Classical Mereology; only the ontological theory of vagueness requires a departure. 
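The extensionality principle Giorgio invokes – complex entities are identical if and only if they have the same proper parts – can be sketched in the standard notation of the mereological literature (P for parthood, PP for proper parthood; a conventional rendering, not drawn from the discussion itself):

```latex
% Proper parthood:
PP(x,y) := P(x,y) \wedge x \neq y
% Extensionality, restricted to complex entities (atoms trivially
% share an empty stock of proper parts, so the antecedent excludes them):
\big(\exists z\,PP(z,x) \vee \exists z\,PP(z,y)\big)
\;\rightarrow\;
\big(x = y \leftrightarrow \forall z\,(PP(z,x) \leftrightarrow PP(z,y))\big)
```

On the ontological theory of vagueness, the quantifier over proper parts has no definite domain for entities like the cloud, which is why the principle fails to apply.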

Emmanuelle: It appears we are navigating and combining different sets of discourses that may or may not be consistent with one another, nor with Mereology as it appears here to be merged into a compositional paradigm: we are simultaneously addressing materiality and formal systems, social coherences and principles of governance.
I believe that, as in the 1950s and 1960s, architecture faces the risk of talking itself into an impasse, by resorting to certain languages and positions that may induce, and reproduce, a reification of social patterns. 
In this context I often think of a remark from Michel Ragon, the French architecture critic who wrote about and promoted experimental architecture in the 1960s. Looking back at those projects twenty years later, he asked himself how a “life-like” macro-structure could be designed in advance, and whether it could be designed at all, considering life is “rightly made of chance and unpredictability”. This remains a valid and important question, updated by our resort to instruments that allow us to think of, and manipulate, the world in terms of particles and parts. Quantum physics teaches us that there is irreducible uncertainty in our physical existence, an inherent contingency: there is a fundamental limit to the precision with which you can measure a particle, hence a limit to the precision with which you can grasp the world. How can this uncertainty be taken into account when dealing with matter or with information? And, when dealing with parts, how can we do so without first defining them? How can we account for interactions and relationality? How can we account for change, for performance and transformation, all at once?
This brings me to a second point, which stems from this a priori impossibility of capturing the image of life without “to some extent captur[ing] life itself” (Ragon). I understand that Mereology makes a claim to exhaustibility and generality. But what if we take this claim into the architectural project? Do we think that we can actually design a system, a structure or a whole whose formal principles allow it to be exhaustive? Following Gödel, I understand that you can have either exhaustibility or consistency, but not both. 

Mario: Can I go back to the branch of theoretical philosophy to cover things? We more or less know why we in the design profession became interested in particles, and in the relations between particles, in recent years. It seems he (Daniel) came across the term Mereology. He hijacked it and imported it into the architectural discourse – like we always do. We take a more refined tool which comes from another discipline, and then we appropriate it and give it another meaning, which means nothing to you (Giorgio). We have been doing this for a long time. This part of the story we know. The part of the story that we don’t know, and that you can tell us in two lines, is: did this happen with Mereology too? Can you give us an outline of the history of Mereology in contemporary philosophical discourse? When I was a student nobody mentioned Mereology, and now everyone does. When did that happen? Where does this come from? And from a distance, from a critical point of view, why is it that you are talking about Mereology right now, while many years ago nobody did?

Giorgio: The word ‘Mereology’ is rather new and was made relatively popular by Stanisław Leśniewski at the beginning of the 20th century (according to Leśniewski, Mereology was more properly a branch of logic). However, philosophers (and in particular metaphysicians) have always used the notion of part and set forth theories about it. Plato’s theory of parthood has been recently analysed and defended by Verity Harte, while Aristotle’s theory of parthood is considered by several neo-Aristotelian metaphysicians a viable option in the contemporary mereological debate.

Mario: But, in math, there are fractions, proportions, modularity. These are all today discussed as mereological questions.

Giorgio: An important difference between many past theories of parthood (in particular in Ancient and Medieval philosophy) and contemporary Mereology concerns the expected domain of application: Plato, Aristotle, Abelard and Ockham were, for example, mainly interested in the parthood relation which connects a property with an individual instantiating it, or two properties with one another. These instances of parthood were important within metaphysics itself, for example when a theory of ideas or universals was elaborated. By contrast, contemporary Mereology is more focused on the concrete, spatio-temporal parts of concrete entities.
However, no matter what the original domain of application of the parthood relation was, the theories of parthood became progressively more abstract and formal: in some works of Leibniz (17th century), for example, it is possible to find a formally complex and highly abstract theory of parthood, whose principles are expected to hold irrespective of the domain of application. This is also the case of the theory of parthood developed by Bernard Bolzano in the 19th century. Thus, in spite of the fact that the word ‘Mereology’ became popular only in the 20th century, contemporary Mereology has solid roots in the history of philosophy. 
Nonetheless, it is true that – for example – forty years ago Mereology was much less popular than it is nowadays. This may have depended on the alternating fortunes of metaphysics (the wider branch of philosophy to which Mereology belongs) in analytic philosophy. Forty years ago analytic philosophers, in continuity with logical positivism, often despised metaphysics as an obsolete leftover from the past. This has changed dramatically in recent decades, thanks to the influence of thinkers such as David Lewis and Saul Kripke, and metaphysics is now back at centre stage in contemporary analytic philosophy. The renewed popularity of Mereology is an aspect of the renewed popularity of metaphysics in general. It also depends on the fact that contemporary metaphysicians often attach great importance to the concepts of existence and identity. Classical Mereology has the ambition to provide existence and identity conditions for every complex entity. This makes Classical Mereology highly interesting for contemporary metaphysicians. 

Philippe: Let’s make a comparison with the discipline of architecture. In architecture, this last trend could be compared to what happened with Christopher Alexander, or before with Mies and then Peter Eisenman. The challenge for me is that I don’t consider Mereology an uninteresting philosophy in architecture, I just see it as a highly modernist theory.
My question is the following. According to you (Giorgio), in the field of philosophy, do you consider Mereology a modernist philosophical trend, or something that has nothing to do with philosophical modernism? Because in architecture, my feeling is that it directly corresponds to a highly modernist attitude, and the fact is that this modernist attitude is highly reductionist. It defines the most elemental aspect of things, so it’s pure reductionism, and it’s still based on some concept of – maybe not order, but at least some attempt at bringing order into things (though sometimes an “unpredictable order”).
For me, that is super modernist, and my feeling is that we are living in a world built on this reductionist modernity. Right after this reduction – and we already had it in some form a hundred years ago – let’s say after 1950, we were already moving in the opposite direction: an explosion of models, now based on statistical methods, on big data, as related by Mario in his book. So again, I’m not saying Mereology can’t be an important, or at least useful, platform for debate; I am just wondering about the inherent nostalgia of going backward in the ordering of reality – in history. Maybe we can – and should – just accept absolute chaos and trillions of trillions of terabytes of data as a fact, without trying to put some order into it. So, my question finally, on a purely philosophical level, is: do you consider Mereology modernist, or perhaps a new-modern or late-modern philosophical theory, or something which has nothing to do with that?

Giorgio: There is indeed a modernist component in Mereology: the deliberate blindness to structure, which characterises Classical Mereology, is motivated by a form of “taste for desert landscapes”, which in turn might be seen as the outcome of a modernist appetite for order. However, it should also be considered that Classical Mereology includes either as an axiom or as a theorem (according to the way in which Classical Mereology is axiomatised) the principle of Unrestricted Composition, according to which – given some entities, no matter how sparse and gerrymandered they are – they compose something. Due to Unrestricted Composition, Classical Mereology is committed to the existence of all sorts of awkward entities, such as the fusion of my left arm, Barack Obama’s nose and the Great Pyramid of Giza!
On the other hand, a rather “modernist” thesis, which is often associated with Classical Mereology, is the thesis of Composition as Identity. According to the thesis of Composition as Identity, any whole is strictly speaking identical to its parts and is – so to say – no addition to being, with respect to them. This mereological thesis is expected to warrant a form of ontological economy, and can be seen, as a consequence, as the outcome of an appetite for order. 
However, Composition as Identity is not derivable from Classical Mereology, and is a highly problematic thesis in itself. A whole (for example, a chair) and its parts (the four legs, the back and the seat) are mutually discernible, inasmuch as – for example – the chair is one entity, while the four legs, the back and the seat are six entities. If they are discernible (i.e., if they have different properties), then it is not easy to make sense of the claim (entailed by Composition as Identity) that they are identical.
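The two principles Giorgio contrasts can be sketched formally, using the plural-logic notation standard in the literature (O for overlap, ≺ for “is one of”, Fu for fusion; the symbols are conventional, not taken from the transcript):

```latex
% Fusion: y fuses the plurality xx iff y overlaps exactly those things
% that overlap some member of xx
Fu(y,xx) := \forall z\,\big(O(z,y) \leftrightarrow
            \exists w\,(w \prec xx \wedge O(z,w))\big)
% Unrestricted Composition: every non-empty plurality has a fusion,
% however sparse or gerrymandered its members
\exists w\,(w \prec xx) \;\rightarrow\; \exists y\,Fu(y,xx)
% Composition as Identity adds the further, contested claim y = xx.
% The chair example: the chair is one entity, its four legs, back and
% seat are six, so whole and parts are discernible in number --
% which is the difficulty Giorgio raises for the identity claim.
```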

Casey: I think you have covered everything I wanted to say. Based on this, I don’t think there is anything reductive suggested by composition. I think it is a ridiculous idea that unrestricted composition would suggest that just any property could be part of something.
My colleague Daniel is doing the mereological project, but it is certainly nothing reductive. I think it’s more that there is a real explicitness and straightforwardness about the roles and functions of things – i.e., function isn’t the exclusive part of the composition, especially according to the kind of lectures we saw today.

Mario: I have a suspicion. I see one main point of this symposium as being that, in the theory of parts of today’s computation, the parts we are dealing with are new in the history of architectural theory, because they don’t need rules of application. These parts are different from Alberti’s or Eisenman’s because, for the first time ever in the history of humankind, or the history of design, we can deal with parts without any rules or orders in them – whether proportions, fractions, modules, geometrical symmetry, proportional symmetry, etc.
In the history of design, all these tricks and tools were needed to make sense out of parthood. We had to invent structures, like reductionism or data compression, to put some order into the chaos generated by the random accumulation of parts – to make order out of chaos; to manage parts in a “rational”, i.e. intelligible, way: a way that made sense for the limited data-management skills of our own minds. And now, for the first time ever, in many practical instances we are getting particles just as they are. We can put them flat on the table and each one of them stands, and that is all that we need. This is the nobility of the parts that you’re dealing with. This is the novelty: parts without anthropocentric reduction and human-made intelligibility. 

Casey: Are you saying that there are no rules for these parts, or just that the rules are inherent in the parts rather than applied to the total? I’m suspicious of saying the former when dealing with parts. And again, we still have rules, because we have generated something that is mereological. There are still rules, but the rules are in the parts rather than being imposed on them. So actually, it is just a question of where the rules are located in the design process. 

Mario: There must be rules of some sort somewhere, but the main difference – and again, I follow my suspicion – is that we no longer need rules to manage the accumulation of parts beyond the limit of computational (i.e. machinic) retrieval. We don’t need to structure them in symmetrical parthood or any other strategy for part retrieval. We always needed some superposition over the structure to reduce the complexity of what was so big that we couldn’t deal with it. Now, when dealing with something so big, we can just let the machines deal with it. The generation process must have some rules somewhere, but my suspicion is that these are no longer needed for any practical human purpose. Now we are capable of managing any messy, random heap of disconnected parts – because in fact we don’t have to deal with that mess any more: we have machines to do it in our stead.

Emmanuelle: One simple question would be: what kind of parts are we dealing with? Are they not themselves wholes, composed of other parts and entering into larger or different wholes? Are we talking solely about human-made parts, which designers can generate, craft and master, or are we considering opening up these wholes to other domains; and if so, to what degree and within which limits are they potentially extendable?
You’ll excuse me for coming back to my previous point, regarding the notion of uncertainty and how it can be taken into account; let’s hypothesise that the wholes we consider are governmental ensembles. The researcher in philosophy of law Antoinette Rouvroy identifies how uncertainty and unpredictability are systematically treated as risk. She analyses how the cybernetic and algorithmic order that underlies our contemporary forms of governance attempts to systematically and preemptively tackle risk in order to eradicate it. On the other side, there is a reverse relationship to risk that, against risk management, consists in exploiting it and profiting from it, as one can see in high-frequency trading. Risk here appears to be the motor of speculation; it plays on the asymmetric distribution of information within a system.
But if you consider chance, and hence uncertainty and unpredictability, as being not epistemic – as in both aforementioned cases – but objective, and furthermore, if you consider it to be at the source of all life in the biosphere – as the Nobel Prize-winning biologist Jacques Monod showed – how can it be taken into account and integrated in the elaboration of hybrid parts and wholes? Embracing this objectivity could allow us to conceptualise a commonality based on an open, decentralised notion of whole that is not subjected to social constructivism.

Giorgio: I owe an answer to Emmanuelle about unpredictability. Unpredictability can be either an epistemic phenomenon (it happens when some human subjects are de facto unable to foresee how things will go, and their inability to do so might be due to their contingent cognitive limitations), or a metaphysical phenomenon (there is metaphysical unpredictability when something is objectively indeterminate, independently of any fact concerning human subjects). If unpredictability is seen as an epistemic phenomenon, then it does not require any modification of Mereology: the fact that some human subjects are unable to determine whether x is part of y has no impact on the circumstance of whether, objectively, x is part of y. 
The philosophical consequences of quantum indeterminacy are hard to interpret: according to some interpretations, it is indeed a kind of objective, metaphysical indeterminacy. However, as far as I can see, quantum indeterminacy does not concern mereological relations. Thus, it seems to me that neither epistemic nor metaphysical unpredictability have any specific bearing on Mereology.

Daniel: Unpredictable and indeterminate, like a good building – it seems to me that Emmanuelle and Giorgio have overcome the boundaries of the round table. I would like to use the moment to thank you all for your insights and contributions, and to round off the discussion with an open ending.

Mereologies
Daniel Koehler, 2020
Introduction to Issue 01: Mereologies
Architecture, Architecture Theory, Discrete Architecture, Mereologies, Mereology, Philosophy
Daniel Koehler
University of Texas at Austin
daniel.koehler@utexas.edu

Part relationships play an important role in architecture: whether as an aspect of a Classical order, a harmonious joining of building components, a representation of space, a partition of spaces, or as a body that separates us and identifies us as individuals. From the very outset, every form of architecture begins with an idea of how parts come together to become a whole, and with an understanding of how this whole relates to other parts. Architecture first composes a space as part of a partitioning process, well before defining a purpose and before using any geometry.

The sheer performance of today’s computational power makes it possible to form a world without a whole, without any third party or third object. Ubiquitous computing fosters peer-to-peer – or better, part-to-part – exchange. It is not surprising, then, that today’s sharing represents an unfamiliar kind of partiality. From distributive manufacturing to the Internet of Things, new concepts of sharing promise systematic shifts, from mass-customisation to mass-individualisation: the computationally enabled participations are foundational. It is no longer the performance or mode of an algorithm that drives change but its participatory capacities. From counting links, to likes, to seats, to rooms: tools for sharing have become omnipresent in our everyday lives. Thus, that which is common is no longer negotiated but computed. New codes – not laws or ideologies – are transforming our cities at a rapid pace; but what kind of parthood is being described? How does one describe something only through its parts today? To what extent do the automated processes of sharing differ from the partitioning of physical space? How can we add to, intervene in and design such parts through architecture?

The relationship between parts and their whole is called Mereology. In this issue of Prospectives, mereology’s theories and the specifics of part-relations are explored. The differences between parts and the whole, the sharing of machines and their aesthetics, the differences between distributive and collective, their ethical commitments, and the possibilities of building mereologies are discussed in the included articles and interviews.

Just as mereology describes objects from their parts, this issue is partial. It is not a holistic proposal, but a collection of positions. Between philosophy, computation, ecology and architecture, the texts are reminders that mereologies have always been part of architecture. Mereology is, broadly, a domain that deals with compositional possibilities and relationships between parts. Such an umbrella term – analogous to morphology, typology or topology – is still missing in architecture. Design strategies that depart from the part-to-part or peer-to-peer are uncommon in architecture, not least because there is (almost) no literature that explores these topics for architectural design. This issue hopes to make the extra-disciplinary knowledge of mereology accessible to architects and designers, but also wishes to identify links between distributive approaches in computation, cultural thought and built space.

The contributions gathered here were informed by research and discussions in the Bartlett Prospectives (B-Pro) at The Bartlett School of Architecture, UCL London from 2016 to 2019, culminating in an Open Seminar on mereologies which took place on 24 April 2019 as part of the Prospectives Lecture Series. The contributions are intended as a vehicle to inject foundational topics such as mereology into architectural design discourse.

The Contributions

This compilation starts with Giorgio Lando’s text “Mereology and Structure”. Lando introduces what mereology is for philosophers, and why philosophers discuss mereological theses and disagree with one another about them. His text focuses in particular on the role of structure in mereology, outlining that, from a formal point of view, part relations are freed from structure. He argues that independence from structure might be the identifying link between mereology and architecture. The second article, “From Partitioning to Partaking”, is a plea for re-thinking the city. Daniel Koehler’s essay points to the differences between virtual and real parts. Koehler observes a new spatial practice of virtual representations that renders previous models of urban governance obsolete. He argues that the hyper-dimensional spaces of a big data-driven economy demand a shift from a partitioning practice of governance to more distributed forms of urban design. In “Matter versus Parts: The Immaterialist Basis of Architectural Part-Thinking”, Jordi Vivaldi Piera highlights the revival of matter in parallel to the discrete turn in contemporary discourses on experimental architecture. The essay gravitates around the notion of part-thinking in association with the notion of form. Fluctuating between the continuous and the discrete, the text sets out requirements for radical part-thinking in architecture. As a computational sociologist, David Rozas illustrates the potential of decentralised technologies for democratic processes at the scale of neighbourhood communities. After an introduction to models of distributed computation, “Affordances of Decentralised Technologies for Commons-based Governance of Shared Technical Infrastructure” draws analogies to Elinor Ostrom’s principles of commons governance and shows how those can be computationally translated, turning community governance into fully decentralised autonomous organisations.

Departing from the Corbusian notion of a ‘machine for living’, Sheghaf Abo Saleh defines a machine for thinking. In “When Architecture Thinks! Architectural Compositions as a Mode of Thinking in the Digital Age” Abo Saleh states that the tectonics of a machine that thinks is brutal and rough. As a computational dialogue, she shows how roughness can enable posthumanism which, in her case, turns “tempered” parts into a well-tempered environment. Ziming He’s entry point for “The Ultimate Parts” is the notion of form as the relations between parts and wholes. He’s essay sorts architectural history through a mereological analysis, proposing a new model of part-to-part without wholes. Shivang Bansal’s “Towards a Sympoietic Architecture: Codividual Sympoiesis as an Architectural Model” investigates the potential of sympoiesis. By extending Donna Haraway’s argument of “tentacular thinking” into architecture, the text shifts focus from object-oriented thinking to parts. Bansal argues for the limits of autopoiesis as a system and conceptualises spatial expressions of sympoiesis as a necessity for an adaptive and networked existence through “continued complex interactions” among parts.

Merging aspects of ‘collective’ and ‘individuality,’ in “Codividual Architecture within Decentralised Autonomous System” Hao Chen Huang proposes a new spatial characteristic that she coins as the “codividual”. Through an architectural analysis of individual and shared building precedents, Huang identifies aspects of buildings that merge shared and private features into physical form. Anthony Alviraz’s paper “Computation Within Codividual Architecture” investigates the history and outlook of computational models in architecture. From discrete to distributed computation, Alviraz speculates on the implications of physical computation, where physics interactions overcome the limits of automata thinking. In “Synthesizing Hyperumwelten”, Anna Galika transposes the eco-philosophical concept of a Hyperobject into a “Hyperumwelt”. While the Hyperobject is a closed whole that cannot be altered, a Hyperumwelt is an open whole that uses objects as its parts. The multiple of a Hyperumwelt offers a shift from the design of one object towards the impact of multiple objects within an environment.

Challenging the notion of discreteness and parts, Peter Eisenman asks in the interview “Big Data and the End of Architecture Being Distant from Power” for a definition of the cultural role of the mereological project. Pointing to close readings of postmodern architecture that were accelerated by the digital project, Eisenman highlights that the demand for a close reading is distanced from the mainstream of power. The discussion asks: ultimately, what can an architecture of mereology critique? The works of Herman Hertzberger are an immense resource on part-thinking. In the interview “Friendly Architecture: In the Footsteps of Structuralism”, Herman Hertzberger explains his principle of accommodation. When building parts turn into accommodating devices, buildings turn into open systems for staging ambiguity.

The issue concludes with a transcript from the round table discussion at the Mereologies Open Seminar at The Bartlett School of Architecture on 24 April 2019.

Acknowledgments

The contributions evolved within the framework of Bartlett Prospectives (B-Pro) at The Bartlett School of Architecture, UCL. I want to thank Frédéric Migayrou for his vision, commitment and long years of building up a research program, not only by architecture but through computation. I would like to thank Roberto Bottazzi for the years of co-organising the Prospectives Lecture Series, where plenty of the discussions that form the backbone of this issue took place. Thanks to Mario Carpo for raising the right question at the right time for so many people within the program, thanks to Andrew Porter for enabling so many events, to Gilles Retsin, for without the discrete there are no parts, Mollie Claypool for the editing and development of Prospectives journal, and Vera Buehlmann, Luciana Parisi, Alisa Andrasek, Keller Easterling, Matthew Fuller, John Frazer, Philippe Morel, Ludger Hovestadt, Emmanuelle Chiappone-Piriou, Jose Sanchez, Casey Rehm, Tyson Hosmer, and Jordi Vivaldi Piera for discussions and insights. 

I want to thank Rasa Navasaityte, my partner in Research Cluster 17 at B-Pro, for driving the design research. Thanks also for the research contributed by the researchers and tutors Christoph Zimmel, Ziming He, Anqi Su and Sheghaf Abo Saleh, and to all participants, in particular: Genmao Li, Zixuan Wang, Chen Chen, Qiming Li, Anna Galika, Silu Meng, Ruohan Xu, Junyi Bai, Qiuru Pu, Anthony Alviraz, Shivang Bansal, Hao-Chen Huang, Dongxin Mei, Peiwen Zhan, Mengshi Fu, Ren Wang, Leyla El Sayed Hussein, Zhaoyue Zhang, Yao Chen, and Guangyan Zhu.

The issue includes articles that evolved from thesis reports conducted in the following clusters: Ziming He from Research Cluster 3, tutored by Tyson Hosmer, David Reeves, Octavian Gheorghiu, and Jordi Vivaldi in architecture theory; Sheghaf Abo Saleh, Anthony Alvidrez, Shivang Bansal, Anna Galika and Hao-Chen Huang from Research Cluster 17, tutored by Daniel Koehler and Rasa Navasaityte. Unless indicated otherwise, the featured images and graphics of this issue are by Daniel Koehler, 2020.

Image: Sheghaf Abo Saleh, RC17, The Bartlett School of Architecture, UCL, 2018.
When Architecture Thinks! Architectural Compositions as a Mode of Thinking in the Digital Age
Architecture, Building, Environmental Design, Mereologies, Mereology
Sheghaf Abo Saleh
University College London
s.saleh.17@alumni.ucl.ac.uk

“One must turn the task of thinking into a mode of education in how to think.”[1]

These words from the philosopher Martin Heidegger point towards new modes of thinking. As architects, one can recall Mario Carpo’s remark that the huge amounts of data available to everyone nowadays remain largely underused.[2] As this essay will argue, this new condition of Big Data, and the digital tools used to comprehend and utilise it, can trigger an entirely new way of thinking about architecture. It both opens doors for testing and offers an opportunity to look back into history and re-evaluate certain moments in new ways. Brutalism is one such moment: it emerged in the 1950s as a post-war solution, a new mode of thinking about architecture influenced by Le Corbusier’s Unité d’Habitation de Grandeur Conforme in Marseilles (1948–54), the Industrial Revolution and the age of the mechanical machine. Brutalism can thus be read as an expression of the building understood as a machine. Luciana Parisi has expanded on this idea, writing that Brutalism can be considered the start of thinking about architecture as a digital machine, having removed any notion of emotion from the architectural product and left a rough mass of materials and inhabitable structures.[3] In Parisi’s sense, brutal architecture can then be read as a discrete system of autonomous architectural parts brought together by a set of rules: symmetry, asymmetry, scale, proportion, harmony, and so on. These rules, materials and structures act autonomously, using collective behaviours to produce data. The data can then be translated into concrete compositional elements which form a building, a city or a whole territory. The adjacencies between the discrete compositional elements create the relations between those parts.

Figure 1 – Thinking parts interacting to produce a building. Image: Espen Dietrichson, Hard Edges, Cloudy Cities, Galleri Haaken, 2018.

The Building Thinking Machine  

The building as a machine departs from Le Corbusier’s claim for a functional architecture.[4] Today, the use of machine learning and artificial intelligence means that machines are no longer used only for making; they are thinking machines.[5] This allows a new translation of Le Corbusier’s understanding of function, asking: what if architecture acts as a mode of thinking? How would a building perform as a thinking machine?

The generation following Le Corbusier developed the building machine further. Reyner Banham linked the building machine to comfort and the environment,[6] seeing the building as a kit of tools that provide comfort. In other studies, Banham proposed the building as a package, totally enclosed and isolated from the external environment, referring to this as “the environmental bubble”. He proposed that surrounding the building with one thick protective layer around the internal space was the best solution for providing a well-tempered environment. Yet Banham presents a clear separation between interior and exterior spaces which no longer matches the complexity of interior-exterior relationships at either the urban or the architectural scale.

Mereological Reading of Architectural Precedents

The different systems that provide a well-tempered environment inside a building cast the difference between inside and outside as the difference between a tempered and a non-tempered environment. Mereology, the study of part-relations,[7] can be used as a methodology to read a building in terms of its compositional aspects.

One historical example is the Rasoulian House (1904), which was designed to provide a state of comfort for its users throughout the year. A basic architectural element known as the wind-catcher tower, or Malqaf, supplied the building with a breeze. As Sarinaz Suleiman describes, the Malqaf is a composition of architectural elements that work together to create air flow: walls, doors, rooms, the basement and the courtyard, organised in a specific order, with specific proportions and orientation, to create particular relationships between inside and outside.[8]

The Malqaf is the first point at which air flow enters the building. The air travels down a shaft, the first interior space the wind interacts with, continues through a window-like opening into a room, and then moves through an opening in the room’s floor to a cellar beneath the building. This third interior space is the coolest in the building. The cellar is connected to the courtyard through an opening that facilitates circulation and absorbs wind. For this to happen, two kinds of relationship need to exist: an exterior relation formed by the geometry of two elements, e.g. the height of the Malqaf and the width of the courtyard, which together create a large difference in air pressure; and an internal relation controlled by the openings between the interior spaces and between interior and exterior spaces. The ventilation path is therefore not merely void space, but another level of interiority inside the building.

Figure 2 – An architectural precedent for ventilation, the Rasoulian House in the city of Yazd, Iran, 19th century. Image: Sheghaf Abo Saleh, 2020.

Another example of a complex ventilation system is the data centre.[9] Data centres usually produce vast thermal exhaust that requires constant air movement, demanding ceiling and floor voids that may be as large as the occupied building itself. Servers are positioned in the room at a certain distance from each other, a distance related to the temperature and the speed of air circulation. Higher temperatures inside the room lower the air pressure, creating a pressure difference that drives air circulation through the room naturally. The path the air travels gives it the time it needs to cool down naturally.

Computational Ventilation

Two millennia ago, Vitruvius described wind as “a flowing wave of air with an excess of irregular movements. It is produced when heat collides with moisture, and the shock of the crash expels its force in a gust of air.”[10] Vitruvius’ definition can be deconstructed into two parts. The first deals with the dominant wind direction and its relation to the outer envelope of the building, a concept Vitruvius emphasised with the example of the Octagon Marble Tower in Athens, known as the Tower of the Winds (1st century BC). The second relates to the process by which wind flow arises in nature: air circulation occurs where two different air pressures meet. Differences in air pressure always result from changes in temperature and moisture. High temperature heats the air, causing low density and consequently low-pressure areas, while lower temperature creates high-pressure areas. This is the logic followed by passive ventilation systems throughout history. Such systems create two points with a large difference in pressure and connect them with a path through the spaces that need to be ventilated; air then moves along this path from the high-pressure to the low-pressure point, creating air flow inside the building.
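The thermodynamic logic described above can be sketched numerically. The following is a minimal illustration, not part of the original research: using the ideal gas law, warmer air at the same pressure is less dense, and the density difference between a cool and a warm column of air of equal height yields a stack-pressure difference that drives circulation. The temperatures and tower height are hypothetical values chosen for demonstration.

```python
# Minimal sketch of the stack-effect logic: warmer air is less dense
# (ideal gas law), and the density difference between a sun-heated and
# a shaded column of air produces a pressure difference driving flow.

R_AIR = 287.05   # specific gas constant for dry air, J/(kg*K)
G = 9.81         # gravitational acceleration, m/s^2
P_ATM = 101_325  # ambient pressure, Pa

def air_density(temp_c: float, pressure: float = P_ATM) -> float:
    """Density of dry air (kg/m^3) from the ideal gas law."""
    return pressure / (R_AIR * (temp_c + 273.15))

def stack_pressure(t_cool_c: float, t_warm_c: float, height_m: float) -> float:
    """Pressure difference (Pa) between a cool and a warm air column."""
    return (air_density(t_cool_c) - air_density(t_warm_c)) * G * height_m

# A shaded courtyard at 24 °C against a sun-heated shaft at 38 °C,
# over a hypothetical 8 m wind-catcher tower: a few pascals of driving
# pressure, enough for steady passive circulation.
dp = stack_pressure(24.0, 38.0, 8.0)
```

Even a small temperature contrast therefore sustains flow, which is why the geometry of shaded and sun-exposed surfaces matters more than any mechanical device in these systems.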

Figure 3 – The Octagon Marble Tower (Tower of the Winds), Athens, Greece. Image: from an edition of Vitruvius’ Ten Books on Architecture.

A traditional building from the Middle East can be taken as a case study for applying this thermodynamic logic to create natural air circulation. In the earlier example of the Rasoulian House, the side exposed to the sun heats up and its air pressure consequently decreases, while the building’s geometry creates shadowed areas inside and outside that are much colder and hold higher-pressure air. Air circulates from the high-pressure to the low-pressure areas: it moves from the cooler courtyard to the warmer space above it, and this upward movement draws air out of the building to fill the void the rising air leaves behind in the courtyard. Through the opening at the top of the shaft, outside air in turn enters the building to replace the air drawn out of the interior. This chain of replacement creates the air circulation inside the building. The creation of wind thus depends on the design of the inner space and its relation to the outer space through openings: by closing and opening them, wind can be stopped or created; by changing which openings are used, the flow path can be redirected and wind speed increased or decreased. This follows a logic of discrete, combinatorial air flow.
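The combinatorial reading above can be sketched as a switching circuit. In this illustrative model (the space names and opening set are hypothetical, loosely following the Malqaf sequence described earlier), spaces are nodes, openings are binary switches, and air flows only if the open openings form a continuous path between the high-pressure and low-pressure zones.

```python
from collections import deque

def wind_path(openings: dict, source: str, sink: str) -> bool:
    """Return True if the open openings connect source to sink (BFS).

    openings maps (space_a, space_b) pairs to True (open) or False
    (closed); air can flow through an opening in either direction.
    """
    graph: dict = {}
    for (a, b), is_open in openings.items():
        if is_open:
            graph.setdefault(a, []).append(b)
            graph.setdefault(b, []).append(a)
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == sink:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical opening set following the wind-catcher sequence:
openings = {
    ("malqaf", "shaft"): True,      # tower inlet
    ("shaft", "room"): True,        # window-like opening
    ("room", "cellar"): True,       # opening in the room's floor
    ("cellar", "courtyard"): True,  # courtyard opening
}
flowing = wind_path(openings, "malqaf", "courtyard")  # circuit closed
openings[("room", "cellar")] = False                  # close one switch
blocked = wind_path(openings, "malqaf", "courtyard")  # flow interrupted
```

Closing any single opening in the chain breaks the circuit, which is exactly the sense in which the building behaves as a switch for air flow.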

Figure 4 – Section in a wind catcher shows the path of the wind as a binary system controlled by switching circuits. Image: Sheghaf Abo Saleh, RC17, The Bartlett School of Architecture, UCL, 2018.

Computational Ventilation at the Urban and Architectural Scale

The building can be seen as a machine for creating an environmental condition through compositional thinking. In the case of the Malqaf, this way of thinking turns the building into a switch that can turn the air flow on and off. Here the creation of wind depends entirely on a series of well-organised and ordered elements. From this combinatorial perspective, wind can be read as a form of pre-digital computation, with the sequence of inside-outside spaces as what causes the air flow.

The order of inside-outside also plays an important role in disrupting air flow. A single element extracted from a building can serve as an example: it is a corridor, yet at the same time it plays a crucial role in creating wind. The arrangement of its walls produces the contrast between inside and outside, while the design and arrangement of its openings turn the corridor into a path for air. Taking this element as a discrete part and rearranging its parts under the same local rules of the ventilation logic, another version of the element emerges. Following the same logic yields different versions of different elements. Each version retains its discreteness and can be upscaled; with this upscaling strategy, more complex interiors emerge.

Figure 5 – (clockwise, from upper left) Extracted element from a church in Finland; new version of the element with mereological changes; arrangement of ten elements combined in a way that ensures the wind flow runs through the whole system; the path that wind flow draws. Image: Sheghaf Abo Saleh, RC17, The Bartlett School of Architecture, UCL, 2018.

By integrating an environmental aspect within the design process, a new type of building that embraces another wind geometry can be created. This provides an opportunity to design highly dense architectural forms that can still ensure the qualities of the internal space. By nesting interiors, one can create different low- and high-pressure areas across inside-outside sequences.

Figure 6 – Different arrangements of [-][+] situations and what they create as wind patterns, the discreteness of the wind flow, wind-geometry. Image: Sheghaf Abo Saleh, RC17, The Bartlett School of Architecture, UCL, 2018.

This allows a rethinking of the inside-outside arrangement of the city according to the positive or negative sequences it creates. For more similar interiors, less contrast in air pressure needs to be produced; for more variation between the interiors, the contrast must increase and more air will flow. Air circulation can thus be used as a means of arranging both interior and exterior spaces in the building and in the city.
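The relation between [-][+] sequences and flow, as in the fragments of Figure 7, can be made concrete with a toy enumeration. The scoring rule below is an assumption introduced for illustration, not the project’s actual metric: a fragment is a string of low- (-) and high-pressure (+) zones, and the amount of wind flow is taken as the number of adjacent pressure contrasts along the sequence.

```python
from itertools import product

def flow_score(sequence: str) -> int:
    """Count adjacent [-][+] or [+][-] contrasts that drive air flow."""
    return sum(1 for a, b in zip(sequence, sequence[1:]) if a != b)

# Rank all fragments of five zones from most to least wind flow:
fragments = ["".join(p) for p in product("+-", repeat=5)]
ranked = sorted(fragments, key=flow_score, reverse=True)

# Fully alternating sequences ("+-+-+") maximise flow; uniform ones
# ("+++++") produce none, matching the observation that similar
# interiors require little pressure contrast.
```

The same enumeration could be read at the urban scale: choosing a sequence is choosing how much variation, and hence how much air movement, the interiors will have.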

Figure 7 – A range of building fragments with the same number of elements but different [-][+] sequences, by more to less wind flow. Image: Sheghaf Abo Saleh, RC17, The Bartlett School of Architecture, UCL, 2018.

Achieving Banham’s Campfire

At an architectural scale the interior-exterior relation can also be managed by the building façade. The façade tends to be used to separate indoor from outdoor space, and a tempered from a non-tempered environment, in order to achieve comfort. However, a new understanding of wind circulation can provide a well-tempered environment regardless of the façade as a sealed envelope. In other words, the façade can here be seen as the set of tools or elements that provide comfort and facilitate air circulation inside the building.

A façade needs to meet specific criteria in order to generate a difference in air pressure, just like the inside-outside arrangement at the city scale. Three design parameters support this: the orientation of the elevation in relation to the sun; the number of layers needed to create more or less tempered areas; and the degree of translucency of the façade, which prevents or admits sunlight and in turn helps the space reach the preferred temperature. The façade is no longer the envelope of the building; it is the set of layers responsible for providing comfort inside it.
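These three parameters can be given a schematic encoding. The model below is a hypothetical illustration of the parameter logic, not a validated thermal calculation: orientation sets how much direct sun the elevation receives (a cosine falloff), and each layer’s translucency attenuates what passes through.

```python
import math
from functools import reduce

def solar_gain(orientation_deg: float, translucencies: list) -> float:
    """Fraction of direct sunlight reaching the interior.

    orientation_deg: angle of the elevation away from the sun-facing
    direction; translucencies: one value in [0, 1] per façade layer.
    """
    exposure = max(0.0, math.cos(math.radians(orientation_deg)))
    attenuation = reduce(lambda g, t: g * t, translucencies, 1.0)
    return exposure * attenuation

single = solar_gain(0.0, [0.8])       # one translucent layer, full sun
double = solar_gain(0.0, [0.8, 0.8])  # an extra layer tempers further
turned = solar_gain(60.0, [0.8])      # rotated away from the sun
```

Adding layers or reducing translucency lowers the gain multiplicatively, which is the sense in which the façade becomes a stack of tempering layers rather than a single envelope.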

Figure 8 – Building fragments. Image: Sheghaf Abo Saleh, RC17, The Bartlett School of Architecture, UCL, 2018.

Indeed, thinking about architecture through its interiors can expose a low-tech computation that starts from thermodynamic discreteness. It enables an understanding of spatial sequence that can support different levels of space in a building, and the notion of layers of buildings-in-buildings. Upscaled to the city, this concept offers an opportunity to study the kinds of patterns that mereology can create through environmental thinking. A building, or even a city, could then become an example of the campfire that Banham sought many years ago.[11]

Figure 10 – Building fragment implemented with the façade concept. Image: Sheghaf Abo Saleh, RC17, The Bartlett School of Architecture, UCL, 2018.
Figure 11 – Building fragment implemented with the façade concept. Image: Sheghaf Abo Saleh, RC17, The Bartlett School of Architecture, UCL, 2018.

References

[1] M. Heidegger, The End of Philosophy, trans. Joan Stambaugh (Cambridge University Press, 2003).

[2] M. Carpo, The Second Digital Turn: Design Beyond Intelligence, (Cambridge, Massachusetts: MIT Press, 2017).

[3] L. Parisi, “Reprogramming Decisionism,” e-flux, 85 (2017).

[4] Le Corbusier, “Eyes That Do Not See”, in Towards a New Architecture, (London: The Architectural Press, 1927), 107.

[5] M. Carpo, “Excessive Resolution: Artificial Intelligence and Machine Learning in Architectural Design,” Journal of Architectural Record (2018), https://www.architecturalrecord.com/articles/13465-excessive-resolution-artificial-intelligence-and-machine-learning-in-architectural-design, last accessed 3 May 2019.

[6] R. Banham, “Machines à Habiter”, The Architecture of the Well-tempered Environment, (Chicago: University of Chicago Press, 1969).

[7] A. Varzi, “Mereology Then and Now”, Journal of Logic and Logical Philosophy, 24 (2015), 409-427.