Issue 2
14/05/2022
ISSN 2634-8578
Curated By:
Alessandro Bava
Solarpunk Building for Terraforma, Alessandro Bava, 2021
Editorial Note
29/04/2022
B-pro, Editorial Note, Prospectives, Prospectives Issue 02, The Algorithmic form, The Bartlett
Provides Ng

provides.ng.19@ucl.ac.uk

Welcome to Prospectives Issue 02

It’s been a great pleasure to be part of Prospectives – a journal that is dedicated to all researchers and designers, students and scholars, established or in their early careers. It aims to act as a hotbed, a sandbox, a platform that is “from architects, by architects, to architects” in its broadest sense – be it architects of buildings, software, or future(s) (or the Matrix!). It is for all who are invested in interdisciplinary and intercultural exchanges, information and idea seeding.

According to Oxford Languages, the term “Prospective” emerged in the late 16th century, with a meaning of “looking forward, foresighting”, or “characterised by looking to the future”. The journal’s title puts the anticipatory nature of Prospective(s) into plural form; we believe “design” is the maximising of options or, as Claude Shannon put it, “surprises” in a system; and the realisation of design is the collapse / negotiation / collaboration of all such possibilities into our physical reality. When the word “prospect” is translated into other languages, like my mother tongue Chinese, it adds yet another layer of meaning. The first result that Google turned up was “奔頭兒” (rushing-heads), an expression much used in local dialects in the North-East of China to describe the hard work needed to secure a promising future. Different languages and cultures map the vibrancy of Prospectives, and also of architecture and world-building. One is simultaneously enabled and constrained by the language which structures one’s thinking, be it architectural, mathematical or natural language; this is why collaboration, or a collaborative intelligence, is our biggest prospect. The greatest innovations are the ones characterised by inclusivity, not exclusivity.

Within such a context, what is the role of a journal? To ensure standards in research? To network scholars in the field? To communicate progress to the larger public? We have seen an increasing number of open-source journals that are revolutionising the peer-review system; not to replace it, but to diversify what can be meant by peer-to-peer (p2p). At Prospectives, we are invested in democratisation, especially in helping independent authors and designers reach a larger audience, and making literature available and accessible to all through participation and digitalisation. The future of journals (and architecture) is certainly one that can synthesise copyrights and “copylefts”. As Prof. Mario Carpo suggests, while the marginal costs of printing (be it 2D or 3D) decrease, our capacities for mass customisation increase, and the same applies to information production. With the rise of the Omniverse, Metaverse, and MetaNets, it becomes increasingly apparent that the answer lies not in the technologies themselves, but in the way the social and the economic are re-structured, driven by participatory innovation. It will take the invisible (or visible) hands of the many to steer us towards the prospectives we desire.

Issue 02: The Algorithmic Form

“Algorithmic” as the adjective, “form” as the subject – connecting fundamental questions in computation to architecture. The second issue of Prospectives is driven by the provocations of the essay “Computational Tendencies”, written in 2020 by Alessandro Bava – who is also the guest curator of this issue. He problematised evolutionary thinking in architecture – the linear and unidirectional development from simplicity to complexity, from causation to correlation, from small to big data – and questioned the prospects of algorithms and forms within social and cultural urgencies. In the search for answers that are likely to fall between established fields, Alessandro invited six architects to engage in conversation with great figures from the fields of art, architecture and computation. Some of these conversations are carried out through interviews and roundtables, others through research, literature and case studies, forming dialogues between the past and present. Together with this, an open call was established to crowd-source intelligence and outsource imagination. These critical and retrospective pieces map a speculative timeline of events around “algorithmic forms” from the Italian Renaissance, through the beginning of modernism, up to today.

Prospectives Issue 02 encompasses 14 contributions. Prof. Mario Carpo starts our journey with an analogy from the German language, where grammar is “an artificial shortcut” to fluency, not its entirety. The same logic may apply to “Shape Grammar” in architecture, or the Common Data Environments of BIM, or the big databases of Artificial Intelligence (AI). Just as he exquisitely formed a connection between the invention of book-printing and 3D-printing to predict a future of mass customisation, in this piece Mario shows us a comparative history between the citationists of the Renaissance and post-modern (PoMo) architecture. The former were invested in reviving classical antiquity “piece-by-piece”, while the latter took its cues from “reference, allusion, collage and cut-and-paste”. We are also indulged with the distinguished curator Hans Ulrich Obrist’s interview with Getulio Alviani – an important figure in the international Optical-kinetic art movement throughout the 20th century. Alviani spoke of being motivated by the work of Leonardo da Vinci; his geometric exploration arising from the “curiosity of seeing”; the tectonics between material and structure, craft and design; and finally, the immersivity of movement with the “discovery of light”. This precious and poetic piece teleports us to the Italian art scene through Alviani’s encounters, provoking us to reflect on our journey from simplicity to complexity.

The five pieces that follow are the outcome of the B-pro Open Seminar at the Bartlett School of Architecture on 8th December 2021. Five invited guests – Roberto Bottazzi (The Bartlett), Francesca Gagliardi and Federico Rossi (Fondamenta), Philippe Morel (ENSA Paris-Malaquais & The Bartlett), Marco Vanucci (Open Systems), and myself (Provides Ng, The Bartlett) – contemplated and discussed the work of Luigi Moretti, Isa Genzken, Manfred Mohr, and Leonardo and Laura Mosso – important figures who have shown us new forms of aesthetics through the exploration of novel technological, geometrical, and mathematical tools. The roundtable that followed included discussions on topics in Building Information Modelling (BIM), AI, blockchain, robotics, extended reality (XR) and other distributive technologies – technologies that, undeniably, should be brought to the table for their symbiosis and socioeconomic implications, positive or negative.

Lastly, the richness of this issue is further complemented by five selected open-call pieces, with topics ranging from architectural authorship and algorithmic representation to digital anthropology, computational empiricism, and the liberation of creativity through codification.

Acknowledgements

Prospectives hopes to uncover not only the urgency around issues of computation and automation within the built environment, but also the communities and initiatives that are involved in such developments; from the Bartlett School of Architecture, UCL, reaching out to wider society across disciplinary and territorial borders.

First and foremost, I owe thanks to Prof. Frederic Migayrou, who is chair of the school, director and founder of B-pro – five exciting programmes led by an international and interdisciplinary team of faculty members, which have shown the field diverse paths to architecture and education, and a shelter for all who strive for “prospects”. And to Prof. Mario Carpo, a historian, a critic, a theorist, who has liberated my thinking and shown us a form of architecture that is so much more than design; a form of architect that is so much more than a builder; a form of speculation that is so much more than fiction; a form of prospect that is so much more than futuring. Mario and Frederic were my supervisors, patiently guiding me through a marvellous history of Architecture & Digital Theory; a history that has become a rock in my heart – even though the prospects of the future are not always clear, history has prevented me from confusing and losing myself, and urged me to write and research with honesty, and I hope this journal can do the same for its readers. And of course, Mollie Claypool, a dedicated advocate, a female theorist, my role model. A strong figure with a soft heart, she will always fight and speak up for, in her words, “a labour of love and perseverance”, spearheading participatory and collaborative practices in automation, design and research, and the launch of this very journal. Roberto Bottazzi and Gilles Retsin, programme directors of Urban Design (UD) and Architecture Design (AD) in B-pro, together with Mollie, have given me so much opportunity, trust, advice and support, facilitating a free platform of architectural expression and a warm hub of design innovation. Thanks also go to Prof. Bob Sheil and Andrew Porter, who have relentlessly endorsed and formalised the development of Prospectives and all other initiatives within the School of Architecture, facilitating a welcoming hotbed for creativity, self-initiation and self-organisation.

I am thankful to all those who are my colleagues, but also my mentors, including Alessandro Bava, who has curated this issue with much sincerity and commitment, bringing an amazing line-up of guests and design provocations to the table; Déborah López Lobat, Hadin Charbel, Manuel Jimenez, Emmanouil Zaroukas, Clara Jaschke, Mark Garcia, Jordi Vivaldi Piera and Albert Brenchat-Aguilar, with whom I’ve had some of the most engaging and interesting disciplinary discussions and who have never hesitated to reach out a helping hand; and Daniel Koehler, Valentina Soana, and all Prospectives advisory board members. Above all, Alberto Fernandez Gonzalez and David Doria: my strongest backers, my faithful ear, my collaborative hands, my motivation and my exemplars; it is my honour and blessing to be amongst such fellowship and companionship. Needless to say, we would be nothing without our communication and administration teams, the invisible heroes who have supported the running of the school, especially Drew Pessoa, Tom Mole, Ruth Evison, Gen Williams, Srijana Gurung, Abi Luter, Dragana Krsic, Sarah Barry, Jessica Buckmire, Julia Samuels, and Crystal Tung. Last but not least, thanks to Rebecca Sainsot and Dan Wheeler, who assisted with the publication and copy-editing of this issue with such dedication, and to those who have submitted and contributed to our open call. I am grateful to all schools of architecture, like the Bartlett, that have enabled and facilitated projects such as Prospectives, opportunities for early-career and independent scholars, and a place for aspiring talents to meet and grow.

algorithmic form, 2021
Introduction to Issue 02: Algorithmic Form
Algorithmic Form, Architecture, Architecture Theory, curatorial note, Philosophy
Alessandro Bava

thealessandrobava@gmail.com

I was asked by Mollie Claypool to curate the second issue of Prospectives Journal as an ideal follow-up to leading Research Cluster 0 at B-pro in the academic year 2020/21. As such, this issue is a collection of positions that respond to my research interests during that year.

In fact, my initial objective with RC0 was to research ways of applying computational tools to housing design for high-rise typologies: the aim was to update modernist housing standardisation derived from well-established rationalist design methodologies based on statistical reduction (such as in the work of Alexander Klein and Ernst Neufert), with the computational tools available to us now.

While the outcomes of this research were indeed interesting, I was left with a sense of dissatisfaction, because it was very difficult to achieve architectural quality using purely computational tools – in a sense, I felt that this attempt at upgrading modernist standardisation via computation didn’t guarantee better-quality results per se, beyond merely complexifying housing typology and offering a wider variety of spatial configurations.

In an essay I published in 2019 (which in many ways inspired the curation of this Journal), I declared my interest to be in the use of computational tools not for the sake of complexity – formal or programmatic – but for increasing architectural quality, while decrying that the positions expressed by the so-called first and second digital revolutions, at the level of aesthetics at least, seemed too invested in their own self-proclaimed novelty. My interest was in rooting them in a historical continuum, with established architectural methodologies; seeing computational design as an evolution of rationalism. 

This is why I wanted this journal to be about architectural form, and not about the technical aspects of computational design: there is an urgent need to discuss design traditions connected to computational design, as an inquiry into “best practices” – that is, historical cases of what an algorithmic form has been and can be.

Any discussion of architecture implies a twin focus: on the one hand, on the technical aspects of construction and the tools of design; on the other, on how these are interpreted and sublimated by the artistic sensibility of an author. Ultimately, what’s interesting about architecture as the discipline of constructing the human habitat is how it is capable of producing a beautiful outcome; and in architecture, perhaps more than in any other practice, the definition of beauty is collective. To be able to establish what’s beautiful, we need to develop common hermeneutic tools, which – much like in art – must be rooted in history.

In light of this, I’m delighted with the contributions to this Journal, which offer a concise array of historical and contemporary positions that can help construct such tools. Many of the essays presented here offer a much needed insight into overlooked pioneers of algorithmic form, while others help us root contemporary positions in an historical framework – thus doing that work necessary for any serious discipline, technical or artistic, of weaving the present with the past.

My hope is that those individuals or academic institutions who are interested in how we can use emerging computational tools for architecture can re-centre their work not just on tooling and technical research but on architectural form, as the result of good old composition and proportion. The time is ripe, in my view, for bridging the gap between computational fundamentalists who believe in the primacy of code, and those with more conservative positions who foreground good form as the result of the intuition and inclination of a human author, remembering that an architectural form is only interesting if it advances the quality of life of its inhabitants and continues to evolve our collective definitions of beauty.  

algorithmic form, 2021
Sebastiano Serlio, Livre Extraordinaire de Architecture [...] (Lyon: Jean de Tournes, 1551), plate 18, detail
Citations, Method, and the Archaeology of Collage *
algorithm, alphabet, architectural language, Citations, Collage, Method, pomo, post modern, Renaissance, shape Grammar
Mario Carpo

m.carpo@ucl.ac.uk

But let us not have recourse to books for principles which may be found within ourselves. What have we to do with the idle disputes of philosophers concerning virtue and happiness? Let us rather employ that time in being virtuous and happy which others waste in fruitless enquiries after the means: let us rather imitate great examples, than busy ourselves with systems and opinions. … For this reason, my lovely scholar, changing my precepts into examples, I shall give you no other definitions of virtue than the pictures of virtuous men; nor other rules for writing well, than books which are well written.

Jean-Jacques Rousseau, Julie ou la Nouvelle Héloïse, Letter XII (William Kenrick transl., 1784)  

Children learn to speak their mother tongues through practice and observation. They don’t need grammar rules. Grammar comes later, when it is taught at school. This shows that we may know a language without knowing its grammar. Grammar is an artificial shortcut to fluency, replacing the lengthy process of learning from life. For a fifteen-year-old high school student struggling to learn German, grammar is indispensable. Yet plenty of native German speakers don’t know declensions by heart and still manage to get their word endings right – in speech as much as in writing.

At a higher level of linguistic practice, literary composition too used to have its own rules – rules that were taught at school. Until the end of the nineteenth century, rhetoric was a compulsory subject in most European secondary schools. Rhetoric is the science of discourse. It teaches how to find the arguments of speech, how to arrange them in an orderly manner, and how to dress them with words. Rhetoric teaches how to be clear and persuasive. Seen in this light, rhetoric would seem to be a necessary discipline – indispensable, even. Instead, it no longer features in school and university curricula. France stopped teaching rhetoric in 1885, when French lycées replaced it with the history of classic and modern literature. Nineteenth-century educators seemed to have concluded that, when learning to write, we are better off in the company of literary masterpieces, rather than engaged in the normative study of classical (or modern) rhetoric. A century after Rousseau, Julie-Héloïse’s pedagogical programme quoted above became law.

In times gone by, students would have learnt the art of discourse by systematically studying grammar and rhetoric – page after page of rules to be learnt by heart. Today, high school students in all European countries are instead obliged to read the masterpieces of their respective national literatures, often ad nauseam. This evidently follows from the assumption that, by reading and re-reading these exemplary works, students will (at some point) learn to write as beautifully as these canonical authors once did. Never mind that nobody knows precisely how and when that almost magic transference, assimilation, and transmutation of talent might occur: grammar has almost completely disappeared from primary school teaching, and rhetoric barely features in higher education – now an intellectual fossil of sorts. Meanwhile, the old art of discourse tacitly lingers on, in business schools, in creative writing and marketing classes. Especially in the latter, the ancient forensic discipline is returned to one of its ancestral functions: that of persuading, even when in the wrong.

For the Humanists of the Quattrocento, the first language to learn was Latin. Not Medieval Latin of course – a corrupt and barbaric but still living language. Renaissance Humanists wanted to speak in the tongue of classical antiquity; they wanted to learn Cicero’s Latin. But Cicero’s Latin is, by definition, a dead language: quite literally so, since it died with Cicero. Cicero also wrote manuals on the art of rhetoric, but the Humanists believed that the best way to learn to write like Cicero was by imitating his way of writing. Well before the Romantics and the Moderns, they found learning from rules unappealing. They preferred to copy the style of Cicero from examples of his work.

The Humanists’ veneration of examples was not limited to languages. Their exemplarism was an épistémè – an intellectual, cultural and social paradigm, deeply inscribed within the spirit of their time. That was their rebellion against the world they grew up in. For centuries the Scholastic tradition had privileged formalism, deductive reasoning, and syllogistic demonstration. The Humanists rejected this “barbarous”, “Gothic” tradition of logic, in favour of their new way of “learning from examples”. The dry and abstract rules of medieval Scholasticism were difficult to handle. Examples, on the other hand, were concrete and tangible. Imitating an example was easier, more pleasurable, and allowed more room for creativity than merely applying rules. This is how, at the dawn of modernity, antiquity was turned from a rule book into an art gallery.

*** *** ***

Like the arts of discourse, the arts of building require schooling. At the height of the Middle Ages, when both Gothic architecture and Scholasticism were at their peak, architectural lore was the preserve of guilds, and its mostly oral transmission was regulated by secretive initiation practices. By contrast, the Humanists pursued a more open strategy – reviving the ancient custom of writing books on building. The first modern treatise, Alberti’s De Re Aedificatoria, deals with the architecture of antiquity, but the structure of Alberti’s discourse was still medieval and Scholastic. Alberti advocates classical architecture as a paragon for all modern building, but Alberti’s antiquity was an abstract model, devoid of any material, visible incarnation. Rather than an atlas of classical buildings, Alberti’s book offers a set of classical design rules – rules for building in the classical way. To put it in more contemporary terms, Alberti formalised classical architecture. Alberti’s rules replace the need to see – let alone imitate – the monuments of classical antiquity. To avoid all misunderstanding, Alberti’s book did not describe any actual ancient monument, either in writing or visually: Alberti’s De Re Aedificatoria originally did not include any illustrations, and Alberti explained that he wanted it that way.

As a commercial venture, Alberti’s De Re Aedificatoria was not a success. Renaissance architects found it easier to skip Alberti’s writings altogether, and to go see, touch and learn from the extant magnificence of Roman ruins in person. Moreover, and crucially, as of the early sixteenth century drawings of ancient monuments started to be sold and circulated throughout Europe. Survey drawings in particular, for the first time made available through print, made the laborious ekphrastic and normative mediation of Alberti’s writings all but unnecessary. But models, if beautiful to behold, are not always easy to imitate. Copies will inevitably be more or less successful, depending on the individual talent of each practitioner. By the second or third decade of the sixteenth century, imitation itself had become a pedagogic and didactic conundrum.

Not just architectural imitation: writers had the same problem. After all, imitating Cicero is easier said than done. Many rhetoricians in the sixteenth century strove to transform the practice, skills, and tacit knowledge of literary imitation into a rational, transmissible technique. The modern notion of “method” was born out of sixteenth-century rhetoric, but sixteenth-century authors were not trying to develop a (scientific) method for making new discoveries; they were trying to develop a (pedagogic) method to better organise and teach what they already knew. Their post-Scholastic, pre-scientific method was essentially a diairetic method – a method of division: all knowledge, they argued, can be partitioned into smaller and smaller units, easier to learn, remember and work with. For sixteenth-century scholars, “method” still meant “shortcut” – a shortcut to knowledge.

Discourse itself can be divided into modular parts: prefaces, arguments, conclusions, formulas and figures, idioms or turns of phrase, sentences, syntagms, words and letters. Sixteenth-century rhetoricians used this divisive technique to invent a new method for literary imitation. On the face of it, Cicero’s style may appear as an ineffable quintessence, but at the end of the day all writing is text, and every text can be broken down into a linear sequence of alphabetical units. Of course, breaking up a text is not a straightforward operation: the parts of speech are held together by syntactic, semantic, and functional relationships. Some of these links can be uncoupled. Others can’t. A text is a heteroclitic, variable-cohesion aggregate of parts. Its segments differ in both extension and complexity. Yet even the most sophisticated literary monument can be subdivided into fragments; and once a fragment has been set apart from its compositional context, it can also be reused, reassembled, or recomposed into another text.

In reducing the art of discourse to a citationist technique – by turning ancient texts into a repository of infinitely repeatable citations – sixteenth-century rhetoricians invented a new rhetoric. Ancient and modern texts came to be seen as mechanical assemblages of parts. Ancient works could be decomposed into segments, and these segments could then be reassembled to form new works. The smaller the segments, the more fluid or freer the outcome. Ciceronian Latin was an extraordinarily sophisticated and effective instrument of communication, but some modern ideas fundamentally differed from those of Cicero. The citationist method of imitation allowed Renaissance authors to use an old language to express new ideas.

Renaissance architects also needed a rational method for producing modern buildings while imitating classical examples. The greatest structures of antiquity – temples, amphitheatres, thermal baths – were of no use to modernity. Temples, in particular, while representing the pinnacle of classical architecture, had been built to house rituals and represent heathen gods whose worship had long ceased. The entire language of classical architecture had to be adapted for typologies and functions that had no precedents in antiquity. The image of antiquity itself as a building that can be endlessly dismantled and reassembled was a commonplace in the Renaissance. It was also a common practice on many building sites. Architect Sebastiano Serlio would turn this practice into a design theory.

That was no accident. Giulio Camillo, one of the main theorists of the sixteenth-century citationist method, had an interest in architecture. He was also a friend of Serlio. The two were supported by the same patrons, and moved in the same circles of Evangelical (and perhaps Nicodemite) inclination. The method of Giulio Camillo’s Neoplatonist rhetoric is well known:

1. Appropriate ancient examples (literary or otherwise) must be selected. The criteria for this selection were a much-disputed matter at the time, and one on which Camillo himself did not dwell.

2. The resulting corpus of integral textual sources must be segmented or divided into parts according to functional or syntactical criteria.

3. This catalogue of dissolved fragments must be sorted, so new users know where to look for the fragments they need.

4. A modern writer (a composer, but also in a sense a compositor: an ideal type-setter) will pick, reassemble and merge, somehow, any number of chosen textual fragments.

Thus new ideas could be expressed through ancient words and phrases – fragments severed from their original context, yet validated by prior use by a recognised “authority”. In Camillo’s view, this compositional technique constituted the inner workings and the secret formula of all processes of imitation. Furthermore, this was a compositional method that could be taught and learnt.

One essential tool in implementing this pedagogical programme was Camillo’s notorious Memory Theatre, a walk-in filing cabinet where all the textual sources (and possibly some of the fragments deriving from them) would have been sorted following Camillo’s own classification system. The whole machine, which included an ingenious information retrieval device, would have been in the shape of an ancient theatre – and it appears that Camillo built at least a wooden model or mock-up of it, in the hope (soon dashed) of selling his precociously cybernetic technology to King Francis I of France.

In a long-lost manuscript (found and published only in 1983) Camillo also explains how the same principles can inform a new method for architectural design. In Camillo’s Neoplatonic hierarchy of ideas, the heavenly logos descends down into reality following seven steps or degrees of ideality. Individuals inhabit the seventh (lowest, sublunar) step; their ascent and crossing of the lunar sky occurs by dint of their separation from the accidents of space and time. In the case of architecture, actual buildings as they exist on earth must be separated from their site to become ideas of the lowest (sixth) grade. This separation of the real from its worldly context results in something similar to what we would today call “building types” – which are buildings in full, except they do not inhabit any given place. These abstract types are then further subdivided into columns and orders (of the five kinds then known: Tuscan, Doric, Ionic, Corinthian, and Composite). The five orders are then broken down into regular geometric volumes, then surfaces, all the way to Euclidean points and lines. On each grade or step, a catalogue of ready-made parts would offer any designer all the components needed to assemble a new building. Thus Camillo’s design method doubles as a shortcut to architectural imitation, and as a universal assembly kit.

A more scholarly trained Neoplatonist philosopher (and a few existed in Camillo’s time) would have objected to some of Camillo’s brutal simplifications, and could have pointed out that his theory had severe epistemic flaws. All the same, Camillo’s architectural method (which its first editor, Lina Bolzoni, dated to around 1530) is almost identical to the plan laid out by Serlio in the introduction to the first instalment of his architectural treatise, published in Venice in 1537. Some of Serlio’s seven grades did not correspond to Camillo’s order: most notably, his atlas of archaeological evidence, the base and foundation of Camillo’s Neoplatonic scaffolding, should have been on the lowest step, but was instead printed as Serlio’s Third Book (likely for commercial reasons). Additionally, one of the seven books in Serlio’s original plan, his revolutionary Sixth Book, on Dwellings for all Grades of Men, was written but never published – at least, not until 1966. Serlio also wrote an additional, Extraordinary Book (literally, a book out of the original order) – a cruel, sombre joke disguised as a book, which Serlio bequeathed to posterity shortly before dying, poor and dejected in his self-imposed French exile.

Regardless of some factual discrepancies, Serlio’s compositional method is ostensibly the same as Camillo’s. Architecture’s exemplary models are selected, and then fragmented. These fragments are sorted and classified at different levels or grades of dissolution. Instructions for their reassembly are then provided, together with examples of successful new compositions. The pivot of the whole system was the book on the five architectural orders, which Serlio published first (albeit titled Fourth Book to comply with the general plan): a catalogue of stand-alone constructive parts (columns, capitals, bases, entablatures and mouldings), destined for identical reproduction in print, in scaled drawings, and in buildings of any type. In Serlio’s method, this was the main offspring of architectural “dissolution” (or disassembling), and the basic ingredient of architectural design, i.e. re-composition. Pagan idols had to be broken down; only their fragments could be used, purified ingredients in the building of a new Christian architecture.

Throughout, Serlio was aware of, and attuned to, the purpose and limits of his architectural method. Serlio turned architectural design into an assemblage of ready-made modular components. These were not actual spolia, but compositional design units, part of a universal combinatory grammar and destined for identical replication. Giulio Camillo's rhetoric reduced the imitation of Cicero's style, hence all literary composition, to a cut-and-paste method of collage and citation. Serlio's treatise did the same for architecture. His theory of the orders was the keystone of the entire process. Serlio couldn't standardise the building site (that would have made no sense in the sixteenth century), but he could standardise architectural drawings and design.

Serlio knew full well that his simplified, almost mechanical approach to design would entail a decline in the general quality of architecture. Many critics across the centuries have indeed frowned at the models and projects shown in his Seven Books. Serlio’s designs have often been seen as repetitive, banal, ungainly or chunky; lacking in inspiration and genius. But Serlio did not write for geniuses. His treatise was a pedagogical work, not an architectural one. As Serlio tirelessly reminds the reader, his method is tailored to “every mediocre”: to the “mediocre architect” – the average, middling designer. Today we might say that Serlio’s treatise aimed at creating an intermediate class of building professionals. Michelangelo and Raphael had no need for “a brief and easy method” that turned architectural invention into cut-and-paste, collage and citation.

Knowledge can be taught, not genius. Serlio's pedagogical structure and design method were parts of an overarching ideological project. Serlio's method promises uniform and predictable architectural standards. These are perhaps banal, or monotonous, but that's the price one pays to make "architecture easy for everyone". And it is a price Serlio was willing to pay. Serlio's concern was the average quality of building, not the artistic value of a few outstanding monuments. This was a most unusual choice for an artist of the Italian Renaissance – an iconoclastic, almost revolutionary stance. Serlio's worldview was not one in which the misery of the many was contrasted by the magnificence of a few. Serlio pursued the uniform, slightly boring repetitiveness of a productive, "mediocre" multitude. This was an ideological project, but also a social project, ripened in the cultural context of the early protestant Reformation. It is a position that evokes and prefigures well-known categories of modernity.

Sebastiano Serlio, Livre Extraordinaire de Architecture [...] (Lyon: Jean de Tournes, 1551), plate 18.

* Footnote to this translation

This is a translation of the introduction to my book Metodo e Ordini nella Teoria Architettonica dei Primi Moderni (Geneva: Droz, Travaux d’Humanisme et Renaissance, 1993), edited, abridged, and adapted for clarity, but not updated. That book in turn derived from my PhD dissertation, supervised by Joseph Rykwert, researched and written between 1984 and 1989, and defended in the spring of 1990. Heavily influenced by Françoise Choay’s La Règle et le Modèle and by works of literary criticism by Terence Cave (The Cornucopian Text), Antoine Compagnon (La seconde main ou le travail de la citation), and Marc Fumaroli (L’âge de l’éloquence), all published between 1979 and 1980, my enquiry into the use of visual citations in Renaissance architectural design was evidently in the spirit of the time: post-modern architects in the 80s were passionate about citations (or the recycling of precedent, otherwise known as reference, allusion, collage and cut-and-paste); they were equally devoted to architectural history, and particularly to the history of Renaissance classicism. My aim then was to bridge the gap between those two sources of PoMo inspiration, showing that Renaissance architecture was itself, quintessentially, citationist. How could it have been otherwise, since the main purpose of Renaissance architects was to revive, literally, the buildings of classical antiquity – piece by piece? Thanks to the first studies of Lina Bolzoni on the sulphurous Renaissance philosopher and magician Giulio Camillo, and to my then girlfriend, who was studying Renaissance Neoplatonism (and is today a noted specialist in that arcane science), I soon found evidence of an extraordinary link – biographical, ideological, and theoretical – between Giulio Camillo and Sebastiano Serlio, and I wrote a PhD dissertation to explain the transference of the citationist method from Bembo’s Prose to Camillo’s Theatre to Serlio’s Seven Books – and ultimately to Serlio’s architecture.

Unfortunately, in the process, I also found out that the citationist method in the 16th century was a tool and vector of modernity. It was a mechanical method, made to measure for the new technology of printing; it was also in many ways a harbinger of the scientific revolution that would soon follow. Besides, the citationist method was more frequently adopted by Evangelical and Protestant thinkers (particularly Calvinist), and it was condemned by the Counter-Reformation. None of this would have pleased the PoMo architects and theoreticians who were then my main interlocutors.

Fortunately for me, they never found out. When my book was published, in 1993, the tide of PoMo citationism was already receding. Investigating the sources of citationism was no longer an urgent matter for architects and designers. My book was published in Italian, in an austere collection of Renaissance studies – few architects would have known about it, let alone read it. It received some brutally disparaging reviews, as was to be expected, from some of Tafuri's acolytes, who thought, without reading my book, or misreading it, that I was bringing water to the PoMo mill. I wasn't. But at that point that was irrelevant. We had all already moved on.

I was pleasantly surprised when, a few years ago, Jack Self commissioned this translation for publication in Real Review (the translation, by Fabrizio Ballabio, was soon thereafter partially republished in Scroope, the journal of the Cambridge School of Architecture, at the request of Yasmina Chami and Savia Palate); and I was of course more than happy when my colleague Alessandro Bava asked me to review it for publication in the B-Pro journal of Bartlett School of Architecture. As we all know, collage and citation are becoming trendy again in some architectural circles – for reasons quite different from those of the late structuralists and early PoMos who were my mentors when I was a student. I have somewhat mixed feelings about the current, post-digital revival of collaging, but I would be happy to restart a discussion we briefly adjourned a generation ago.

Mario Carpo (March 2022)

Publication history:

Metodo e Ordini nella Teoria Architettonica dei Primi Moderni. Alberti, Raffaello, Serlio e Camillo (Geneva: Droz, 1993). 226 pages. Travaux d’Humanisme et Renaissance, 271

“Citations, Method, and the Archaeology of Collage”. Real Review, 7 (2018): 22-30, transl. by Fabrizio Ballabio and by the author; partly republished in Scroope, Cambridge Architectural Journal, 28 (2019): 112-119

disk turned steel. 1965
HANS ULRICH OBRIST Interview with GETULIO ALVIANI 
discovery of light, GETULIO ALVIANI, HANS ULRICH OBRIST, immersive, raisonné, structures
Hans Ulrich Obrist

hans-ulrich.obrist@serpentinegalleries.org
Read Article: 5504 Words

10 April 2015, Milan, Miartalks

First edited transcription, Paola Nicolin 

Hans Ulrich Obrist: I would like to start right from the beginning. You told me about your uncle, but above all about the importance that Leonardo Da Vinci has always had in your work … 

Getulio Alviani: As a child, during my first years of school in Udine, the fair of Santa Caterina was held there, with stalls selling books and other things; there I came across two volumes, which I bought with the few cents I had then: one on Beato Angelico and one on Leonardo Da Vinci. I lived in the countryside back then and therefore I loved nature very much. I loved seeing birds, crickets, moles, foxes, and in this book by Leonardo there was the “bestiary.” For me, it was great, because I thought it was wonderful that a man knew all those things that I experienced daily, but that I knew absolutely nothing about. So, I fell in love with Leonardo Da Vinci, and studied his drawings in small format, because at the time there were no books with colour photographs or with enlargements. I remember a surprising thing that I always have in front of my eyes, which is how he had drawn the wind. For me, thinking that the wind could be drawn was incredible. 

From the early years of my life, I lived with two uncles, one of whom was of Austrian origin and the other born on the border with Yugoslavia. They were both over 50 years older than me, so I was always alone and surrounded only by everyday things, plants, and animals. There were those who worked as farmers, doctors, streetcleaners, carpenters … I saw them all and I wondered, for example, “who knows why someone is a carpenter?”. … I got to the point where I asked myself, “Why do I live? What am I capable of doing?” I realized then that I loved doing things with my hands, and I wanted to see. Then I began to get interested in this, and to discover, above all, that all I had in my mind were not images, but “impressions” (for example, I now look at all of you, I see you, but tomorrow I will probably not remember your faces; what I will remember is the feeling I felt, whether there was empathy or not).  

With my brain I see things; for this reason, I became interested in the world of seeing and doing, and I started by going to see, for example, how an old sculptor near my house made the plaster casts for the statues destined for the graves in the cemetery. For me, seeing was the fundamental thing: seeing and knowing – for example, that plaster becomes hot with water, that if clay dries up, it breaks – and so I began to understand what the world of doing is. I kept on living like this – until I did not want to do anything anymore [he laughs], like today, where everything is distorted and exploited, because torturers and cops have taken power. 

HUO: This idea of making is very clear and we will return to it later, talking about your inventions with aluminium. But I wanted to start by imagining building your catalogue raisonné: looking, for example, at the publications of your work, you can see that they often start with the geometric line drawings of the 1950s, and you have mentioned before the constant presence of geometry in your work. Can you tell me about these early works, these drawings that arise from the curiosity of seeing? 

GA: Mine was a series of observations, in general, but always a bit shifted. As a boy, I spent a lot of time in the studio of artisans, and then of architects – much older than me – and I went to take measurements with them and did all those things that intrigued a boy. It sometimes happened that some of them went to paint in the countryside, and painted horses, for example – even if they were actually slightly futuristic horses, like those of Marcello d’Olivo; or of Mimmo Biasi, who instead had a strong interest in vegetables, plants, which then underwent a process of abstraction. 

I have to admit that I did not know what to do, because I did not want to paint what was already there and looked perfect as it was. I wanted to catch something like the threads of light in the sky; I thought that the energy was passing in there – and I wondered how it was able to pass, because I could not see it. Then, at the time, there were the first telephone lines, so I wondered “maybe that’s how rumour travels, will the message stay the same, or be changed, and in what way?” For me, there was mystery in all this: I liked that even more, the mystery, trying to understand these things. Then I became interested in these free geometries, compositions of threads of light that crossed, intersected, overlapped – there were dozens of images in the skies of the countryside.  

However, after doing some curious work on the matter, I quit, because I thought I had exhausted the subject. I have never done things out of duty; I have done them as a game, because I have always had the pleasure of doing, of discovering, of seeing. They were, therefore, limited drawings, since I was about twenty years old at the time and everything I did was for pure pleasure. For example, in that surface [he indicates a painting from the catalogue] there is a black, but when it is hit by the light it becomes white, whiter than any other white, and this was for the light. For me, these were discoveries, thinking that the white which comes out of black is whiter than “true white.” They were conversations with matter, simple non-transcendental questions… and slowly I began to live like this.  

Figure 1 – reflection relief with orthogonal incidence, steel. 1967, 5x480x960 cm, modules 5x80x80 cm

HUO: And after this phase come the “structures.” In this, we see a lot of the world of productive work, more than the world of art. Can you tell me about this epiphany that led you to build the structures, and how you discovered aluminium? 

GA: I had participated in a competition promoted by an electrical material company in Brescia (AVE – ed.), and I had designed a valve which, compared to the previous ones, was very innovative. The prize, announced by Domus, was awarded to the architecture studio, but they told me that whoever designed the valve could go to work for the company that organized the competition, to follow the production phase. So I went to Vestone (a town in the province of Brescia – ed.), where the factory was based, and there I discovered the world of more “committed” work. Because until then, for me, the world had been one of “craftsmanship”; there instead I learned a world of “doing”, with large machines, industrial materials, and many people involved. And there among the little things, I discovered new worlds, from melamine to silver contacts, from castings to presses – because I both followed the execution of this first project of mine and took on the role of graphic designer for the company’s product catalogues. In this context, I found myself for the first time handling aluminium pieces coloured green, red, and yellow – which were basically mirrors. Having seen these perfect mirrors in metal was a surprising innovation. I said to myself, “but how does this mirror work?” Of course, I knew why the mirror reflected, but never had I thought about the fact that a mirror might not be able to break, or even bend.  

Then, in one of these small workshops that I attended in the province of Udine, I went to dig with some cutters under this mirror, to see what was there. Initially it was all black, with a strong smell of sulphur, but I persisted again, and then a blinding light came out, stronger than sunlight! And from there, I understood how important light was, and that this material could accelerate light, just as a lens causes the sun’s rays to burn the ground.  

HUO: You always have a lens and a measuring tape with you, right? 

GA: I have two friends, who are the greatest friends I’ve ever had in life, I always have them with me, and they are the lens and the ruler. They have never betrayed me, they are always calm, safe and make no mistake.  

HUO: This is now where we can talk about the “discovery of light”. The interesting thing is that this research does not initially enter the world of art in Italy, but instead makes a first unexpected appearance passing through Ljubljana and Zagreb. I’m interested in this passage, because when I was a student I met Julije Knifer in Sète, France, where the artist had retired in the 90s, and he talked to me a lot about the Gorgona. You, Getulio Alviani, were there, at the moment of the birth of that movement, so I would like to understand how this meeting of extraordinary characters took place. 

GA: I was very attracted to Eastern [European] countries, because I have a mania for difficult things, those things that others don’t do. Everyone can do the easy things. Going to Paris, for example, was very simple, but going to Yugoslavia was quite another story. Everything was different there, even the smell of the air.  

My motivation was partly due to the fact that these countries were representative of Central Europe, the land that my uncle, who was born in Austria, came from; on the other hand, I was fascinated by this completely different world, then beyond the “curtain” – for example, to get a visa took months, you had to have valid reasons (which in my case were linked to family reasons, since my mother and my aunt were born in places that became Yugoslavia). The roads were different, the people as well … in short, Yugoslavia at the time was another world. Furthermore, I must admit that unlike all other parts of the world, where there was a certain atmosphere of joy and lightness, Yugoslavia was a more introverted, more reflective, more intimate, and poorer land. I like poverty a lot, because in poverty many things can be solved; while in wealth nothing is ever solved – contrary to what today’s rulers think, who aim at riches, their riches, to pretend to solve problems. Problems are solved when there is simplicity and brains, and things are done for the sake of others; while today there is a lot of imbecility combined with wickedness that only causes abuse.  

So, I landed in Slovenia. I had made two small surfaces of milled aluminium, and placed them on a radiator in a small workshop, where they were noticed by Zoran Krzisnik, who came to this workshop to have furniture made. At the time, he was the director of the GAM in Ljubljana, which was very advanced internationally; Ljubljana was the first city beyond the Iron Curtain to want to do innovative things, while elsewhere the situation was very stale. So Zoran Krzisnik saw these two little things, two small plates in fact, and asked me what they were. I wasn’t sure what to tell him, so I told him how I had made them. He asked me if it was possible to make some larger ones, about one metre by one metre, and that if I could he would hold a small exhibition for a small gallery he had in Ljubljana. It was called Mala Galerija, which means precisely that: small gallery. He invited me to visit it, and then organized an exhibition. And some time later, in 1961, I made this presentation, and then learned that in the meantime Krzisnik had curated exhibitions by Zoran Mušič, Giuseppe Santomaso, artists from the Ecole de Paris, and many others. Since then, these works of mine have allowed me to live in Eastern Europe for some time. 

I have continued to have a great love for crossing the border, going beyond: Slovakia, Poland, Lithuania, up to Russia. I learned from Krzisnik that at that time, in Zagreb, there were other young people exhibiting things similar to mine. So I went to Zagreb and set out to find out what was happening, and if the work was like mine. But at the Gradska Galerija I found very different pieces; they had a spirit similar to mine, yet were completely different things, and so I saw the work of Almir Mavignier, Julio Le Parc, François Morellet, Marc Adrian, Ivan Picelj, and Julije Knifer. It was the “New Trends” exhibition, organized for the first time by an artist, Almir Mavignier. There, the whole world opened up for me. Krzisnik was organizing the Biennale of graphics at the time, which was at the forefront of the world of graphics, and therefore many scholars – such as Umbro Apollonio, Giulio Carlo Argan and many others – arrived in Ljubljana. In Udine it would never have happened that the director of the Tate, or of the Moscow museum, or Umberto Eco would turn up. Instead, I met everyone there, in Ljubljana, in a moment, and that world became my second home.  

It was in this context that a young person was listened to for what he was capable of doing, which I thought could never have happened in Italy. For example, the Studentski Centar in Zagreb [The Student Center] was a large experimental centre run by artists and critics, directed by Brano Horwett. There, they invited me to create silk screen works, and so I started to print them – not even knowing what they were exactly, but obtaining surprising results of crossed, overturned, superimposed, negativized, positivized lines. Then, when I came to Milan (where the headquarters of the factory I worked for were) I was able to show this kind of research to Lucio Fontana, and then to Paolo Scheggi, and they too began to work with this technique. Then Brano Horwett came to the Galleria del Deposito to develop all these graphic techniques, which in Italy had never even been thought to exist. We were taken with the fact that serigraphy could be produced in series, and everyone – Max Bill, Richard Paul Lohse, Konrad Wachsmann, Victor Vasarely – explored this field, which was born from [the East]. And this is interesting.  

Figure 2 – cube with graphic texture opalescent pvc sheets, silkscreen and light. 1964-69, 330x330x300 cm

HUO: One of the important aspects in interviews is that of “protesting the forgetfulness that exists in the world”, and there is a character who is rarely talked about today but who is very important: the person who set up the exhibition. The exhibition itself is often forgotten, there is an amnesia in the art world about it. I would like it if you told us a little about Edo Kovačević and what you learned from him. 

GA: I learned everything from him. He was a figurative painter who took care of the installations in the Gradska Galerija in Zagreb; before then I had never thought that my works could be exhibited like this, suspended, supported, and so on. I thought they were simply “squares”. In fact, when I then held an exhibition of mine at Gradska, my works were about twenty “little things”, but he turned them into an eight-room exhibition, making them extraordinary – not through “effects”, as might happen today by focusing lights on them, but simply by placing one work on a background, one on a base, one as a small backdrop: and so with three surfaces, a room was set up.  

Kovačević was very simple and creative, I learned a lot from him – and, in fact, I have never had a work hung on my walls at home. I keep them in the garage, because the works have to be exhibited for a short time, otherwise the eye gets used to them and you can’t see them anymore.  

I look at the works for a short time and then put them aside, to then retrieve them months later and try to understand if they are still valid or not. My impression is that the works must be done for exhibitions, so that they communicate with each other: one must see number one, number two, and understand what they mean as one line. This is what I still do now. On the other hand, I have set up more exhibitions of my colleagues’ work than of mine, because in this way I really discover the works, what they are and what they represent. 

I believe that the works must be kept in the head. I have a collection of works myself, but I never see them. I got them all by making exchanges: Fontana to Bill, Lohse, Albers, Mansurof, to Nelson, Kelly or Anuszkiewicz…  

The first exchange was in the early sixties, with Fontana: he asked me for something, I brought it to him and he said to me: “What do you want [for it]?” and I replied that I did not want anything, but timidly I proposed that he give me one of his works – and so it happened immediately. From then, I received everything through exchange. This then also enabled me to hold exhibitions of those artists, because I had so many works in hand: everything was possible because I had the works, avoiding transport and all the tasks required to make an exhibition that back then seemed insurmountable.  

HUO: All of this leads to your work as a curator. Andrea Bellini, who has been talking to me about your work for many years and is the origin of my research, was insistent that we talk about you as a curator. You are “the” curator of programmed art, and you have also written a lot about your colleagues, so it would be interesting if, after Ljubljana and Zagreb, we now arrive in Italy, with the N Group, and Programmed Art.  

GA: Immediately after the exhibition with Zoran Krzisnik in that small gallery, he asked me to curate a selection of works by our group of artists for the Ljubljana Biennale. So I began to collect works by those I esteemed – because otherwise I would not have had any interest: I wondered if the artist should not exist, but only the work; if it had, as it must have, a meaning and a dignity of its own to exist. And so I curated the Ljubljana Biennale. Later, I spent many years in Venezuela, directing the Jesus Soto Museum.  

HUO: Soto told me about this abandoned museum in Ciudad Bolivar and I would be interested in understanding how an artist experiences a museum in a curatorial sense. What is your vision of that today? 

GA: Exhibitions were held, and in this way I was able to see the cities and meet those who, perhaps because of their age, would not be able to do it in the future. There was always someone who hosted me. Jesús Rafael Soto was a close friend of mine, I often went to stay with him in Paris, or with his fellow Venezuelan, Otero. One day, he told me that he intended to build a large museum, and asked me to collaborate with him by gathering all the artist friends I could. So I did: from Sérgio de Camargo to Toni Costa, to Lucio Fontana, Gianni Colombo and many other good artists. 

I could not go to the inauguration, but then, after a few years, Soto called me and told me that his museum was in ruins: “se lo comiendo el diablo” [the devil is eating it], and asked me to go and see the situation, and give him a hand restoring it. So, during a Holy Week in the 1980s, I went there and saw this museum – designed by Carlos Raúl Villanueva, a good architect and friend of Le Corbusier. The museum consisted of a series of huge pavilions, located in the middle of the savannah. Unfortunately, the situation was terrible; there were bats, snakes inside, the works had been ruined and were mouldy on the walls. There were about forty people who worked there: photographers, guides … and so it was that I lived in Venezuela for four or five years and worked to completely renovate it. 

HUO: Regarding Soto, and other Venezuelan artists who work a lot on the kinetic, there is one thing we haven’t talked about yet, and that is your surfaces. At a certain point, the series of “vibrating texture” surfaces begins. In a conversation with Giacinto di Pietrantonio, you said that it would be nicer to think that “neon has chosen Flavin, mirrors Pistoletto, and aluminium has chosen me”. Why did you switch from aluminium to vibrated surfaces? 

GA: Actually, after having been the art director of an aluminium factory, I had perfect, wonderful machinery at my disposal. I’ve never had a studio; I worked where they were: if, in a particular place, there was a nice factory that produced a nice material, I went there and did something. And so, being in the aluminium industry, I had these perfect tools at my disposal. That’s how it all started. I must admit that I have always done everything by myself, because at the time everything was possible: I was alone in a factory of thousands of square metres, I was alone and I was happy; I liked doing. Today, all of this would be impossible, but back then it was natural to do whatever your brain told you to do.  

HUO: In the book New trends: Notes and memories of kinetic art by a witness and protagonist, you write that the artist “is not the cult of personality, protagonism, commercialization, private galleries, elite art, fetishism, the unique work, the social purpose, the interpretation, the metaphor, the mystification, the strategy […]”. In another text I found you say that “to be called an artist is an offense, one could always speak of artifice, of something new, but I think it is more correct to speak of a plastic creator, a designer, a student of perceptual problems; an artist is synonymous with mystifier”. I would like you to tell me about your “expanded notion of the arts”… 

GA: Since I’m a physicist, I don’t like telling stories. [I don’t like] the word “creator” … lies are “created”; they are very easy to create. To be able to say things, they ought to be verifiable, tangible. If someone tells me “on your surface the light behaves like this”, you can go and see it, and you have the opportunity to see that it is true that it behaves like this. That’s not like someone who throws a stain on the ground, and then that becomes, say, “the intolerability of social life”. They say imagined things! 

Therefore, I love things, and I care that they have the dignity to exist; as for me, I have nothing to do with it; they must have the dignity of existing. Nobody knows who invented reinforced concrete, paper, the first bricks; nobody knows anything, but these objects exist and have been made. Everything has been done, things remain and, fortunately, people leave.  

One of my favourite things is to exhibit colleagues who are better than me; partly out of gratitude, because in this way I make them continue to live, and partly because in this way they have no other influences. For example, when I started collaborating with the museum in Bratislava, an exhibition relationship that lasted about ten years, I exhibited only artists who are gone: Sonia Delaunay, Josef Albers, Lucio Fontana, Bruno Munari, Olle Baertling, Max Bill, all of whom represented something fundamental in the art world through art, and not through words or stories. The stories may be right, but they weaken the function of the eye: we receive 90% of our information through the eye; if I had to put into words what I see in the blink of an eye, I would spend years saying nothing, telling unlikely stories. On the contrary, in a split second, I see everything, and everything is verifiable. One of my passions is synthesis, so it is obvious that I love the eyes. For me the eyes are everything. 

Figure 3 – disk turned steel. 1965

HUO: This is beautiful and could already be a conclusion, but I still have some urgent questions. In fact, when you talk about the synthesis of art, you make me think of Max Bill… 

GA: Max Bill has been a lot, everything, to me. We often saw each other in Zurich or Zumikon or in other parts of the world. We didn’t talk [much], we communicated with synthetic words. But when we talked, the topics were quite another thing [compared to art]. We telephoned on Sundays. I always knew, ten minutes before our call, that I was dumber than I would be afterwards – with regards to everything we talked about, his turtles, the roads, the travels, everything. Because whatever Bill told me, he opened my brain, like Vicks VapoRub. He was my base, his was a total critical force, first of all towards himself: [he believed that] something that was not true had no right to exist.  

HUO: And like Max Bill, who was an artist, architect, and educator with the Ulm school, you too have continued to be a designer, architect… 

GA: Yes, but never as a profession. I have done sets, some residences, a boat, I have dealt with urban planning; but I am not a craftsman, much less able to reap any benefits that were not mental. 

HUO: You have also done graphic design, for example creating [work for] Flash Art. 

GA: [Giancarlo] Politi came to me and showed me a copy of Flash Art, which at the time was innovative, because then there was only Selearte, a magazine that devoted very little space to modern art, just a few quotes. Giancarlo, on the other hand, had made this magazine, which in the first issue had the title in a “football pools” [font]; so, from the second issue onwards, I redesigned the logo for him, all in lowercase Helvetica. Throughout my life, I have made many posters, layouts, catalogues, everything that had to do with graphics.  

HUO: You started making more “immersive” installations, such as those with mirrors, and many environments, so … in a certain sense architecture and setting are synthesised in your work.  

GA: Yes. For example, in this environment [he points to a photo from the book], you literally enter the middle of the colours, but in reality they are not there, the only colours are the fixed ones of the walls. By touching the metal plates that reflect the colours, yellow becomes black, red becomes yellow and everything is mixed and the resulting images are unrepeatable. There are no engines, because I’ve never loved engines. Instead, I love that the brain sets itself in motion. 

HUO: There is also the “tunnel”, which is very nice. Can you tell me about this work? 

GA: Do you know, I saw this work for the first time a couple of years ago, even though it was made about twenty years ago. I went to the place with Mario Pieroni and Giacinto Di Pietrantonio and they told me that they had a series of abandoned spaces. They asked me what I would do with them, and I replied that I would make lines. I made a drawing. They then had a guy make it, who was pretty good at it.  

HUO: You told me before the conference that it’s also important to have fun, and today many artists work on games. You invented a game, in 1964, using aluminium plates, didn’t you? 

GA: It’s a very simple thing. There are two aluminium plates that rest on a surface and then there are two discs which, by reflecting, multiply. Unpredictable images can be generated, but only with the hands. And we are always surprised by what we ourselves do.  

HUO: In my interviews, I often ask what the unrealized project is. There are many categories of unrealized projects, those that are too big, utopian, censored, too expensive… which one is yours? 

GA: I must admit that my restlessness is always animated by what surrounds me. I have never had a studio, much less an assistant, as Karl Gerstner or Enzo Mari or Victor Vasarely or Julio Le Parc or François Morellet may have … although very good, they all have had and still have real businesses, but I did everything by myself – and above all, I did it … for years, and [I don’t do it] anymore because I no longer find pleasure in doing it. 

In 1970, I composed the Manifesto on the “Pneumatic” Space. You will understand that it is absurd that a bus always measures from 100 to 200 cubic metres, both when it is full of people and when it is empty, or that a car occupies 5 square metres both when it is stopped and when it is in movement. Absurd! It is a hallucinatory thing. Although I love the cars on the highways, seeing the city submerged by what I call obscene, ugly, frightening “bagnarole [bathtubs] of tin and stucco” is terrible. Cars must be in motion, because otherwise they wouldn’t be called cars, they’d be called something else. My concern, therefore, lies in trying to minimize the obstruction and presence of the cars when they are not working: this is Pneumatic Space. I dream that the spaces could be pneumatic, transformable, transportable from one place to another. It was the first impression I had from Konrad Wachsmann, whom I frequented in Genoa when he had to design the port (a project that was then given to another person in his stead). Wachsmann had an idea to make the port of Genoa expandable and shrinkable: are the boats coming? It expands. It’s empty? It shrinks. Is there no longer any need for the port? I undo it and take it elsewhere. The pneumatic world, for Wachsmann, is still to come, and I took this position a little from him. I haven’t invented anything; I use things that were already there, and I always give credit to people before me. Bill, Albers, Wachsmann, Gropius; everyone who came before me. … In this way, it is a continuation, because no [new] thing is born without another [that goes before].  

So my future is Pneumatic Space, but to achieve it you need a common will; that is, that everyone is interested. I can make drawings, I have reduced very small spaces to a minimum; you can live in 9 square metres – I have designed a living room for two people which contains everything you need and which is transformable. I like this. In the 60s, I made tables that transform; today we know we can remove gravity, so we won’t even need the table anymore. Back then, the table was the solution; today it is no longer needed.  

HUO: Last question. Rainer Maria Rilke wrote that beautiful text in which he gave advice to a young poet. Today there are many young artists here with us. I am very curious to know what your advice is to a young artist in 2015.  

GA: Know everything that has been done. Develop intelligence, and try to do something that has the dignity of existing, or that is itself useful.  

She [the work] is the centre, you have to think about what she does: and she has her dignity only if she is not a copy, only if you have made sure that she is absolutely new. Not just for a small circle of people who may not know what is around and are amazed. Today there is a great, terrible crisis: ignorance. And here we are in the homeland of this ignorance … we buy obscene, false, ugly, stupid things. Però in fondo, anche se questa cosa qualche anno fa mi disturbava, adesso mi lascia sereno, perché vuol dire che l’ignoranza di quella gente riceve quello che si merita e qui penso proprio “all’arte”, quella che non avrei mai voluto sapere esistere 

(But in the end, even if this thing bothered me a few years ago, now it leaves me calm, because it means that the ignorance of those people receives what they deserve, and here I think about “art”, the one I never wanted to know exists.)1 

Figure 5 – Exhibition ‘Architettura Parametrica e di Ricerca Matematica e Operativa nell’Urbanistica’, Milano, XII Triennale, 1960. View of the exhibition space (Archivio Moretti Magnifico)
Luigi Moretti: The Unity of Algorithmic Language
26/04/2022
algorithmic fitness, Algorithmic Language, critique to empiricism, generative algorithms, Luigi Moretti, parameters, probabilistic outcomes, search space
Marco Vanucci

marco@opensystems-a.com
Add to Issue
Read Article: 9482 Words

“The new art must be based upon science, in particular, upon mathematics, as the most exact, logical, and graphically constructive of the sciences.” Albrecht Dürer

In the newfound spirit that emerged at the end of the Second World War, Rome became the epicentre of a cultural renaissance. Besides the swinging high life impeccably captured by Fellini in La Dolce Vita, the Eternal City shone as a cultural hub, not just attracting actors and filmmakers to Cinecittà but, rather, gathering artists, scientists, philosophers, architects and engineers.

The Valadieresque Piazza del Popolo was one of the epicentres of the city’s cultural life. At number 18, next to Antonio Canova’s studio and in front of Caffe Rosati, home to the literati, were the headquarters of Civiltà delle Macchine, a magazine directed by Leonardo Sinisgalli and house organ of Finmeccanica (an Italian company specialising in aerospace, defence and security), promoting the new technological and scientific zeitgeist. Nearby, in via Sistina, L’Obelisco gallery hosted Giorgio Morandi and Alberto Burri’s shows as well as the first exhibitions in Italy of René Magritte and Robert Rauschenberg. The second wave of La Scuola Romana (or Scuola di via Cavour) was also in full swing: the Caffè Aragno, on via del Corso, and the art gallery Cometa hosted discussions and exhibitions that challenged classicism in favour of new art forms, such as expressionism. The Italian “economic miracle” was thriving under the pressure of industrial development and the prosperous growth of the real estate market. The development of new infrastructure went hand-in-hand with the expansion of the cities through the construction of entire new neighbourhoods for the affluent working class. The deployment of a new apartment block typology, la palazzina [1], stretched far and wide in many parts of Rome and, beyond, across the country. Many notable examples were designed by the protagonists of a new generation of architects and engineers who, while promoting the ideas of modernism, were keen to establish a link between the new city and its architectural history. In the work of Ugo and Amedeo Luccichenti, Vincenzo Monaco, Pier Luigi Nervi, Mario Ridolfi and Luigi Moretti, the formal principles of Mannerism and the Baroque evolved through the use of reinforced concrete. 
They experimented with a new formal approach and often expressed new structural possibilities: the autonomous articulation of the façade, its depth, the expressive qualities of exposed concrete, as well as the daring structural solutions, were some of the characteristics of this new repertoire.

It is within this context, characterised by the productive tension between the innovative language of the modern avant-garde and the tradition of humanism, that Luigi Moretti became a central figure in the cultural landscape of the Italian post-war period, certainly one of its brightest interpreters.

Besides its lively cultural scene, Rome remained a place filled with traditional values, rituals, and multiple contradictions. The Italian novelist and Federico Fellini’s long-time screenwriter, Ennio Flaiano, described Italy as “the country where the shortest line between two points is an arabesque”. The paradox and inconsistencies of the Italian bureaucracy proved daunting to foreigners and newcomers; to Roman citizens, however, they were daily routine. Moretti navigated this intricate context with pleasure and ease. Many traits of his persona reflected this contradictory environment: he was physically imposing but elegant and refined; eloquent, charismatic and capable of attracting strong feelings of love and hatred; extroverted yet reserved, egocentric but generous with youngsters; an artist with a passion for science, coherent and multifaceted; a keen student of human nature with a strong temperament, which made discussions with him difficult and intimidating.

Moretti, however, had an eccentric side to his character. He rode with his chauffeur through the narrow street of Rome in a black and white convertible Chevrolet with bright red upholstery. One of his collaborators recalled that “he would enter the Roman trattoria like a Renaissance prince, … give precise instructions to waiter and chef…[and] unilaterally decide the menu for all”. [2]

Luigi Walter Moretti was born in via Napoleone III, on the Esquiline Hill, in the same apartment where he lived almost his entire life. He was the son of Luigi Rolland (1852-1921), an architect and engineer of Belgian origins. Having graduated in 1929 from the Royal School of Architecture in Rome, while assisting Professor Vincenzo Fasolo at the chair of restoration, Moretti won a scholarship for Roman Studies. He then worked with archaeologist and art historian Corrado Ricci at the Trajan’s Market, not far from via Panisperna, in Rione Monti, where he later established his first studio. Born one year before the publication of Marinetti’s Manifesto of Futurism, [3] Moretti absorbed the futurists’ conviction in the “magnificent and progressive fate” of technological innovation and translated it into his own theory and practice. His intellectual approach reflected the profile of a nineteenth-century polymath, a mixture of positivistic idealism and passion for the opportunities offered by the new technologies. He paired creativity with methodological rigour; he rooted his knowledge in the humanistic tradition, drawing inspiration from the late Renaissance and the Baroque, while cultivating a sensibility for mathematics and science. [4] For him, mathematics was the field of “purest contemplations” and “applicative wonders”, [5] and art was “to make humans rise to contemplation, to a sort of vivid bewilderment”. [6]

A New Humanism

Unlike the Futurists, who saw history as too heavy a burden to carry, Moretti considered the history of art and architecture as primary sustenance. He understood history as a continuum and Modernism as part of this long narrative. Luigi Moretti thought of himself as the epigone of that ‘mathematical humanism’ that flourished between Urbino and Florence in the quattrocento. [7] Seminal figures such as Luca Pacioli and Piero Della Francesca were from San Sepolcro, halfway between the Medici court and the Montefeltro, and each authored treatises on mathematics. Pacioli studied mathematical and artistic proportion, the golden ratio and its application to architecture. He taught mathematics to Leonardo da Vinci who, in turn, drew the illustrations of the regular solids in De Divina Proportione. [8] History has it that Pacioli also introduced Albrecht Dürer to the study of the human body which, in the 20th century, inspired D’Arcy Thompson’s series on the morphogenesis of forms. Piero Della Francesca, on the other hand, was trained in mathematics and wrote three treatises [9], covering subjects such as arithmetic, algebra, geometry, solid geometry and perspective. As a young scholar, Piero visited Florence to study Masaccio’s crucifixion in Santa Maria Novella, whose perspective Brunelleschi had drawn. This collaboration possibly inspired his work for the Madonna di Senigallia, where he sought the collaboration of Bramante to help with the perspective. It is no accident that Piero Della Francesca’s Flagellation of Christ is considered the first ‘scientific’ perspective ever realised. It was also in Urbino that Francesco di Giorgio Martini mastered the art and science of fortifications, designed following the ballistic trajectories of the new firearms technology [10]. In Rome, this tradition spanned from Apollodorus of Damascus to Michelangelo, all the way to Borromini’s divine geometry, where the influence of mathematicians such as Kepler and Leibniz cannot be confirmed but is likely to have played a role. Moretti considered himself to be the incarnation of the baroque spirit. His passion for and study of the Baroque was deeply rooted in the cultural climate in Rome following the First World War, which was the result of a broader re-discovery of baroque architecture, especially by German and Austrian historians [11]. He also had the chance to study with Fasolo and Giovannoni, who were renowned scholars of the Baroque. Moretti considered Michelangelo Buonarroti his spiritual father. 
Particularly interesting are Moretti’s studies of one of Michelangelo’s lesser-known but most emblematic works: the Sforza chapel in Santa Maria Maggiore, which, according to Moretti, was configured as “the fullest expression of [his] architectural genius”, a “living archetype of architecture [in which] the constructive feeling is one with the construction [and where] the material, in every aspect of its nature, is … folded, transformed into a work of art, since … it is ‘felt’ by the architect as something of his own blood”.[12]

In 1964, at the 25th edition of the Venice Film Festival, while Deserto Rosso [13] won the Golden Lion as best movie, the Art Film section (boasting a jury presided over by Giulio Carlo Argan and including Gio Ponti) awarded the 50-minute long Michelangelo [14], directed by Charles Conrad and Luigi Moretti. In the movie, the work of Michelangelo is analysed through a series of unusual shots and points of view on his art and buildings. Moretti explained that “the first purpose [of] it is the right figurative reading of the work, above all to shake from the eyes those thin, abstract and now worn images of Michelangelo’s masterpieces; images [which are] already false in themselves, since photographs [taken] with wide angle [lens … present] images that are almost always impossible in real life. The second purpose … is that of reading according to a true order that illuminates the compositional spirit of the works … [This] is of course the most arduous, and the commentary of the film [is to] try to facilitate it”.[15] In the documentary, Moretti made use of dramatic lighting, in the manner of Caravaggio’s paintings, to accentuate the theatrical atmosphere, and avoided symmetric shots to render the work from an unusual angle. Particularly interesting is his reading of the Cappella Medicea in Florence, where he placed the camera on the ceiling, offering the opportunity to view the compressed interior spaces. Here, the director seems to be influenced by his professor Vincenzo Fasolo, who used to work through axonometric sectional views to unveil the tectonic character and planimetric sequences of space. A similar critical approach would be used by Bruno Zevi, a few years later, to produce the models and the drawings that punctuated Michelangelo’s exhibition at Palazzo delle Esposizioni [16].

Figure 1 – Study on visibility – Studies on visibility for the football stadium (Archivio Moretti Magnifico)
Figure 2 – Study on visibility – Studies on visibility for the tennis stadium (Archivio Moretti Magnifico)

The New Century of Science

Moretti’s work and approach can be understood by examining the cultural context within which he operated and where a new alliance between art and science was being defined.

At the turn of the century, the proliferation of new scientific theories challenged the axioms of modern physics and introduced ideas of complexity and chaos. Babbage’s first programmable calculator, Ada Lovelace’s first computer programs, and Boole’s binary code, together with the dissemination of Hollerith’s punched card tabulating machine, marked the beginning of the new era of mechanized binary code and semiautomatic data processing systems. In 1936, Alan Turing published On Computable Numbers, [17] describing what would become the Turing machine; his later focus on neurology and physiology would eventually pave the way for artificial intelligence. On the back of this experimentation with the first computational machines, multiple applications became possible: fractals, theory of complexity, chaos theory, thermodynamics, neural networks, generative algorithms, etc.

Moretti was also aware of the evolutionary theory of Charles Darwin and, on the pages of the USL Paris Review [18], among a collage of images of Antonelli, Guarini and Botticelli, he laid out images of the morphological evolution of biological specimens taken from D’Arcy Thompson’s On Growth and Form.[19] Moretti’s fascination for biology and natural systems supported his idea that form can be mathematically described and computed, which became a founding principle in his further search for a new aesthetic in architecture and the arts. These scientific breakthroughs deeply influenced Moretti, who was searching for a more objective approach to the problem of architecture and city planning in the context of the post-war reconstruction.

In 1951, in the pages of Civiltà delle Macchine, Sinisgalli synthesised the new spirit [20]:

“Today, science comes to draw the skeleton of a crystal and to identify the weak points of a beam … These surveys beyond the visible, these searches for comparative phenomena in tools and materials, they allowed us to clarify the meaning of certain provisions which only seemed owned [by] the spirit, and are instead virtues of matter. Art must retain control of the truth, and the truth of our times is of a subtle quality, it is a truth that is of an elusive nature, probable more than certain, a truth “on the edge” which borders on the ultimate reasons … Science and Poetry cannot walk on divergent roads. Poets must not have [a] suspicion of contamination. Lucretius, Dante and Goethe drew abundantly [on] the scientific and philosophical culture of their times without clouding their vein. Piero della Francesca, Leonardo and Dürer, Cardano and della Porta and Galilei always … benefited from a very fruitful symbiosis between logic and fantasy.”

Moretti shared the futurists’ political views, which were aligned with the fascist ideology. At the end of his university career, in 1932, he met Renato Ricci, then the president of the Opera Nazionale Balilla [21] (ONB), who appointed him ONB’s technical director, succeeding architect Enrico Del Debbio. In this role, Moretti designed several youth centres in Piacenza, Rome (Trastevere), Trecate, and Urbino. In 1937, he took over the design and masterplan for Foro Mussolini (now renamed Foro Italico), where he created one of his masterpieces, Casa delle Armi (1933), a rationalist structure subverted by the elegant use of curved lines and the masterful control of natural light. In 1938, Moretti participated in the design of the EUR (Esposizione Universale Romana), a planned (but never completed) development in the Southern part of the city, intended to host Rome’s world fair.

In 1942, Moretti disappeared from public life. Once he reappeared, he was briefly imprisoned in 1945 for his collaboration with the regime. In the prison of San Vittore, in Milan, he met Alfonso Fossataro, an entrepreneur and builder with whom he partnered to build several developments right after the war. Fossataro and Moretti established the development company Cofimprese, under which Moretti worked on a series of hotel buildings and realised the Corso Italia complex in Milan. The Il Girasole house, in the Parioli neighbourhood in Rome, belongs to this period (1949) and is considered an early example of postmodern architecture. [22] The Roman palazzina captured the attention of Robert Venturi, who included it in Complexity and Contradiction as an example of ambiguous architecture, halfway between tradition and innovation. In turn, years later, the Swiss architectural theorist Stanislaus von Moos argued that the broken pediment of the Vanna Venturi House is a clear reference to Moretti’s project. [23] In the same period, Moretti designed some villas along the Tyrrhenian coastline: the most famous of which, La Saracena and the nearby La Califfa, are fine examples of mid-century modernism.

During those years, Moretti entertained a relationship with the Roman aristocracy, the cultural elite, and the Vatican. Studio Moretti was in Palazzo Colonna, in Piazza Santi Apostoli, a regal palace in the heart of Rome which housed the famous Galleria Colonna. Prince Colonna occupied the most important secular position in the Vatican, and he constantly received important visitors: from monarchs to cardinals to prime ministers. Moretti’s office overlooked the main cortile of the palace, so that he and his staff (mostly architects and geometri) would enjoy a daily parade of celebrities and authorities, some of whom would become clients.

Figure 3 – Architettura parametrica 1960. Football stadium: Diagrams of the curves identifying optimal lines of sight (Archivio Moretti Magnifico)
Figure 4 – Architettura parametrica 1960. Cinema hall: Diagrams of the curves identifying optimal lines of sight (Archivio Moretti Magnifico)

Spazio

The post-war period was the golden age for Moretti: his architectural production blossomed in the context of a thriving economy which propelled real estate developments across the country. This is also the period of his intellectual maturity, in which Moretti developed his sharpest and most relevant reflections on architectural theory.

Moretti’s reputation with the Roman intelligentsia was compromised by his right-wing political views. Bruno Zevi was probably the one who best understood his talent, but he was also his harshest critic. The world of architecture in Rome was dominated by these two figures, so distant and yet so very close. On the one hand, Zevi: a Jew and a socialist, exiled during the war by Mussolini; an academic historian, an acute scholar and supporter of the Modern Movement with a predilection for Frank Lloyd Wright and Alvar Aalto. On the other hand, Moretti: a conservative Catholic, a supporter of the Fascist regime and an active practitioner banned from academia. They each edited an architectural journal which they used as a means to trumpet their architectural ideas. Zevi was, at one time, Moretti’s best interlocutor and strongest enemy. Despite their rivalry, their relationship could be, at times, relaxed and even civil. What is certain is that they probably shared more than they were ready to publicly admit: Zevi secretly hoped that Moretti would join the Association for Organic Architecture (APAO), a movement founded in 1945 by Zevi himself, Luigi Piccinato, Mario Ridolfi, Pier Luigi Nervi and others, aiming at creating a new school of thought, in open opposition to the reactionary model of the Faculty of Architecture of Rome. Legend has it that Zevi tried to convince Moretti to join APAO, promising to make him the greatest living Italian architect. Moretti refused and was for many years condemned to oblivion by the cultural elite. Despite the antagonism of his many detractors, in 1950, Moretti founded the magazine Spazio, [24] with a clear mission to find connections between different forms of art: from painting to architecture, from sculpture to film and theatre. Spazio burned bright in the Roman intellectual scene and, despite the stigma surrounding Moretti, became a beacon for the visual culture of the time, an elegant cultural project that nobody could dare ignore.

Spazio represents one of the most important moments in Luigi Moretti’s theoretical output. Although the magazine published only seven issues (ceasing publication in 1953), the writings Moretti published in it represent his most relevant critical framework and constitute the heart of his theoretical production and cultural legacy.

Moretti was editor, editorial director and writer of most of the articles in the magazine. The opening editorial of the first issue is titled “Eclettismo e unità di linguaggio” [25] (eclecticism and unity of language) and can be considered Moretti’s programmatic manifesto. The “Unity of Language” was not intended as a fusion of different artistic languages but rather their consonance: Moretti was aware of the differences between artistic languages, and he knew that, despite some emerging points of contact, they remained separate due to their “algorithmic and closed” nature. He used the term algorithmic to describe the tendency of different systems to form the general structure of a building or piece of art. The way, for instance, a particular building deals with the modulation of light, the organisation of space and its bearing was considered by Moretti the algorithmic DNA of that structure. In other words, he conceived of architecture as a “reality of pure interrelations”.[26] Moretti believed that the algorithmic nature of the various artistic languages could finally converge and speak in unison.

“There are some periods of civilization that take shape and character from the splendour of a single language; others, very rare, in which the various expressive languages find harmony (…) and together they reach a dense maturity; they are the happy times of Pericles or of the early Renaissance or of the extraordinary seventeenth century. A unitary language is born, [a] formal process of sorting and classification of the infinite parameters of reality and their relationships. Space thus becomes unitary, resolvable, and expressible in every point, and [a] mirror of a new balanced unity of mankind”. [27] [28]

Then in “Genesi di Forme dalla Figura Umana”,[29] in Spazio’s second issue, Moretti described the role of the human figure in the history of art. While these first two articles for Spazio were concerned with general topics, from the third issue onwards he started to explore more specific aspects that would unveil his operational approach to architecture. In “Forme Astratte nella Scultura Barocca”,[30] Moretti discusses how the non-figurative elements of baroque sculptures present a formal richness that could be subtracted from the composition and yet retain their autonomous aesthetic value as abstract forms. Analysing the Baroque sculptures, he noted that “they reveal some areas of their plastic application resolved in purely formal terms, far from any pre-eminent reference to an objective reality, so that it does not seem arbitrary to know that they belong to the abstract formal world”. A case in point is the sculptural palimpsest accompanying the four figures in Bernini’s Fontana dei Fiumi in Piazza Navona, where the landscape surrounding the human figure retains an autonomous aesthetic value.

The contemporaneity of historical art forms and the relevance of history in the world of today was often questioned and studied by Luigi Moretti. In “Trasfigurazioni di strutture murarie”[31] and “Valori della modanatura”[32] he presented a “close reading” of architectural elements: in the first article he tackles the figurative abstraction of mouldings in Romanic architecture, which he considered to be the most abstract in their pictorial simplicity, and yet very concrete in their constructive logic. Moretti juxtaposed on the same page images of the Duomo di Pisa and Mondrian’s paintings. Signs, traces, geometric textures used in the pictorial compositions become, therefore, precious matrices to compose architectural plans, sections, and elevations. In the second article, Moretti questioned how cornices and profiles could be considered, rather than decorative elements, as pure form, as the only non-figurative elements of architecture that determine its plasticity and volumetric articulation. In “Discontinuità dello Spazio in Caravaggio”[33] and “Spazi-Luce nell’Architettura Religiosa” he continued to explore the role of light in the dynamic articulation of space. He argued that Caravaggio’s figures are always portrayed from the side, never frontal nor symmetrical, deconstructing mass and space through the interplay of light and shadows, with dynamic results. Here, Moretti made a subtle reference to his project for Corso Italia in Milan, where he grafted a cantilevering mass protruding sideways from the urban street front.

Perhaps it is with “Strutture e Sequenze di Spazi”[34] that Moretti produced one of the most relevant critical studies for the culture of his time. In it, Moretti delved into the problem of reading and describing space. If the focus in considering Caravaggio was on perceptive glimpses of space, here the aim was to precisely investigate the relationship between the parts and the whole by studying the sequence of rooms articulated through the compressions and dilations of space. He systematically studied and analysed these aspects through a series of historical examples: Villa Adriana, Guarino Guarini’s church of San Filippo Neri in Casale Monferrato, Laurana’s Palazzo Ducale in Urbino, and many others. For each of these projects, Moretti produced a series of models where the interior space is represented as a volumetric extrusion. With these, he developed an autonomous spatial reading of architecture not dissimilar to what Eisenman developed in the 1960s and 1970s with the study of forms as pure architectural syntax. Alongside the models are a series of drawings and diagrams describing the density of the different spaces. Here, the form, the structure and the space itself are represented as a dynamic tension between the immaterial nature of space and its material representation.

It is, however, in “Struttura come Forma”[35] that Moretti elaborated the relationship between structure and form (critiquing the approach that prioritises form over structure) and, for the first time, spoke of parametric architecture. Starting from the Vitruvian triad (stability, utility, beauty), Moretti argued that, historically, architecture oscillated between prioritising structure (Brunelleschi, Gothic and Roman architecture) or form (Baroque, Renaissance and 19th-century architecture). He then reflected on the direction function>form pursued by the Rationalists and the Bauhaus. He considered “function” as a set of parameters determining space and its concatenation. Either these parameters are very limited, so that space can be deduced with scientific rigour, leading to the realm of pure technique (an extreme case of what he called parametric architecture); or they are multiple and not clearly definable, so that the function is necessarily approximate, and only further articulation of the structure can define it more precisely. Here we return to the structure>form approach, where structure is, once again, understood as a complex set of relationships. The text is accompanied by an illustration by a young architect, Guido Figus, who worked on an iterative series of roof structures articulated through folded plates resembling origami. Figus’ drawings are fascinating: rather than proposing an optimum solution, they explore a series of possible (parametric) permutations of the structure.

Figure 5 – Exhibition ‘Architettura Parametrica e di Ricerca Matematica e Operativa nell’Urbanistica’, Milano, XII Triennale, 1960. View of the exhibition space (Archivio Moretti Magnifico)
Figure 6 – Exhibition ‘Architettura Parametrica e di Ricerca Matematica e Operativa nell’Urbanistica’, Milano, XII Triennale, 1960. View of the exhibition space (Archivio Moretti Magnifico)

An Other Art

The movement initiated with Spazio continued after the magazine ceased publication. On 26 June 1954, in via Cadone, Rome, Galleria Spazio opened its doors with a first exhibition titled Caratteri della Pittura d’Oggi (Characters of Today’s Painting). The gallery was established through a collaboration between Luigi Moretti and the French art critic Michel Tapié de Celeyran. Jazz musician, curator, art critic and all-round cultural agitator, Tapié entertained close relationships with art galleries across Europe and North America that allowed him to promote and showcase his roster of artists. He was also the author of Un Art Autre,[36] a compendium on a “new art” of signs and matter, in which he promoted and gave wide currency to Tachisme, the French style of abstract painting popular in the 1940s and 1950s. The movement developed as a reaction to Cubism and was characterised by informality and an absence of premeditated structure, conception or approach (sans cérémonie).

The turning point in Tapié’s career was his friendship with the artist Georges Mathieu. This would soon lead to his meeting with Moretti, through the Roman artist Giuseppe Capogrossi, whose large canvases filled with cryptic glyphs and dynamic forms hung throughout Moretti’s studio and served as an inspiration for his architecture.[37]

Moretti was seduced by Tapié; he understood his great potential and, with him, seized the opportunity to promote contemporary art, pursuing the unity of languages and his eclectic vision. Under Moretti’s directorship, the art critic became “artistic consultant” of the Spazio gallery. Among the large group of artists selected for the gallery’s first exhibition were some on the brink of international acclaim: Pollock, Francis and Tobey from the United States; Capogrossi and Dova from Italy; Appel and Jorn who, with Wols, formed the CoBrA Group; and Mathieu and Riopelle from France. In the catalogue of the exhibition Moretti wrote:[38] “The intensity, the splendour, the explosion of signs given to the surfaces, the brightness and power of relations, the pure relations these signs compose, are its justification”. He also wrote of “the dramatic beauty, the desperate egoism of these adventurous facts that today occur in art”.

Here, Moretti claimed that painting was of importance only to itself, “only tied to the personal algorithm, to the personality of the artist”. The joint venture between Moretti and Tapié, together with artists such as Mathieu and Capogrossi, represented a clear attempt to find new aesthetic and philosophical ways to make art and science converge.

In 1954, in the pages of the US Lines Paris Review,[39] Tapié claimed:

It is time to reconsider the notion of rhythm, no longer by way of the only possible system of whole numbers, but rather by way of real and hypercomplex numbers; the notion of structure, no longer bound irrevocably to the ruler and compass, but to the richer and more general notions of continuity and contingency of present topology, within which classical geometry is now only an extremely specialised little chapter; the notion of content, no longer as a more or less theatrical subject-pretext, but as complying with the norms of scientific psychoanalysis; the notion of space and composition, no longer tied to a static formalistic logic and to an “equilibrium” of the same order, but rather to Galois’ Theory of Groups, to Cantor’s Theory of Sets, to the present metalogic and to Lupasco’s dynamic logic of the contradictory.

Moretti and Tapié would often wander through the streets of Rome searching for artists and “new voices”. Among them was the artist Carla Accardi who, years later, recalled visiting Villa Saracena in Santa Marinella with Moretti, Tapié and the American artist Claire Falkenstein, who was commissioned to design the villa’s gate.

The Roman architect and the French critic shared a common vision and a commitment to evolving the artistic language. After Spazio, they continued to collaborate for many years, far beyond the closure of the gallery, each following his own artistic language but sharing a precise vision: the critic called it Morfologie Autre, while the architect referred to Strutture di Insiemi, a term that Moretti borrowed from the study of Galois’ theory of groups.[40] In 1960, they co-founded the International Centre of Aesthetic Research in Turin, Italy, a facility for the study and exhibition of art, as well as for the publication and dissemination of critical, investigative and theoretical works on art.

In 1965, they co-authored the book Le Baroque Generalisé: Manifeste du Baroque Ensembliste,[41] a beautiful and rare publication in which the language of the Baroque is articulated through mathematical formulas. The book synthesises Moretti’s fascination with a more scientific approach to architecture and his love for art, the Baroque and the unity of language.

Moretti continued to foster collaboration and intellectual exchange beyond Tapié. One such association was with the French poet Pierre Pascal, son of the chemist Paul Pascal, an anti-Gaullist and collaborator with the Vichy government, sentenced in absentia to life imprisonment. Pascal left France in 1944 and took refuge in Italy, where Mussolini initially offered him hospitality at the Vittoriale on Lake Garda before he later moved to Rome. There, he found accommodation at Palazzo Caetani, which became the seat of the Éditions du Cœur Fidèle, a publishing company that Pascal co-founded with Moretti. The Cœur Fidèle would publish a forest of hendecasyllabic and alexandrine verse and rhythmic prose: from the Persian quatrains of Omar Khayyam to Poe’s Le Corbeau (deciphered in its arithmetic, geometric and gematric keys), from the Livre de Job to the Apocalypse of St. John.[42] The last is certainly the most significant: an interpretation in French alexandrines, with sixteen prints of Albrecht Dürer’s Apocalypsis cum figuris[43] taken from the original woodcuts used for the prints of 1498 and 1511. The book is of exquisite quality and represents the apex of Moretti’s erudition, which borders on esotericism, a testament to his belief that his intellectual work was rooted in the line drawn by the great masters of the past.

Ricerca Operativa

Moretti’s passion for science and mathematics led to a friendship with the engineer and mathematician Bruno De Finetti. They may have first met in Via Panisperna, in Rome, where Moretti, as a young graduate of the school of architecture, opened his studio, and where De Finetti, enfant prodige and graduate in applied mathematics from the University of Milan, attended the seminars at the Institute of Statistics. At the time, Enrico Fermi was there leading the ‘Panisperna boys’: Edoardo Amaldi, Ettore Majorana, Bruno Pontecorvo, Franco Rasetti and Emilio Segrè,[44] a group of bright young scientists who opened the door to the nuclear reaction and, later, to the atomic bomb.

Before collaborating with Moretti, De Finetti had been involved in studies on the economic viability of construction. In the magazine La Città,[45] the architect Giuseppe De Finetti (Bruno’s cousin) invited him to develop a mathematical approach in which, through a series of formulas establishing a relationship between land value, cost of construction and rental value, they calculated the optimum composition of a building. Such an approach would be further investigated by De Finetti in his collaboration with Moretti. Having spent many years at the University of Trieste, De Finetti arrived in Rome in 1954 as professor of Mathematics at the Faculty of Economics. He was one of the first scholars to lecture on Ricerca Operativa (operational research),[46] a branch of applied mathematics which was then making its way into Italian academia and intellectual life. It consists of analysing and resolving complex decision problems through the development of mathematical models and quantitative methods (simulation, optimization, etc.) that provide supporting insights for the decision-making process. It is worth noting that, around the same period and with different purposes, Bruno Zevi was elaborating his theory of Critica Operativa,[47] a pedagogic and cultural enterprise which aimed to create a bridge between history and modern architecture. Zevi advocated the actualisation of the immutable characteristics of historical architecture, read and reinterpreted in a contemporary key.[48]
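The formulas from La Città are not reproduced in the article, but the structure of the calculation can be sketched. In the hypothetical model below, every number and functional form is an invented assumption, not De Finetti's: land value, construction cost and rental value jointly determine an optimum number of storeys, found by brute-force search in the operational-research manner.

```python
# Hypothetical sketch of an "optimum composition" calculation in the spirit
# of De Finetti's approach: all figures and functions are invented
# assumptions for illustration, not formulas from La Città.

def annual_profit(floors, land_cost=2_000_000, rent_per_floor=90_000,
                  base_floor_cost=400_000, cost_growth=1.08,
                  cost_of_capital=0.05):
    """Net annual return of a building with `floors` storeys on a fixed lot.

    Cost per storey grows with height (cost_growth > 1), while rent per
    storey stays flat, so the return peaks at a finite height.
    """
    build_cost = sum(base_floor_cost * cost_growth ** i for i in range(floors))
    invested = land_cost + build_cost
    return rent_per_floor * floors - cost_of_capital * invested

# Brute-force search for the most profitable height.
best = max(range(1, 41), key=annual_profit)
```

Under these assumed numbers the marginal floor stops paying for itself around the twentieth storey; the point is only that the optimum falls out of the interplay of parameters, exactly the kind of quantified trade-off IRMOU would later pursue.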

The problem of establishing a link between theory and practice, between thinking and making, was clearly a defining trait of the Italian culture in the post-war era.

During those years, Moretti was developing his studies on parametric architecture, an approach that consisted in applying mathematical theory to architecture and urbanism. Moretti, however, wanted to go beyond the declaration of theoretical principles: he asked De Finetti to bring his collaboration to this new field of research and, in 1957, they became respectively president and vice-president of the newly founded Institute of Mathematical and Operations Research for Urbanism (IRMOU). With them were a group of young mathematicians, architects and engineers: Anna Cuzzer (then married to Paolo Portoghesi), Giovanni Cordella, and Cristoforo Sergio Bertuglia. Moretti’s idea was to apply a more scientific approach to the challenges of post-war reconstruction in Italy. IRMOU, in turn, aimed to employ mathematical and statistical methodologies to provide solutions that were quantitatively and qualitatively more effective for a truly modern country. Bruno De Finetti played a particularly important role, not just as a prestigious scholar but also because he introduced the Institute to the use of computing machines, such as the IBM 610, a fixed-point decimal electronic calculator used for probabilistic computation. De Finetti purchased the machine for the University and installed it in via Ripetta, establishing the institution’s first computing centre.

At the time, Moretti was involved in some of the most important commissions of his career. In 1958 he led the team charged with creating the new Olympic Village for the XVII Olympics in Rome (1960).[49] Between 1960 and 1966, following up on the masterplan developed for the Olympics, together with Cafiero, Guidi and Libera, Moretti designed and built the housing project Quartiere INCIS Decima, where the buildings were arranged following the Roman castrum.

Abroad, Moretti built the Watergate Complex in Washington (which would become infamous in the wake of the 1972 political scandal) and Montreal’s Stock Exchange Tower, both projects commissioned by the real-estate company Società Generale Immobiliare.

In 1968, he was commissioned to design a sanctuary at Tabgha, on Lake Tiberias in Israel. The project was approved by the Vatican but was never built due to the outbreak of war between Israel and Palestine. Moretti also had commissions in Kuwait (including the headquarters of the Bedouin Engineers’ Club and the Bedouin Houses) and in Algeria (the Hotel El Aurassi, the Club des Pins and a series of schools and residential projects).

Moretti was also involved in the new masterplan for the city of Rome and, with IRMOU, carried out studies to analyse and alleviate traffic in the capital. These projects led to the plan for the new subway branch Termini-Risorgimento, which culminated in the realisation of the Pietro Nenni bridge over the Tiber, as well as the new car park under Villa Borghese, which opened in 1973. Around the same period, he also realised the project for the Thermal Baths in Fiuggi, where he mastered the use of reinforced concrete.

Figure 7 – Study on Borromini: Sant’Ivo alla Sapienza, Rome, 1967. Spatial interpretation: juxtaposition of interior and exterior space. (Archivio Moretti Magnifico)
Figure 8 – Study on Borromini: San Carlino alle Quattro Fontane, Rome, 1967. Spatial interpretation: juxtaposition of interior and exterior space. (Archivio Moretti Magnifico)

Architettura Parametrica

Having spent some twenty years searching for a new relationship between architecture and mathematics, Luigi Moretti was invited to the 1960 Milan Triennale to present the work and studies carried out with IRMOU on Parametric Architecture. While IRMOU’s work mostly focused on urbanism (urban planning, urban flows, etc.), for the exhibition at the Triennale Moretti developed parametric studies of sport and leisure facilities: a football stadium, an aquatic centre, a tennis arena and a cinema. At the time, football stadiums and sports arenas in general were relatively new typologies. In addition, unlike many of today’s venues, they were mono-functional. For this reason, stadia were the perfect typology in which to establish parametric relationships between different components: the position of the spectators in relation to the goals, the sightlines between every seat and different areas of the pitch, and so on. Moretti and his collaborators elaborated mathematical formulas to describe these dependencies. The mathematical models produced data points representing the optimum viewing areas of the stadium, elaborated using an IBM 610 Auto-Point computer.
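Moretti's own formulas are not given in the article. As a flavour of the seat-to-pitch dependencies he describes, here is a minimal sketch using the standard sightline ("C-value") recurrence from stand design; the focal point, row depth, eye height and clearance are invented dimensions, not Moretti's parameters.

```python
# Illustrative sketch only: a constant-clearance sightline recurrence of the
# kind used to profile stadium stands. All dimensions are assumed values.

def bowl_profile(rows, d0, h0, tread, clearance):
    """Eye heights (metres above the pitch-side focal point) for each row,
    such that every sightline passes `clearance` above the eye in front.

    d0:    horizontal distance from the first row's eye to the focal point
    h0:    first row's eye height above the focal point
    tread: row depth
    """
    heights = [h0]
    d = d0
    for _ in range(rows - 1):
        # The next eye, at distance d + tread, must see the focal point over
        # the current eye with the given clearance:
        #   h_next * d / (d + tread) = h_current + clearance
        heights.append((heights[-1] + clearance) * (d + tread) / d)
        d += tread
    return heights

profile = bowl_profile(rows=20, d0=6.0, h0=1.2, tread=0.8, clearance=0.09)
risers = [b - a for a, b in zip(profile, profile[1:])]
# Risers increase towards the back, producing the curved rake of a stand.
```

Varying the distance to the focal point or the clearance regenerates the whole section, which is the sense in which such a stand is "parametric": the form is the output of the stated dependencies rather than a drawn shape.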

Moretti explained the “necessity to formulate new logical chains aimed at identifying new architectural forms and their concatenation, dependent on various and complex functions”.[50] For Moretti, “each logical area that makes up the sequence of this new formulation of architectural thought must be the receptor and projective of mathematical thought, that is to say, it needs to be quantifiable … The solution is based on the determination of the elements conditioning the forms as a consequence of the functions that are required of it. That is to say: solutions based on qualifiable parameters, parameters that, one by one and in their quantifiable interrelation, fix the limits within which we identify and draw the forms that fulfil those functions”. And again, “the definition of the parameters must be called upon to assist the techniques and instruments of the most current scientific thought; mathematical logic, operational research and computers. To the study of this approach and to the new method and theory specified in its schemes and verified by the first exciting results, I gave the name of Parametric Architecture”. Moretti elaborated his parametric manifesto on the pages of Moebius magazine, in an axiomatic text which established the heuristic principles of parametric architecture.[51]

Bruno Zevi was intrigued by this new approach. However, confirming his opposition to Moretti, he was far from convinced. Following the opening of the exhibition, Zevi wrote a sceptical review in the pages of L’Architettura Cronaca e Storia:

“Everything that serves to give us distance from empiricism and rationalism in design should be applauded. Especially in a moment like the current one in which the characteristic of the [working method] of most Italian architects is careless … A parametric method encompasses the tools, procedures, and objectives, but to what end? For these questions, electronic brains are barely useful, brains are needed. If parametric architecture is not to remain a brilliant intellectual exercise, it is indispensable that research is sustained by a high moral inspiration. For now, the idea surprises and fascinates us; tomorrow, it may convince”. [52]

Here, Zevi aired a certain dissatisfaction with the unfulfilled promises of parametric architecture; a scepticism that, despite the great advances in parametric and algorithmic design, many still share today.

Luigi Moretti was, however, aware of the “high moral inspiration” required to pursue the new course of architecture. In a lecture at the Accademia Nazionale di San Luca in 1964, he claimed that “the new basic meaning” of making architecture must be identified with the “genius of a new morality, of an interior commitment to working in accordance with justice, in a superior economy, for our fellow men. This imposes a dedication, a seriousness in research and investigations and, above all, an underlying humility”.[53]

Figure 9 – Spazio, n. 7, Rome, December 1952 – April 1953. Michelangelo. Model of the church of S. Giovanni dei Fiorentini in Rome. Representation of the internal volumes (Archivio Moretti Magnifico)
Figure 10 – Spazio, n. 7, Rome, December 1952 – April 1953, from ‘Strutture e sequenze di spazi’, article by Luigi Moretti. Model of Guarino Guarini’s church of S. Filippo Neri in Casale Monferrato. Representation of the internal volumes (Archivio Moretti Magnifico)

Epilogue

Moretti passed away suddenly in 1973. In his obituary, Zevi didn’t spare words of either admiration or criticism for his beloved enemy: “He possessed an authentic artistic temperament integrated with a notable if non-methodical culture and an extraordinary professional capacity. He could have assumed a determining role in the depressed Italian atmosphere; but a spasmodic desire for individual affirmation associated with an intellectualism like that of D’Annunzio, greedy for refinements and luxuries, reduced his creativity to insufferable conventionality. A waste in civil and human terms”.[54]

Moretti remained a controversial figure for many years after his passing. His legacy was long ignored or undervalued. However, much of the research and many of the questions raised by Moretti during his architectural life remained relevant and some still haunt architects today. What is the role of history in designing the city of today? What is the relationship between architects and technology? Is technology merely a tool to make or also a tool to think?

Moretti was aware of the need not to parametrise all things; he warned against “the dictatorship of the algorithm”. The Roman architect knew that his research was still far from governing complex phenomena with suitably complex algorithms. He knew that architects “will have to educate the mind to scientific rigor knowing how to leave [their] imagination and expressive freedom intact, since free formal expression, personal lyricism, will always find a place in the spaces that the parametric functions will leave free”.[55]

One year before his death, Luigi Moretti offered an interesting insight. In this brief excerpt from a conference titled “Technology and the ecological problem”,[56] he warned against the uncritical endorsement of new technologies, exposing the limits of his own thinking. While he seemed to have no doubt regarding the computational turn in architecture, he distanced himself from any technocratic orthodoxy.

The authentic humanism in ancient civilization … was indeed a synthesis and integral consciousness of abstract thought … It is with the Enlightenment that an approximate rationality has entered, the production of algorithmic thought as something absolutely proper, acceptable, indeed dutiful and characteristic of man. … The whole critical situation of today’s world, from ecology to ethics, economy, politics, religion and spirituality is the result of two errors … Precisely:

1) the logic of algorithmic developments without limits;

2) [the validity of] this logic …, whatever the dimensions of the empirical field on which it operates.

Technologies produce mechanisms [that are] expressions of particular logical chains, dependent [on] or aroused by other logical chains. … Everyone now feels that it is not possible to continue with them indefinitely. This is obvious; … in the laws of technological development there is a need for a limit. … There is an asymptotic point for any technology beyond which it is in vain, it is foolish to proceed. … The limit of a technology is always inherent in it; it is equivalent to its death and death is an inseparable moment of the vital process in every organism …: we take logic and its algorithmic developments as valid whatever the dimensions of the empirical field on which they operate. This is false: the logical structures are NOT valid for every dimension of the field on which they operate.

When I was preparing the exhibition of parametric architecture, which had this statement as a conducting background, Prof. De Finetti, one of the most acute intellects in today’s world, suggested to me as a slogan and introduction a stupendous passage by Galileo, which roughly says: “if you want to make an animal fifty times bigger you will not have to enlarge the bones and structures fifty times, you will have to change material and study another completely different structure, otherwise you will make a fantasy monster” …

Now, in today’s world, the dimensions are enormously changed; … we continue to use concepts and logic, in the empirical life of our global community … and mustn’t the exceptional dimension of our empirical world lead to a completely new formation of knowledge (of thought)? How can we have logical chains that conclude with certainty, like a good old syllogism? As we know, they will be only probable conclusions and consequent statistically verifiable situations. This concept of truth according to probability and statistics has for some time now come alive in every beat of our thought.[57]

On the one hand, he warns against the application of algorithmic processes to all dimensions of knowledge, establishing boundaries between what can be known through algorithms and what should be left in the hands of the architect. On the other hand, the critique of empiricism leads Moretti to affirm a new form of scientific thought that advances by probabilistic attempts rather than by absolute truths. Thus, not unlike the logic of generative algorithms, Moretti understood that, in the new world, the algorithmic fitness of different parameters is to be found within the boundaries of a “search space” where truth constantly fluctuates and, far from being univocal, has multiple probabilistic outcomes.
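That reading maps neatly onto what is now called stochastic search. A minimal sketch, in which the two-parameter fitness function is an arbitrary stand-in rather than anything of Moretti's: candidate solutions are sampled from a bounded search space and the "best" is merely the most probable outcome of the sampling, not a single derived truth.

```python
import random

def fitness(x, y):
    # Arbitrary stand-in objective: a single peak at the point (0.3, 0.7).
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)

def random_search(trials=5000, seed=1):
    """Sample the unit square and keep the best-scoring candidate."""
    rng = random.Random(seed)
    best_point, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = (rng.random(), rng.random())  # sample the search space
        score = fitness(*candidate)
        if score > best_score:
            best_point, best_score = candidate, score
    return best_point, best_score

point, score = random_search()
# A different seed gives a different, merely probable, near-optimum: truth
# as a statistically verifiable neighbourhood rather than a single point.
```

With enough samples the result lands reliably near the peak, but never exactly on it; the answer is a probabilistic region of the search space, in the sense sketched in the paragraph above.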

References

1 “Palazzina. This term, which came into use in the Renaissance as a term of endearment for palazzo, originally designated small buildings located within parks and gardens intended to offer asylum during parties and hunting parties … La Palazzina … thus began its disruptive parable towards the city in the 1920s, replacing the continuous fabric typical of the ancient city [with] a discontinuous fabric in which the building volumes are placed side by side without any formal relationship connecting them, divided only by a thin strip of green, usually divided by the high walls erected on the boundaries of the lots.” (P. Portoghesi, The Angel of History, [Bari: Laterza, 1982])

2 Adrian Sheppard, “Luigi Moretti: a testimony” (Montreal: 2008)

3 Marinetti wrote the manifesto in the autumn of 1908 and it first appeared as a preface to a volume of his poems, published in Milan in January 1909. It was published in the Italian newspaper Gazzetta dell’Emilia in Bologna on 5 February 1909, then in French as Manifeste du futurisme (Manifesto of Futurism) in the newspaper Le Figaro on 20 February 1909. Luigi Moretti was born in Rome on 2 January 1907.

4 “To develop a complete mind: Study the science of art; Study the art of science. Learn how to see. Realize that everything connects to everything else.” Leonardo Da Vinci

5 B. Baldi, Le Vite de’ Matematici, 1587–1595, cit. in F. Abbri, E. Bellone, W. Bernardi, U. Bottazzini, P. Rossi (eds), Storia della Scienza Moderna e Contemporanea. Dalla Rivoluzione Scientifica all’Età dei Lumi 1, 136, TEA, 2000.

6 Luigi Moretti, Forme Astratte Nella Scultura Barocca, Spazio n.3, 20, October 1950

7 Andre Chastel introduced the notion of “mathematical humanism” in his book Centri del Rinascimento: Arte italiana 1460-1500 (Milan: Feltrinelli, 1965). Chastel identifies three strands of humanism and specifies that the mathematical one “finds its most important base in Urbino” (41), noting that “the case of Luca Pacioli is not isolated: on the contrary, it well represents the intellectual environment of the quattrocento, an environment in which theory and practice walk hand in hand without, however, adapting themselves to one another perfectly” (47, 49).

8 Luca Pacioli, De Divina Proportione, Aboca Museum, San Sepolcro, 2009

9 Trattato d’Abaco (Abacus Treatise), De quinque corporibus regularibus (On the Five Regular Solids) and De Prospectiva pingendi (On Perspective in painting).

10 Scaglia, Gustina, Francesco Di Giorgio: Checklist and History of Manuscripts and Drawings in Autographs and Copies from Ca. 1470 to 1687 and Renewed Copies, Lehigh Univ Pr, 1992

11 Literary works of architectural history such as Der Cicerone by Jacob Burckhardt (1855), Studien zur Architekturgeschichte des 17. und 18. Jahrhunderts by Robert Dohme (1878), Renaissance und Barock by Heinrich Wölfflin (1888), and Barock und Rokoko by August Schmarsow (1897) prepared the ground; added to them at the beginning of the twentieth century were Michelangelo als Architekt by Heinrich von Geymüller (1904) and Die Entstehung der Barockkunst in Rom by Alois Riegl (1908). In the aftermath of the Great War came Michelangelo-Studien by Dagobert Frey (1920) and the volume on Borromini by Eberhard Hempel (1924).

12 L. Moretti, op. cit., in Casabella LXX (2006), 78–79.

13 Red Desert, director M. Antonioni, written by M. Antonioni, T. Guerra, starring M. Vitti, R. Harris, C. Chionetti, Italy, 1964.

14 Michelangelo: The Man with Four Souls, directors: L. Moretti, C. Conrad, Italy, 1964.

15 L. Moretti and Charles Conrad, presentation at the première of the movie ‘Michelangelo’ at Circolo del P Greco, Rome, Hotel Hilton, 14 July 1964 (Archivio Moretti Magnifico).

16 P. Portoghesi, B. Zevi (eds.), Michelangiolo architetto (Torino: Einaudi, 1964), with Giulio Carlo Argan, Franco Barbieri, Aldo Bertini, Sergio Bettini, Renato Bonelli, Decio Gioseffi, Roberto Pane, Paolo Portoghesi, Bruno Zevi, and Lionello Puppi.

17 A. M. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem, proceedings of the London Mathematical Society 1937

18 A.Imperiale, “An ‘Other’ aesthetic: Moretti’s Parametric Architecture”, Log 44 (2018)

19 D’Arcy Thompson, On Growth and Form, Cambridge University Press, 1917

20 L. Sinisgalli, “Natura, Calcolo, Fantasia”, Pirelli 3 (1951) 54-55.

21 Opera Nazionale Balilla (ONB) was an Italian Fascist youth organization functioning between 1926 and 1937, when it was absorbed into the Gioventù Italiana del Littorio (GIL), a youth section of the National Fascist Party.

22 Robert Venturi, Complexity and Contradiction in Architecture, The Museum of Modern Art, New York, 1966.

23 Stanislaus von Moos, Venturi, Rauch & Scott Brown: Buildings and Projects (New York: Rizzoli, 1987), 244–246.

24 Spazio made its debut in July 1950 as a grandiose project, combining typographic quality, quality of contributors, investment (editorial offices in Milan, Rome, and later Florence and Paris) and international reach (abstracts in English, French and Spanish). The director’s writings are numerous and of absolute importance. The editor-in-chief, Agnoldomenico Pica, was the author of several texts and was flanked by recurring collaborators: Umberto Bernasconi, Angelo Canevari, Gino Severini, Sisto Villa, Ugo Diamare. Over the course of seven issues the magazine promoted artists and architects such as Carlo Mollino, Giuseppe Capogrossi, Alberto Burri, Renzo Zanella, Antonio Gaudí, Adalberto Libera, Ugo Carrà, Vico Magistretti, Carlo De Carli, Ettore Sottsass, Atanasio Soldati, Gianni Monnet, Vittoriano Viganò, Franco Albini, Carlo Pagani and Luciano Baldessari. The layout was masterful, governed with skilful technique, taste and originality by the director himself.

25 L. Moretti, “Eclettismo e Unità dei Linguaggi”, Spazio 1 (1950).

26 “For me personally, the search for this secret fabric as a link between the various elements of a work, which renders, or attempts to render, the single forms as parts interrelated with the others, in a consciously inseparable fabric, is the habitual way of considering a work, descending above all from the eighteen pages of Galois that opened the new objective world to us as a reality of pure interrelations”. “Ultime Testimonianze di Giuseppe Vaccaro”, L’Architettura Cronaca e Storia, 201 (1972).

28 L. Moretti, “Eclettismo e Unità dei Linguaggi”, Spazio 1 (1950).

29 L.Moretti, “Genesi di Forme dalla Figura Umana”, Spazio 2 (1950).

30 L.Moretti, “Forme Astratte nella Scultura Barocca”, Spazio 3 (1950).

31 L.Moretti, “Trasfigurazioni di strutture murarie”, Spazio 4 (1951).

32 L.Moretti, “Valori della modanatura”, Spazio 6 (1952).

33 L.Moretti, “Discontinuità dello Spazio in Caravaggio”, Spazio 5 (1951).

34 L. Moretti, “Strutture e sequenze di spazi”, Spazio 7 (1953).

35 L.Moretti, “Struttura come Forma”, Spazio 6 (1952)

36 M. Tapié, Un Art Autre: où il s’agit de nouveaux dévidages du réel (Paris: 1952).

37 In the article “Structure comme forme”, published in the United States Lines Paris Review, Moretti defines the mathematical equivalent of what he sees in Capogrossi’s paintings as the theory of differences, which he develops into a method for designing dynamic architectural forms.

42 Pierre Pascal (ed.), Apokalypsis Ioannoy ou la Révélation de Notre Seigneur Jésus-Christ à Saint Jean, more often titled Apocalypsis Iesu Xristi, paraphrased for the first time in Alexandrian verse by Pierre Pascal (Rome: A l’enseigne du Coeur Fidèle, 1963).

43 The Apocalypse (Latin: Apocalipsis cum figuris) is a series of fifteen woodcuts by Albrecht Dürer published in 1498, depicting various scenes from the Book of Revelation, which rapidly brought him fame across Europe.

44 The Via Panisperna boys (Italian: I ragazzi di Via Panisperna) were a group of young scientists led by physicist Enrico Fermi. In Rome, in 1934, they made the famous discovery of slow neutrons, which later made possible the nuclear reactor and subsequently the construction of the first atomic bomb.

45 The magazine La Città: Architettura e Politica was founded and directed by Giuseppe De Finetti in 1945. Only four issues were published, between 1945 and 1946. The aim was to discuss “the study of the future city”. The magazine mainly discussed the problems of reconstruction, the fate of the cities destroyed by the two wars, and the problems of traffic: “the task of rebuilding the city, of giving it back its usefulness and beauty”.

46 B. De Finetti, “Gli strumenti calcolatori nella Ricerca Operativa”, Civiltà delle Macchine, 5, 1 (1957), 18–21.

47 B. Zevi expounded his ideas regarding the relationship between architectural history and contemporary design in the opening lecture of the academic year, held in the Aula Magna of the Rectorate of the University of Rome on 18 December 1963.

48 In addition to Moretti, the team for the new Olympic Village in Rome was formed by Vittorio Cafiero, Adalberto Libera, Amedeo Luccichenti and Vincenzo Monaco.

49 L. Moretti, “Ricerca Matematica in Architettura e Urbanistica”, letter to Giulio Roisecco, director of Moebius magazine.

50 L. Moretti, Moebius, IV, 1 (1971), 30–53.

51 B. Zevi, “Cervelli Elettronici? No, Macchine Calcolatrici”, L’Architettura Cronaca e Storia VI, 62 (1960), 508–509 (translation A. Imperiale).

52 L. Moretti, “Significato attuale della dizione Architettura”, in Spazio, Fascicoli (1964). See also: Luigi Moretti, “L’Applicazione dei metodi della Ricerca Operativa nel campo dell’urbanistica”, in Spazio, Fascicoli (1960); Luigi Moretti, “Strumentazione scientifica per l’urbanistica”, in: Cultura e realizzazioni urbanistiche, Convergenze e divergenze, conference proceedings, held at Fondazione Aldo Della Rocca, Campidoglio, Consiglio Nazionale delle Ricerche (Rome: 1965).

53 B. Zevi, “Computer inceppato dal dannunzianesimo”, L’Espresso (July 29, 1973), reprinted in Cronache di Architettura 2, 982 (Bari: Laterza, 1979), 145.

54 L. Moretti, “Architecture 1965: Évolution ou Révolution”, L’Architecture d’Aujourd’hui, 119 (1965), 48.

55 “Tecnologia e problema ecologico”, round table with the participation of V. Bettini, S. Lombardini, L. Moretti and P. Prini, Civiltà delle Macchine 3–4 (1972).

56 Ibidem

Structures, Voids, and Nodes: Leonardo and Laura Mosso’s “Architettura Programmata”
29/04/2022
Architettura Programmata, Laura Mosso, Leonardo Mosso, Nodes, Structures, Voids
Roberto Bottazzi

roberto.bottazzi@ucl.ac.uk
Read Article: 6952 Words

Introduction 

The work of Leonardo and Laura Mosso provides a very early and original application of computation to architectural, urban, and territorial design. Although computers were actually utilised to develop their ideas (a rare occurrence in 1960s Italy), the work possessed conceptual and political ambitions that exceeded both the simple (or even fetishistic) fascination with a new technology and the functional approach that conceives of computers as tools for efficiently completing tasks. Rather, the computer was part of a proto-ecological approach in which artificial and natural elements worked together towards the emancipation of the individual and their environment. At the centre of their research was “Architettura Programmata”, defined as a “theory of structural design” dedicated to the design of elements, of their connections, and of a higher meta-system which we could call “structure” in the sense that Structuralism gave this word. Computers were involved in this project under both a design and an ethical agenda, to understand and define an “ecocybernetic dynamic as a structure for a self-evolved language of the environment and of the form at various levels of complexity, inserted in an unforeseen chain of self-evolved cybernetics: from political cybernetic to cybernetic of information, as integrated instruments of evolution in a condition of direct articulated democracy”.[1] 

This paper will discuss how computational thinking and computers were employed in the work and research of Leonardo and Laura Mosso, by analysing three paradigmatic projects which tackled the notion of structural design at different scales and in different contexts. The first is Città Programmata (1967–70), a theoretical proposal for a new type of city and the first actual use of computers in the Mossos’ work. The second example concentrates on a piece of research on the Piedmont territory – the place in which they operated throughout their academic and professional careers. Although computers were not directly employed to carry out this research, its approach to territorial analysis and planning employs a form of algorithmic thinking which affects both how the territory is read and how it could be re-imagined. Finally, the proposal for the restoration of the S. Ottavio block in the historical centre of Turin shows a very innovative use of computers to intervene in historical artefacts of significant cultural value, as well as to manage the future life of a building. 

Structuralism played an important part in the work of Laura and Leonardo Mosso, and it is an essential element in understanding their conceptualisation of structures and the role that design and computation had within it. A slightly left-field but very fruitful interpretation of Structuralism was produced by Gilles Deleuze in 1967, at the time when Leonardo and Laura were intensifying their interest in computers.[2] Deleuze emphasised the role of emptiness – more precisely, of the “zero” sign – as a mechanism for the transformation and articulation of structures. The notions of the empty structure and of the zero offer a dynamic interpretation of Structuralism that is not only relevant to computational thinking, but can also clarify how the structures designed by the Mossos can be understood as dynamic and adaptive.  

Early Experiments with Computers in 1960s Italy 

Before delving into the actual discussion, it will be useful to quickly sketch out some of the cultural trends operating in Italy in the 1960s to better contextualise how Leonardo and Laura Mosso arrived at their “Architettura Programmata”. 

“Architettura Programmata” directly refers to the exhibition “Arte programmata. Arte cinetica. Opere moltiplicate. Opera aperta”, organised by Olivetti in 1962. The show was curated by Bruno Munari and Giorgio Soavi, with an accompanying catalogue edited by Umberto Eco. It displayed works by a series of artists, including Enzo Mari, who generated art procedurally, opening up a different mode of production and reception of works of art, also inspired by Eco’s Open Work.[3] In the same period, Nanni Balestrini was experimenting with computers to generate poems.[4] These two examples help bring into focus some lesser-known aspects of Italian post-war culture, which is often celebrated for its work in cinema, architecture and art, but rarely for computation or scientific work in general. Along these lines, it is also worth mentioning the cybernetics group operating in Naples under the guidance of Prof. Eduardo Renato Caianiello, who maintained regular contact with MIT and Norbert Wiener. It is in this more international and open environment that we should position the research of Leonardo and Laura Mosso. 

Leonardo studied architecture in Turin, a very active city that led the Italian post-war economic boom thanks to the presence of Fiat, the car manufacturer and one of the largest Italian factories. After graduating, Leonardo won a scholarship to study in Finland where, eventually, he started working in Alvar Aalto’s studio around 1958. From then on, he became the point of reference for most of the works that Aalto designed for Italy – such as a residence for the Agnelli family (the owners of Fiat) and the Ferrero factory. A similarly international profile characterised the figure of Giuseppe Ciribini, with whom both Laura and Leonardo collaborated. Ciribini concentrated on the modernisation of the construction industry, focusing on prefabrication and modular design. His work was not limited to Italy and expanded to a European scale through his involvement with the European Coal and Steel Community (ECSC, or CECA in Italian, a precursor of the European Union) to devise international standards for prefabrication. Leonardo and Laura Mosso also established connections with Konrad Wachsmann – incidentally Giuseppe Ciribini’s predecessor at the Ulm School of Design, where he had been invited by Tomás Maldonado in 1958. Finally, Leonardo and Laura Mosso were also involved in the early experiments with computer art (which had developed in Croatia since the early 1960s) through the magazine New Tendencies.[5]  

In all these experiences, computation played an increasingly central role. In the case of Balestrini, or of the scientific research developed in Naples, computers were actually utilised; in other cases, the work only consisted of speculation over what tasks and possibilities could be performed and unleashed. Leonardo and Laura Mosso are among the small group of architects and artists who did make use of computers in their work. With the help of Piero Sergio Rossatto and Arcangelo Compostella, two projects utilised computers to simulate and manage their transformations. Throughout almost two decades of using computers in their work, Leonardo and Laura Mosso developed an approach that was never guided by technocratic notions of efficiency. Rather, the philosophical implications of computing architecture, and the political role that information and computation could bring to a project and to society in general, constituted their main interest in this new technology. The computer as used in the Mossos’ work was in fact at the service of a larger cultural project that aimed at distributing, rather than concentrating, power. Computers were an instrument for change, whereas the values of efficiency and sheer industrialisation appeared to be ways to fundamentally preserve the status quo by simply making it run more smoothly. Rather than improving how architecture could better fulfil its role under the tenets of a capitalist, industrialised economy, Leonardo and Laura wanted to change the rules of the game itself; the computer, therefore, had to play an almost moral role in radically overturning the mechanisms regulating architecture and its use.  

Central to their research was the close relationship between philosophical ideas (Structuralism), design language (which particularly concentrated on discrete elements connected through reconfigurable, dynamic nodes), and computation. Leonardo and Laura Mosso’s approach to Structuralism was already open to dynamic, cybernetic influences and, for this reason, it may be interesting to read it against the famous writing that Gilles Deleuze dedicated to the same philosophical movement. 

The Dynamics of Structural Form 

Culturally, the post-war years were characterised by the diffusion, particularly in Italy and France, of Structuralism; generally understood as a philosophy of structures rather than functions. Structures could be organised in more general systems – of which natural language represented the most complex, paradigmatic example. Linguistics was indeed the domain of Structuralism, and the source from which most of its fundamental ideas were derived. From Saussure’s Course – indicated as the first structuralist text – to Barthes, Eco, Levi-Strauss, the Bourbaki group, Althusser, and also Foucault and Lacan, structuralist thinking extended beyond the linguistic domain to provide a framework to re-conceptualise other disciplines such as anthropology, psychoanalysis, mathematics, history, or politics. 

Broadly speaking, the definition of a structure consisted of two steps: the determination of its constituent parts (taxonomy) and the definition of the mechanisms that would govern the relations between parts and their transformation (grammar). Critics of Structuralism often reproached this approach to structures for its excessive formalisation and the strictness of its deductive logic. Such criticism tended to depict Structuralism as a mechanical, overly linear theory of systems, a result of the perhaps excessive importance attributed to linguistics. Such a characterisation, however, paid too little attention to the more transformative aspects of the theory: the dynamics of change and transformation. These are present in all the major structuralist thinkers; Gilles Deleuze, however, provided an original overview that concentrated on the open, topological, and playful aspects of structures, which it is useful to summarise briefly here. In Deleuze’s “How Do We Recognize Structuralism?”,[6] originally written in 1967, Structuralism was detectable through six different criteria: symbol, local/positional, differential/singular, differentiation/differentiator, serial, and the empty square. Throughout the analysis, the emphasis is on transformation rather than permanence – on the mechanisms that guarantee that a structure can operate by straddling the real and the imaginary in order to transform reality and be transformed by it. 

We will return to Deleuze, particularly his understanding of the notion of “zero”, which offers an interesting frame in which to conceptualise the role that structures played in the work of Leonardo and Laura Mosso – and, particularly, how physical construction nodes were instrumentalised to attain a structural language able to change and be appropriated (or “spoken”) by its users. Before dwelling further on this aspect of their work, it is important to point out that the work of Jean Piaget – an author often quoted in Leonardo’s and Laura’s writings – also offered a dynamic reading of structures and of Structuralism in general. Laura and Leonardo often made use of Piaget’s characterisation of structures as composed of three main characteristics: wholeness, transformation, and self-regulation.[7] In Piaget’s work, we also find an open, interactive, “proto-cybernetic”[8] reading of Structuralism marked by a relational understanding of the connections between environment, cognition, and symbols. In particular, the notion of assimilation outlined by Piaget in The Construction of Reality in the Child[9] described a cognitive model based on continuous feedback between reality and the child’s development – an image that brought Structuralism much closer to cybernetics. An eco-cybernetic approach to planning was also often advocated by Laura and Leonardo. These initial definitions are helpful, not only in framing the work of the Mossos in relation to the cultural milieu in which they operated, but also in understanding how computation was conceptualised in their projects to translate the notions of structure, node, and transformation. 

As mentioned, Deleuze’s survey offers a particular vantage point from which to understand how Structuralism dealt with change and transformation, and how this can help to frame the role that structures and nodes have in the research of Laura and Leonardo Mosso. Deleuze dedicates particular attention to the notion of the “zero” sign in Structuralism: the “zero” sign is understood as an empty place in the structure, determined positionally rather than semantically, that allows transformations to occur. The empty place in a structure guarantees the possibility of its transformation, in a way which is analogous to the role of empty squares on a chess board. The structure is understood as a symbolic object. Symbols are here understood according to the definition provided by C. S. Peirce’s semiotics; that is, structures have an arbitrary character that does not attempt to find the essence of the object of investigation, but rather to construct it. In Deleuze’s words: “[the structure does not have] anything to do with an essence: it is more a combinatory formula [une combinatoire] supporting formal elements which by themselves have neither form, nor signification, nor representation, nor content, nor given empirical reality, nor hypothetical functional model, nor intelligibility behind appearances”.[10] The structure is always a third, encompassing element, beyond the real and the imaginary, that allows it “to circulate”. In other words, the elements of a structure can only be determined relationally, as “[they] have neither extrinsic designation, nor intrinsic signification”.[11] As the order of the structure is more important than its meaning, not only is space (or spatium, as Deleuze refers to it) a central medium for the articulation of relations and transformations, but it is best described topologically, in the sense that the function of such a spatium is to logically order elements so that specific, empirical objects can occupy the different squares of the structure. 
The final element to note in Deleuze’s analysis is the “wholly paradoxical object or element”,[12] that is, the connective element that allows different structures or series to communicate with and orient each other in order to perform on different levels, beyond the purely symbolic one. Such an element is defined by Deleuze as the “object = x”; the “zero” sign par excellence; the “eminently symbolic” object that injects dynamic qualities into structures and therefore allows them to work.  

Leonardo and Laura Mosso dedicated large parts of their architectural research to the role that connecting elements, or nodes, have in articulating structures. This research produced four different types of nodes, which informed their work and can be seen at work in the three projects discussed in the second part of this paper. Deleuze’s considerations on structures help us frame the Mossos’ research as well. The node in a structure is the element that allows transformations to occur: pieces can be detached, substituted, or removed according to the possibilities and constraints set by the node connecting them. There is therefore an analogy between the physical nodes of a structure and the mechanisms of transformation at work in the philosophical concept of structure. Borrowing from Deleuze’s description, the physical node becomes the “object = x”, the “zero” sign par excellence; that is, not simply the element that makes change possible, but also the element that is syntactically operative and open in order for meaning to emerge. The analogy between the two manifestations of structure – physical and philosophical – is poignant for grasping the Mossos’ work: nodes are often literally “zero” signs; the particular type of node developed for Città Programmata, for instance, is literally organised around a void, an empty space. By straddling its physical appearance and its philosophical interpretation, the node acts structurally; that is, beyond its purely empirical presence, the node is a device that orders physical elements logically. In both accounts of structure, the minimal unit is the phoneme – “the smallest linguistic unit capable of differentiating two words of diverse meaning”[13] – which Leonardo and Laura put at the centre of their approach by speaking of “phonetic” and “programmed structures”. 
This approach was already visible in the first example of “programmed architecture”, the Chapel for the Mass of the Artist in Turin (1961–63), in which a static node connected 5cm x 5cm wooden studs to produce a highly varied pattern for the interior of the Chapel. In successive projects, nodes quickly grew in complexity in order to achieve more articulate and varied configurations, as well as to allow the end user and community to adapt them for future uses. Such an architectural agenda demanded a new type of node, which began to be articulated as a void – an “empty square”, so to speak – around which the various elements aggregate (fig.). The morphology of this new type of node consisted of a virtual cube – a void – whose eight vertices could be reconfigured around smaller voids, each able to link together four members. None of the members physically intersected (making the implementation of changes easier), and all were organised around a series of voids of different sizes. These physical and conceptual voids held some analogies with the “object = x” Deleuze spoke of in regard to Structuralism; the final configuration was dynamic, a sort of system to let the structure circulate, to make transformation possible. In other words, such an approach to structure transformed the spatial model of representation from a strictly geometrical system to a topological one in which relations between objects took precedence over presupposed semantic qualities.  
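The topological reading of the node can be made concrete, very schematically, as a purely relational data structure: an empty slot-holder that gains meaning only from the members it connects, with change implemented by detaching and attaching members rather than by reshaping geometry. The sketch below is a hypothetical illustration (the class and method names are invented; only the four-member limit follows the description above), not a reconstruction of the Mossos’ own notation.

```python
from dataclasses import dataclass, field

@dataclass
class VoidNode:
    """A connective 'zero' element: an empty place defined positionally,
    not semantically. It can join up to four members, and a member can be
    swapped without disturbing the others, mirroring the fact that the
    physical members never intersect."""
    slots: list = field(default_factory=lambda: [None] * 4)

    def attach(self, member):
        # Occupy the first empty slot; raises ValueError when all four are taken.
        i = self.slots.index(None)
        self.slots[i] = member
        return i

    def detach(self, member):
        # Free the slot so a different member can take its place.
        self.slots[self.slots.index(member)] = None

    def members(self):
        return [m for m in self.slots if m is not None]


# A 'transformation' is simply a detach/attach pair on one node:
node = VoidNode()
node.attach("stud-a")
node.attach("stud-b")
node.detach("stud-a")
node.attach("stud-c")
```

The point of the model is that the node carries no intrinsic content of its own: its state is nothing but the current relations among members, which is exactly what makes substitution cheap.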

It is also along these lines that we can read the introduction of computation into the work of Leonardo and Laura Mosso. The computer became the perfect instrument to both manage the structural logic of the design and give it the political agency the two architects had been seeking through their notion of programmed architecture. The next section will analyse three paradigmatic projects in which the conceptual issues highlighted can be seen at work.  

Città Programmata, 1967–70 

Città Programmata is one of the most iconic projects developed by Leonardo and Laura Mosso, a manifesto that encapsulates some of the key aspects of their work; that is, the potential of a structural approach to design to provide an environment for social and political self-determination. To implement their agenda of political and spatial self-determination, Leonardo and Laura introduced the computer, which represents the other radical aspect of this project. The computer played both an operational and a moral role in enabling the appropriation and transformation of the users’ habitat. Strictly speaking, the project consisted of a series of physical models and computer-generated drawings for an entire city and its possible transformations. The city was structured through a series of cubical modules (or “voxels”) of 6m x 6m x 0.5m that could co-evolve with the life of the city and its inhabitants, resulting (as the models and drawings showed) in an uninterrupted field of variously extruded elements, each composed of variously transformed structural elements.  

The research for Città Programmata took place in a rich cultural environment in which the work of Laura and Leonardo stood out for its original take on some of the topics that animated the architectural debate of the time. As mentioned, the post-war Italian scene was characterised by the growing importance of Structuralism in all aspects of culture. On the one hand, Structuralism guided the introduction of linguistics and semiotics as a general field of study, as well as their application to architectural and urban analysis. This line of inquiry sought to detect the underlying principles of architectural form, both in itself and in its relation to its context. At the other end of the spectrum, a more pragmatic understanding of structural thinking animated the debate on prefabrication and modular design, aimed at renewing the construction industry and fulfilling the demand to modernise the Italian landscape. It is between these two main interpretations of the notion of structure in architecture that Città Programmata can be understood, as it proposes a different conception of language and structures. 

Leonardo and Laura Mosso saw in the semiotic approach to architecture an excessive interest in meaning, both in its relation to the internal history of architecture and to context. Against the backdrop of semantic studies of architecture, Città Programmata proposed a more structural approach to language and its formalisation: a “phonological” system that would enable its users to “speak” their collective mind through the groups of structures the architects provided. Prefabrication, on the other hand, was indeed a rich field of investigation – as mentioned, Leonardo and Laura Mosso had been in close contact with Giuseppe Ciribini. However, prefabrication was committed to a model of society that privileged economic values (through the minimisation of costs, for instance) over political, cultural and social ones. Indirectly, their critique of prefabrication was also a critique of the notion of programme (“programma edilizio”), understood as an excessively functional approach to design. The brief – the document through which a building programme is implemented – fixed the use of structures or, at best, described a limited number of activities that a piece of architecture could house over a limited period of time. The formalisation of such an approach to programme usually resulted in a neutral outcome which favoured the design of a generic spatial container that, in principle, could adapt to future needs. Leonardo and Laura critiqued this view of design both for the vagueness of its mechanisms of programmatic determination (future activities may be impossible to predict in advance) and for the generic architectural response. In opposition, they proposed a structural approach that offered implementable choices (as opposed to programmatic vagueness) and was therefore not limited to regulating quantitative growth, but could also take into account the qualitative aspects of spatial structures. 
Finally, programme was also critiqued from a political point of view, as it was identified as the political instrument that guaranteed an asymmetrical distribution of power between users and designers.  

Città Programmata imagined an environment in which the relation between users and architects was not hierarchically organised, but more radically, horizontally distributed. Here, both the programmatic and the semantic critiques that animated the Mossos’ approach converged. The aim of generating an environment based on a horizontal distribution of power called into question the role that semiotics could play in designing structures. The analogy proposed is once again with language. Like all immaterial notions, language and architecture (understood as a body of knowledge) are inherently public; they exceed anyone’s ability to claim ownership of them or to control them. Both the linguist and the architect can only play with the systems of signs constituting their disciplines in order to make them public and accessible. Contrary to the semiotic studies of architecture, which concentrate on the internal mechanisms and references of architectural language, Leonardo and Laura Mosso proposed a rather more “extroverted” approach, interested in opening architecture up and inviting users to participate in the creation of their own environment. The architect was “at the service” of architecture, rather than a custodian of the arcane mechanisms of architectural language. In a way, we can say that the position taken is reminiscent of Saussure’s distinction between langue and parole: whereas semiotic studies in architecture appear to privilege the importance of the langue, in Città Programmata Leonardo and Laura Mosso worked to maintain a dynamic relation between the two terms of the Saussurean categorisation: 

Architecture, understood in a traditional sense, cannot be a language; that is, it cannot speak by itself. Similarly, we cannot say that the work of a linguist on language is a language … Architecture is at [the] service of language … in the same sense that a language serves the community of speakers when it is spoken; that is, when architecture becomes “a system of transformations” or possibilities, from which it is possible to generate infinite messages. 

Mosso and Mosso[14] 

It is in this context that the computer was introduced, both to support the management of the city and to simulate its future configurations. The actual machine utilised was a Univac 1108 owned by the Politecnico of Milan and programmed by Piero Sergio Rossatto – an engineer and programmer at Olivetti – with Arcangelo Compostella. The stunning drawings generated by the Univac (now part of the Centre Pompidou’s permanent collection) showed the possible growth patterns generated from an arbitrary string of signs placed at the centre of the drawing. Two parallel lines of pre-allocated units (*) and voids (-) constituted the starting input for the simulation, which could proceed either sequentially, on the basis of a probabilistic algorithm (fig. XX), or randomly (fig. XX). The process of algorithmic growth did not take place in a vacuum; rather, constraints could be programmed in, making growth sensitive to contextual information. 
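The kind of simulation described – a seed string of units (*) and voids (-) from which occupation spreads probabilistically across a field – can be sketched in a few lines. The rule below (an empty cell becomes occupied with a probability proportional to its occupied neighbours) is a hypothetical reconstruction for illustration only; the actual Univac 1108 program by Rossatto and Compostella is not documented here, and all names and parameters are invented.

```python
import random

def grow(seed, width=21, height=11, steps=4, p=0.3, rng=None):
    """Probabilistic growth from a seed string of units '*' and voids '-'.

    A sketch of the kind of process described for Citta Programmata:
    the seed is placed at the centre of the field, then occupied cells
    spread to empty neighbours with probability p per occupied neighbour.
    """
    rng = rng or random.Random(0)          # seeded for reproducibility
    grid = [["-"] * width for _ in range(height)]
    row, col0 = height // 2, (width - len(seed)) // 2
    for i, ch in enumerate(seed):          # place the seed string
        grid[row][col0 + i] = ch
    for _ in range(steps):
        nxt = [r[:] for r in grid]
        for y in range(height):
            for x in range(width):
                if grid[y][x] == "*":
                    continue
                # Count occupied cells among the eight neighbours.
                n = sum(
                    grid[y + dy][x + dx] == "*"
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)
                    and 0 <= y + dy < height and 0 <= x + dx < width
                )
                if n and rng.random() < p * n:
                    nxt[y][x] = "*"
        grid = nxt
    return "\n".join("".join(r) for r in grid)

print(grow("*-*-*"))
```

Contextual constraints of the kind the text mentions could be modelled by masking cells of the grid as unavailable before growth begins; growth then becomes sensitive to its surroundings rather than unfolding in a vacuum.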

Landscape, Structure and History (1980–1986)  

A second type of node that Leonardo and Laura Mosso had been working on belonged to their kinetic, self-managed, elastic universal-joint structures (strutture autogestibili e complessizzabili a giunto universale elastico). Since the beginning of the 1970s, as part of their research on the use of different types of nodes to articulate transformations in physical structures, they had been testing this particular type of node at different scales and in different contexts. The research started with the academic work that Leonardo carried out with his students at the Politecnico in Turin, and continued through commissions such as the “Red Cloud” (Nuvola Rossa), an installation completed in Carignano Palace in 1975, in which these nodes found one of their most convincing and poetically powerful applications. This large piece consisted of a complex structure made up of individual elements connected through elastic joints, which allowed the architects to build an undulating mesh suspended between the visitors and the frescos of the palace. These elastic structures were tested at different scales: for instance, between the end of the 1970s and the beginning of the 1980s, Laura and Leonardo put their kinetic quality to the test by using them as props accompanying the movement of the bodies of contemporary dancers, both in their work with the Conservatorio G. Verdi in Turin (1978) and in the performance staged in Martina Franca (1980). It is, however, the territorial scale which is of particular interest in this discussion, since it highlights an original understanding of how structures can perform algorithmically, and because of the unusually large scale of this research.  

Here, particular reference is made to the research carried out between 1980 and 1981 under the broad agenda of “methodological work aiming at devising a system of signs to program both at the level of the territory and the city”.[15] The results of this methodological analysis of territorial structures would also inform a subsequent research project and exhibition titled “Landscape, Structure, and History”,[16] which tested their structural approach to territory on the local landscape of Piedmont, its rural cultures, and their relation with their surroundings, with a view to devising a strategy for preservation. It might appear unusual for avant-garde architects to dedicate their research to the rural, historically layered territory of Piedmont. In fact, forward-thinking local architects and engineers had already focused on vernacular architectural expressions in the local countryside: Carlo Mollino extensively studied and recorded examples of Alpine vernacular architecture in Valle D’Aosta, and Giuseppe Ciribini – whose work on the industrialisation of construction has already been mentioned – also paid attention to the spontaneous architecture of Alpine and pre-Alpine territories. Some of these interests in rural and vernacular architecture were gathered together by another Torinese architect, Giuseppe Pagano, in his famous exhibition “Continuity – Modernity”, in 1936, for the 6th Triennale in Milan.  

The Mossos’ research on territorial structures consisted of both drawings and physical models of specific areas of Piedmont (Canavese and Carignanese). The work mapped and recorded the landscape of Piedmont by positioning a series of kinetic structures over a map of the existing territory. The structures consisted of a series of elements connected through elastic, kinetic nodes that allowed each element complete freedom of rotation around each vertex. The final configuration of each structure emerged from the mediation between their internal properties (length of the elements, arrangement, type of nodes) and the cartographic representation of the landscape. The drawings took this relationship to more radical conclusions: the landscape was further abstracted and re-coded through a structural approach which adapted to different contexts. Rather than an image of a superstructure, the re-codification of the landscape through models and drawings struck a complex balance between the algorithmic approach and the context.  

In this particular project, structures are understood as organisational principles rather than physical constructions. Earlier, we spoke of an algorithmic use of structural thinking, a quick definition that requires unpacking. An algorithm is a set of instructions that, once applied to a set of input data, will perform a finite number of operations to return an output. Regardless of the complexity of the operations performed, an algorithm recodes the input data into a new set of data. Chomsky’s generative grammar, for instance, could be seen as a recursive series of algorithms that rewrites any given statement of a natural language to produce new linguistic statements. The superimposition of Laura and Leonardo’s structures on a map of the Piedmont countryside operated in a similar fashion and, therefore, could be interpreted as an algorithmic recoding of the territory. The input data was constituted by the information recorded in the cartographic representations of the landscape, whereas the kinetic structures acted as analogue algorithms that recoded the input data according to the vast (yet finite) number of configurations allowed by their physical characteristics (length and number of members, type of joints). In short, the physical structures deployed rewrote the landscape according to a precise set of rules; more poetically, we can say that the elastic node structure allowed the landscape to speak in the language of the structures superimposed onto it – an image that Laura Mosso also evoked when she wrote about developing methods to “make the structures whistle”.
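
The idea of an algorithm as a rule-bound rewriting of its input can be made concrete with a few lines of code. The sketch below is a modern illustration in Python, not the Mossos’ own tooling: a toy rewriting system in the spirit of generative grammar, whose rule set and symbols are invented here purely for demonstration.

```python
# A toy rewriting system: each rule maps a non-terminal symbol to a sequence
# of symbols. Applying the finite rule set repeatedly recodes the input into
# new "statements" -- an algorithm as rule-bound rewriting.
RULES = {
    "S": ["NP", "VP"],    # sentence -> noun phrase + verb phrase
    "NP": ["the", "N"],   # noun phrase -> article + noun
    "VP": ["V", "NP"],    # verb phrase -> verb + noun phrase
}

def rewrite(symbols, rules):
    """One pass: expand every symbol that has a rule, pass the rest through."""
    out = []
    for s in symbols:
        out.extend(rules.get(s, [s]))
    return out

def derive(start, rules, steps):
    """Apply the rewriting pass a fixed number of times."""
    seq = [start]
    for _ in range(steps):
        seq = rewrite(seq, rules)
    return seq

print(derive("S", RULES, 3))  # ['the', 'N', 'V', 'the', 'N']
```

However simple, the sketch shows the property the text relies on: a finite set of rules surveys only what it can recognise, and every output is a recoding of the input under those rules.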

Contrary to stricter interpretations of Structuralism, the type of algorithmic approach proposed here was not merely deduced from internal, formal rules (that is, the physical constraints set by the elastic nodes); rather it emerged from a more iterative, open relationship with the context (abstracted through cartographic representations). The results of this process were particularly legible in the physical models: the kinetic structures made up of interconnected springs were laid out on the map to return a ‘structural re-reading’ of the landscape. A new, structural image of the territory emerged from the interaction between nodes and territory.

The research on territories that Laura and Leonardo Mosso completed allows us to make a series of considerations on these algorithmic operations, their formal qualities, and the implications they give rise to. First, through a structural, algorithmic approach to territory, the research rejects distinctions between natural and artificial in favour of a more holistic approach to landscape – and yet, one describable through a set of finite operations. The constraints embodied in the physical structures do not decisively distinguish between artificial and natural, symbolic and productive, and thus support Leonardo and Laura Mosso’s call for the kind of expanded notion of ecology they had been advocating for, both in projects and publications (through, for instance, the publication titled La Nuova Ecologia). The structure is the symbolic device that catalogues and organises the whole of the territory (here understood as superseding dichotomies such as urban/rural, artificial/natural), establishing principles for its preservation and transformation. Similarly, algorithmic re-writing provides a diachronic reading of the territory, which is re-organised along structural rather than chronological vectors. The different nodes of the elastic structures are positioned on the map to establish connections between artefacts built at different times, in order to give rise to new relations between them. Finally, there is the function performed by the elastic structures as analogue algorithms. We have already seen how an algorithm can be understood as a form of rewriting and transformation of an existing condition (input data). The types of operations performed by an algorithm are always precise (determined by the rules programmed in the algorithm), executed in their entirety (the algorithm goes through all the steps scripted to return an output), and yet partial, as the algorithm can only survey a dataset according to the set of rules that form the algorithm itself.
The constraints inbuilt in the elastic kinetic nodes allow them to only perform a vast, but finite set of movements; that is, only a subset of all the signs contained in the maps of Piedmont can be computed by the physical structures-algorithms. In short, an algorithm generates a specific representation of the object it is applied to.  

To better grasp this last point, we can draw an analogy between real objects (such as buildings) and their orthographic representation. For instance, a section through a building can only return a partial image of the object it investigates, and yet how a section is drawn follows precise and rigorous rules that determine what, and how, the building will be captured in it. But the section is a sign-object, not a building; it elicits further manipulations, either by applying different sets of criteria (e.g., by concentrating on the structural, programmatic, or material qualities of the building) or by changing the very parameters that generated it (the position of the section plane or the conventions applied). The approach developed for the Piedmont territory by Leonardo and Laura Mosso makes aspects of this landscape intelligible through the production of new signs which, in turn, make it amenable to further manipulations. It is important to notice that all of Laura and Leonardo’s operations are carried out on a cartographic representation of the territory; photographs and other cultural aspects of the areas, such as place names, are complementary, rather than primary, information. Cartography is itself a coded, notational (rather than mimetic) representation of the territory. As a medium it therefore lends itself to the operations of re-coding and re-writing, since it is already a semiotic system; on the other hand, it acts as a recipient of the new codification of the landscape generated through a structural reading.

Finally, the structure-algorithm becomes a marker of change: the instrument through which modifications and, in general, any metamorphic transformation of the territory can be foregrounded, read, and made tractable in order to preserve or alter it. The research developed by Laura and Leonardo Mosso shows that a structural approach through algorithmic thinking need not be confined to new, pristine domains, but can also offer innovative ways to interpret and intervene in historical contexts. The last project discussed – the proposal for the S. Ottavio block in the historical centre of Turin – will further reinforce this point.

S. Ottavio Block, Turin, 1980 

The commission for a study of the block located in the historic centre of Turin was received in 1978 and became an important, yet entirely forgotten chapter in the story of both Leonardo and Laura Mosso’s production and the integration of digital technologies in architecture. On the one hand, the brief for the project was a rather common one for Italian architects, whose practice often confronted (and still confronts) historical artefacts. Leonardo and Laura, however, saw in this commission an opportunity to advance their research on structures as well as on the use of computational tools. For simplicity, we can divide the project into its proposed physical interventions and its immaterial, data-driven ones.

The physical restoration of the block consisted of a series of more traditional interventions to reinforce the old brick walls, as well as the insertion of new levels to convert the existing spaces into inhabitable housing units. The new structures in steel and wood were elegantly laid out at a 45-degree angle, to mark a clear distinction between pre-existing and new elements. The type of node deployed in this instance was also a dynamic one; however, the only permissible movement was sliding along one of the orthogonal directions of the structure. Though the dynamics of the nodes were limited (in comparison to the conceptual experiments at territorial scale), they allowed users to alter and self-organise their habitat. By deploying the same type of node at different scales and through different materials (aluminium, wood, and plexiglass), users could appropriate the environment at both the architectural and the interior scale.

Perhaps the most radical part of this research was the conceptual side of the project. A computerised system was to be set up to monitor and maintain the block. A proto-digital twin, the system would map all the elements of the project and generate a database enabling both individual users and the municipality to control, repair, and maintain the whole block. For the programming of the whole system, Piero Sergio Rossatto – who had worked with Laura and Leonardo on the Città Programmata – was consulted. The spatial representation of the block in the digital model followed the logic of voxels: a three-dimensional grid of individual cubes that provided a system of coordinates to locate every element of the project, existing or proposed, architectural or infrastructural. In Rossatto’s scheme, the project would be surveyed starting from the ground level (z = 0 in the digital model) and gradually moving towards the roof by increasing the z-value in the voxel grid. Every intersection between the voxel grid and an element of the project would be recorded.
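
Rossatto’s voxel logic can be sketched in a few lines of Python. The grid dimensions and the building elements below are hypothetical, invented purely to illustrate the upward sweep from z = 0 that the scheme describes; they are not drawn from the actual project database.

```python
# A minimal voxel survey in the spirit of Rossatto's scheme: a 3D grid of unit
# cubes supplies coordinates, and the model is swept level by level from the
# ground (z = 0) towards the roof, recording every voxel an element occupies.
# Elements are hypothetical axis-aligned boxes: (name, (x0,y0,z0), (x1,y1,z1)).
ELEMENTS = [
    ("brick_wall", (0, 0, 0), (1, 4, 3)),   # an existing wall
    ("new_floor",  (0, 0, 2), (4, 4, 3)),   # an inserted level
]

def survey(elements, nx, ny, nz):
    """Record which elements intersect each voxel, sweeping z = 0 upwards."""
    record = {}
    for z in range(nz):          # ground level first, then towards the roof
        for y in range(ny):
            for x in range(nx):
                hits = [name for name, lo, hi in elements
                        if lo[0] <= x < hi[0]
                        and lo[1] <= y < hi[1]
                        and lo[2] <= z < hi[2]]
                if hits:
                    record[(x, y, z)] = hits
    return record

db = survey(ELEMENTS, 4, 4, 4)
print(len(db))        # 24 occupied voxels
print(db[(0, 0, 2)])  # ['brick_wall', 'new_floor'] -- wall meets the new level
```

The resulting dictionary plays the role of the proposed database: every element, existing or new, is addressable by grid coordinates, which is what would allow users and the municipality to locate, repair, and maintain parts of the block.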

Although the project was not well received by the local administration – which could not fully grasp its innovative approach and eventually shied away from a unique opportunity to radically rethink the relation between digital technologies and historical artefacts – it illustrated a different, complementary facet of Leonardo and Laura Mosso’s approach to algorithmic form.

As mentioned, the project applied digital technologies to pre-existing architectural artefacts protected by preservation laws. Whereas digital technologies are invariably understood as the instrument to deliver the “new” or the “radically different”, or even to make a tabula rasa of pre-existing notions, this project showed a more nuanced, and yet still radical side of digital technologies, which could coexist with and complement the delicate pattern of a historical city.  

The structural approach, which continuously developed throughout several decades of research, here resulted in an abstract grid – a field of voxels, to be precise – that acted as a monitoring system allowing users to appropriate and control their own habitat. In the course of their research, Leonardo and Laura developed a physical model of the virtual voxel field that did not include any of the physical structures designed. The model possessed a very strong sculptural quality but, most importantly, also showed the power of the algorithmic approach they had developed. On the one hand (and similarly to the experiments carried out in coding the Piedmont territory), the logic of the structure not only enabled its own transformation, but also determined its aesthetic qualities. The algorithmic logic guiding its own re-writing (in this case represented by the rhythm of the voxel field) returned a new type of form; an algorithmic form. As the model clearly showed, the logic of the voxel field implied a space without discontinuities or interruptions; saturated with data, the model was “all full” (as Andrea Branzi would have it), a solid block of data. As such, the research and proposal for the S. Ottavio block represents one of the earliest attempts to think of design straddling physical and digital environments – a concept that could only be implemented through a structural approach to design whose robustness would allow it to extend to immaterial representations of space.

Conclusions 

The work of Leonardo and Laura Mosso not only constitutes an excellent example of very early work with computers in architecture, but also provides a rich framework through which to problematise the issue of algorithmic form. The close relationship between design, philosophy, technology and politics not only forms a complex and rich agenda, but also expands the use of computers in design well beyond a functional focus on increasing efficiency and profits. Perhaps, this is one of the aspects of their work that still resonates with contemporary research on algorithmic design: the complex relationship between ideas and techniques, and the use of computation as an instrument for change. Computation was more than a vehicle to implement their radical design agenda, it was also tasked with implementing specific ethical values by orchestrating the interaction between architects, users, and built environment. In many ways, computation, and the algorithmic forms it engendered, was utilised by the Mossos to perform one of its original and most enduring tasks: to logically order things and, therefore, to conjure up an image of a future society.  

In memory of Leonardo Mosso 1926-2020.  

References 

[1] L. Mosso & L. Mosso, (1972). “Self-generation of form and the new ecology”. In Ekistics – Urban Design: The people’s use of urban space, vol.34, no.204, pp.316-322. 

[2] Deleuze’s text on Structuralism, however, was only published in 1971, so the connection between the two architects and the French philosopher is coincidental.  

[3] U. Eco, The Open Work, Translated by A. Cancogni. 1st Italian edition published in 1962. (Cambridge, Mass: Harvard University Press, 1989). 

[4] R. Bottazzi, Digital Architecture Beyond Computers: Fragments of a Cultural History of Computational Design (London: Bloomsbury Visuals, 2018). 

[5] L. Mosso & L. Mosso, “Computers and Human Research: Programming and self-Management of Form”, A Little-Known Story about a Movement, a Magazine, and the Computer’s Arrival in Art: New Tendencies and Bit International 1961-1973, edited by M. Rosen. (Karlsruhe, Germany: ZKM/Center for Art and Media; Cambridge, MA: MIT Press, 2011) 427-431. 

[6] G. Deleuze, “How Do We Recognize Structuralism?”, Desert Islands and Other Texts 1953-1974, Ed. D. Lapoujade, transl. by M. Taormina. (Los Angeles, CA: Semiotexte, 2004). Originally published in F. Chatelet (ed.) Histoire de la philosophie vol. VIII: Le XXe Siècle. (Paris: Hachette, 1972), 299-335. 

[7] J. Piaget, Structuralism. Translated and edited by C. Maschler. (London: Routledge and Kegan, 1971, 1st edition 1968). 

[8] E. von Glasersfeld, “The Cybernetic Insights of Jean Piaget”, Cybernetics & Systems, 30, 2 (1999) 105-112. 

[9] J. Piaget, The Construction of Reality in the Child (New York: Basic Books, 1954; 1st Edition Neuchâtel, Switzerland: Delachaux et Niestlé, 1937) 

[10] G. Deleuze, “How Do We Recognize Structuralism?”, Desert Islands and Other Texts 1953-1974, Ed. D. Lapoujade, transl. by M. Taormina. (Los Angeles, CA: Semiotexte, 2004). Originally published in F. Chatelet (ed.) Histoire de la philosophie vol. VIII: Le XXe Siècle. (Paris: Hachette, 1972), 173 

[11] Ibid., 173 

[12] Ibid., 184 

[13] Ibid., 176 

[14] L. Mosso & L. Mosso, “Architettura Programmata e Linguaggio”, La Sfida Elettronica: realtá e prospettive dell’uso del computer in architettura (Bologna: Fiere di Bologna, 1969) 130-137. 

[15] L. Baccaglioni, E. Del Canto & L. Mosso, Leonardo Mosso, architettura e pensiero logico. Catalogue to the exhibition held at Casa del Mantegna, Mantua (1981). 

[16] L. Castagno & L. Mosso, ed. Paesaggio, struttura e storia: itinerari dell’architettura e del paesaggio nei centri storici della Provincia di Torino Canavese e Carignanese. (Turin: Provincia di Torino, Assessorato alla Cultura, Turismo e Sport, 1986). 

Collage of Isa Genzken’s work
The Algorithmic Form in Isa Genzken
Algorithmic Form, assemblage, attention economy, Collage, data architecture, hooks, Isa Genzken, montage, Social Architecture, social object, social science, surrealism
Provides Ng

provides.ng.19@ucl.ac.uk

What’s the Hook? Social Architecture? 

Isa Genzken’s work can be seen as a synthesis of the “social” and the “object” – a visual-sculptural art that reflects on the relationship between social happenings and the scale of architectural space. She was also one of the early explorers in the use of computation for art, collaborating with scientists in the generation of algorithmic forms in the 70s. But what is the social object? What can it mean for architecture? Just as Alessandro Bava, in his “Computational Tendencies”,[1] challenged the field to look at the rhythm of architecture and the sensibility of computation, Roberto Bottazzi’s “Digital Architecture Beyond Computers”[2] gave us a signpost: the urgency is no longer about how architectural space can be digitised, but about the ways in which digital space can be architecturised. Perhaps this is a good moment for us to learn from art: how it engages with the many manifestations of science while maintaining its disciplinary structural integrity. 

Within the discipline of architecture, there is an increasing amount of research that emphasises social parameters, from the use of big data in algorithmic social sciences to agent-based parametric semiology in form-finding.[3][4] The ever-mounting proposals that promise to apply neural networks and other algorithms to [insert promising architectural / urban problem here] are evidence of a pressure for social change, but also of the urge to make full use of the readily available technologies at hand. An algorithm is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer”.[5] It is a finite sequence, well-defined, with performance measured against the length of code – how fast and how well we can describe the most. In 1975, Gregory Chaitin’s formulation of Algorithmic Information Theory (AIT) revealed that the algorithmic form is no longer what can be visualised on the front end, but “the relationship between computation and information of computably generated objects, such as strings or any other data structure”.[6] In this respect, what stands at the convergence of computable form and the science of space is the algorithmic social object. 
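
Chaitin’s point can be loosely illustrated in code. Algorithmic information content – the length of the shortest program that generates an object – is uncomputable, but the size of a compressed encoding gives a crude upper bound. The sketch below is an illustration assumed for this article, not drawn from its sources; it compares the description length of a highly regular string and a far less regular one.

```python
import zlib

# Compressed size as a crude upper bound on algorithmic information content:
# highly regular data admits a short description, irregular data does not.
def description_length(data: bytes) -> int:
    return len(zlib.compress(data, 9))

regular = b"ab" * 500                                     # pure repetition
irregular = bytes((i * 7919) % 256 for i in range(1000))  # far less regular

print(description_length(regular), description_length(irregular))
# the repetitive string compresses to a small fraction of the other's size
```

In this sense the “algorithmic form” of an object is not its rendered appearance but the economy of the rules that generate it – which is the shift from front-end visualisation to computation-information relationships that the paragraph above describes.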

Figure 1 – Algorithmic Social Science Research Unit (ASSRU) and Parametric Semiology – The Design of Information Rich Environments. Image source: ASSRU, Patrik Schumacher.  

Social science is the broad umbrella that encompasses disciplines from history and economics, to politics and geography; within which, sociology is a subset that studies the science of society.[7] The word ‘sociology’ is a hybrid, coined by French philosopher Isidore Auguste Comte in 1830 “from Latin socius ‘associate’ + Greek-derived suffix –logie”; more specifically, “social” as the adjective dates from the 1400s, meaning “devoted to or relating to home life”; and 1560s as “living with others”.[8] The term’s domestic connotation soon accelerated from the realm of the private to the public: “Social Contract” from translations of Rousseau in 1762; “Social Darwinism” and “Social Engineering” introduced by Fisher and Marken in 1877 and 1894; “Social Network” and “Social Media” by the late 20th century from Ted Nelson. Blooming during a high time of the Enlightenment and the rise of the positivist worldview, sociology naturally claims itself to be a science, of scientific methods and empirical investigations. The connotation of –logie has been brilliantly attested by Jonathan Culler:[9] 

“Traditionally, Western philosophy has distinguished ‘reality’ from ‘appearance’, things themselves from representations of them, and thought from signs that express it. Signs or representations, in this view, are but a way to get at reality, truth, or ideas, and they should be as transparent as possible; they should not get in the way, should not affect or infect the thought or truth they represent.” 

To claim a social study as a science puts forward the question of the relationship between the language that is used to empirically describe and analyse the subject and the subject matter itself. If it should be objectively and rationally portrayed, then the language of mathematics would seem perfect for the job. If we are able to describe the interaction between two or more people using mathematics as a language, then we may begin to write down a partial differential equation and map its variables.[10] Algorithms that are inductively trained on evidence-based data not only seem to capture the present state of such interaction, but also seem able to give critical information in describing the future evolution of the system. This raises the question of computability: what is the limit to social computation? If there is none, then we might as well be a simulation ourselves; so the logic goes that there must be one. To leave an algorithm running without questioning the limits to social computation is like having Borel’s monkey hitting keys at random on a typewriter, or applying [insert promising algorithm here] arbitrarily to [insert ear-catching grand challenges here]. 

Figure 2– Borel’s infinite monkey theorem in 1913. Image source: Wikipedia. 

What’s the hook? 

A hook “is a musical idea, often a short riff, passage, or phrase, that is used in popular music to make a song appealing and to catch the ear of the listener”.[11] The hook is a monumental part of Web 2.0, which treats user attention as a scarce resource and a valuable commodity – an attention economy. Music is an art form that takes time to comprehend; as it plays through time, it accrues value in your attention. 

Figure 3 – Drum beat to Empire State of Mind, Nick’s Drum Lessons, “‘Empire State of Mind’ Jay Z – Drum Lesson”, October 5, 2014 

This is one of the most famous hooks of the late 2000s – Empire State of Mind came around the same time as the Web 2.0 boom, just after New York had recovered from the dotcom bubble. The song was like an acoustic montage of the “Eight million stories, out there in the naked”, revealing an underlying urge for social change that was concealed by the boom; just as we see Jay-Z in Times Square on stage under the “big lights that inspired” him, rapping: “City is a pity, half of y’all won’t make it”.[12] It was an epoch of R&B, rhythms of cities, of the urban sphere, of the high-tech low life. Just the first 15 seconds of Jay-Z’s beat are already enough to teleport a listener to Manhattan, with every bit of romanticism that comes with it. The Rhythms and the Blues constructed a virtual space of narrative and story-telling; such spatial quality taps into the affective experiences of the listener through the ear, revealing the urban condition through its lyrical expression. It is no accident that the 2000s was also a time when the artist and sculptor Isa Genzken began exploring the potential of audio in its visual-sculptural embodiment. 

“The ear is uncanny. Uncanny is what it is; double is what it can become; large [or] small is what it can make or let happen (as in laisser-faire, since the ear is the most [tender] and most open organ, the one that, as Freud reminds us, the infant cannot close); large or small as well the manner in which one may offer or lend an ear.” — Jacques Derrida.[13] 

Figure 4 – “Ohr”, Isa Genzken, since 2002, Innsbruck, City Hall facade, large format print on flag fabric, 580 x 390 cm. Photograph, galeriebuchholz 

An image of a woman’s ear was placed on a facade by Genzken, personifying the building as a listener, hearing what the city has to say. At the same time, “The body is objectified and made into a machine that processes external information”.[14] The ear also symbolises the power of voice that could fill a place with a space: an acoustic space. As much as a place is a location, geographically tagged, which affects our identity and self-association of belonging, a space can be virtual as much as it can be physical. Such a space of social interaction is now being visualised on a facade, and at the same time, it is being fragmented: “To look at a room or a landscape, I must move my eyes around from one part to another. When I hear, however, I gather sound simultaneously from all directions at once: I am at the centre of my auditory world, which envelops me. … You can immerse yourself in hearing, in sound. There is no way to immerse yourself similarly in sight”.[15] This is perhaps a prelude to augmented virtual reality. 

Figure 5 – The Surrealist doctrine of dislocation, the romantic encounter of urban objects is “as beautiful as the chance meeting of a sewing machine and an umbrella on an operating table.” – Lautréamont, Canto VI, Chapter 3. (a) The cover of the first edition of Rem Koolhaas’ book Delirious New York, designed by Madelon Vriesendorp. (b) A photograph of New York by Isa Genzken, New York, N.Y., 1998/2000, Courtesy Galerie Buchholz, Berlin/Cologne. (c) A photograph by Man Ray, 1935 © The Man Ray Trust / ADAGP, Paris and DACS, London 

As much as Genzken is interested in the “exploration of contradictions of urban life and its inherent potential for social change”, Rem Koolhaas shared a similar interest in his belief that it is not possible to live in this age if you don’t have a sense of many contradictory voices.[16] [17] What the two have in common is their continental European roots and a love for the Big Apple – Genzken titled her 1996 collage book “I Love New York, Crazy City”, and with it paid homage to her beloved city. Delirious New York was written at a time when New York was on the verge of bankruptcy, yet Koolhaas saw it as the Rosetta Stone, and analysed the city as if there had been a plan, with everything starting from a grid. It was Koolhaas’ conviction that the rigor of the grid enabled imagination, despite its authoritative nature: unlike Europe, which has many manifestos with no manifestation, New York was a city with a lot of manifestation without manifesto. 

Koolhaas’ book was written with a sense of “critical paranoia” – a surrealist approach that blends together pre-existing conditions and illusions to map the many blocks of Manhattan into a literary montage. The cover of the first edition of the book, designed by Madelon Vriesendorp, perfectly captures the surrealism of the city’s socio-economy at the time: the Art Deco skyscraper Chrysler Building is in bed with the Empire State. Both structures were vying for distinction in the “Race into the Sky” of the 1920s, fuelled by American optimism, a building boom, and speculative financing.[18] Just as the French writer Lautréamont wrote: “Beautiful as the accidental encounter, on a dissecting table, of a sewing machine and an umbrella”, surrealism is a paradigmatic shift towards “a new type of surprising imagery replete with disguised sexual symbolism”.[19] The architectural surrealism manifested in this delirious city is the chance encounter of capital, disguised as national symbolism – an architectural hook. 

Data Architecture 

Figure 6 – China Central Television Headquarters (CCTV) and Genzken’s Gate for Amsterdam Tor für Amsterdam, Außenprojekte, Galerie Buchholz, 1988.

Genzken’s sense of scale echoes Koolhaas’ piece on “bigness” in 1995. Her proposal for the Amsterdam City Gate frames and celebrates the empty space, and found manifestation in Koolhaas’ enormous China Central Television (CCTV) headquarters in Beijing – a building as a city, an edifice of endless air-conditioning and information circularity wrapped in a structured window skin, hugging itself in the air by its downsampled geometry of a Möbius loop. Just as Koolhaas pronounced, within a world that tends to the mega, “its subtext is f*** context”. One is strongly reminded of the big data approach to form-finding, perhaps also of the discrete spatial quality coming from Cellular Automata (CA), where the resolution of interconnections and information consensus fades into oblivion, turning data processing into an intelligent, ever-mounting aggregation. In the big data–infused era, the scale boundary between architecture and urban design becomes obscured. This highlights our contemporary understanding of complex systems science, where the building is not an individual object, but part of a complex fabric of socioeconomic exchanges. 

Figure 7 – The Bartlett Prospective (B-pro) Show, 2017. 

As Carpo captured in his Second Digital Turn, we are no longer living in Shannon’s age, where compression and bandwidth are of highest value: “As data storage, computational processing power, and retrieval costs diminish, many traditional technologies of data-compression are becoming obsolete … blunt information retrieval is increasingly, albeit often subliminally, replacing causality-driven, teleological historiography, and demoting all modern and traditional tools of story-building and story-telling. This major anthropological upheaval challenges our ancestral dependence on shared master-narratives of our cultures and histories”.[20] Although compression as a technique is much used in machine learning, from autoencoders to convolutional neural networks, trends in edge AI and federated learning are displacing the value of bandwidth with promises of data privacy – we no longer surrender data to a central cloud; instead, all is kept on our local devices, with only the learnt models synchronising. 

Such displacement of belief in centralised provisions to distributed ownership is reminiscent of the big data-driven objectivist approach to spatial design, which gradually displaces our faith in anything non-discursive, such as norms, cultures, and even religion. John Lagerwey defines religion in its broadest sense as the structuring of values.[21] What values are we circulating in a socio-economy of search engines and pay-per-clicks? Within trends of data distribution, are all modes of centrally-provisioned regulation and incentivisation an invasion of privacy? Genzken’s work in urbanity is like a mirror held up high for us to reflect on our urban beliefs.  

Figure 8 – Untitled, Isa Genzken  2018, MDF, brass fixings, paper, textiles, leather, mirror foil, tape, acrylic paint, mannequin, 319.5 x 92.5 x 114 cm. David Zwirner, Hong Kong, 2021.

Genzken began architecting a series of “columns” around the same time as her publication of I Love New York, Crazy City. Evocative of skyscrapers and skylines that are out of scale, she named each column after one of her friends, and decorated them with individual designs, sometimes of newspapers, artefacts, and ready-made items that reflect the happenings of the time. Walking amongst them reminds the audience of New York’s avenues and its urban strata, but at 1:500. Decorated with DIY store supplies, these uniform yet individuated structures seem to be documenting a history of the future of mass customisation. Mass customisation is the use of “flexible computer-aided manufacturing systems to produce custom output. Such systems combine the low unit costs of mass production processes with the flexibility of individual customization”.[22] As Carpo argued, mass customisation technologies would potentially make economies-of-scale and their marginal costs irrelevant and, subsequently, the division-of-labour unnecessary, as the chain of production would be greatly distributed.[23] The potential is to democratise the privilege of customised design, but how can we ensure that such technologies would benefit social goals, and not fall into the same traps of the attention economy and its consumerism? 

Refracted and reflected in Genzken’s “Social Facades” – taped with ready-made nationalistic palettes allusive of the semi-transparent curtain walls of corporate skyscrapers – one sees nothing but a distorted image of the mirrored self. As the observer begins to raise their phone to take a picture of Genzken’s work, the self suddenly becomes the anomaly in this warped virtual space of heterotopia.  

“Utopia is a place where everything is good; dystopia is a place where everything is bad; heterotopia is where things are different – that is, a collection whose members have few or no intelligible connections with one another.” — Walter Russell Mead [24] 

Genzken’s heterotopia delineates how the “other” is differentiated via the images that have been consumed – a post-Fordist subjectivity that fulfils itself through accelerated information consumption.  

Figure 9 – Attention economy and social strata as refracted and reflected in (a) “Soziale Fassade”, Isa Genzken, 2002, Courtesy Galerie Buchholz, Berlin/Cologne, and (b) “I shop therefore I am”, Barbara Kruger, 1987 

The Algorithmic Form 

Genzken’s engagement with and interest in architecture can be traced back to the 1970s, when she was in the middle of her dissertation at the academy.[25] She was interested in ellipses and hyperbolics, which she prefers to call “Hyperbolos”.[26] The 70s were a time when a computer was a machine that filled a whole room, and to which an ordinary person had no access. Genzken got in touch with the physicist and computer scientist Ralph Krotz who, in 1976, helped calculate the ellipse with a computer and plotted a draft of the drawing with a drum plotter that prints on continuous paper.[27] Artists saw the meaning in such algorithmic form differently from scientists. For Krotz, ellipses are conic sections. Colloquially speaking, an egg comes pretty close to an ellipsoid: it is composed of a hemisphere and half an ellipsoid. If we generalise the concept of the conic section, hyperbolas also belong to it: if one rotates a hyperbola around an axis, a hyperboloid is formed. Here, the algorithmic form is rationalised down to its computational production, independent of its semantics – that is, until it is physically produced and touches the ground of the cultural institution of a museum. 
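The two constructions described above – an elongated ellipse sampled as plotter coordinates, and a hyperboloid obtained by rotating a hyperbola around an axis – can be sketched in a few lines of modern code. This is a hypothetical illustration of the geometry, not a reconstruction of Krotz’s 1976 program:

```python
import math

def ellipse_points(a, b, n=360):
    """Sample n points on an ellipse with semi-axes a and b --
    the kind of coordinate list a 1970s drum plotter would trace."""
    return [(a * math.cos(2 * math.pi * k / n),
             b * math.sin(2 * math.pi * k / n)) for k in range(n)]

def hyperboloid_point(a, c, u, theta):
    """A hyperboloid of revolution: rotate the hyperbola
    x = a*cosh(u), z = c*sinh(u) around the z-axis by angle theta."""
    r = a * math.cosh(u)  # radius of the circular cross-section at height z
    return (r * math.cos(theta), r * math.sin(theta), c * math.sinh(u))

# An ellipse at Genzken's scale: 10 metres long (semi-major axis 5 m),
# strongly elongated, like the template delivered to the carpenter.
pts = ellipse_points(a=5.0, b=0.5)
xs = [p[0] for p in pts]
print(max(xs) - min(xs))  # total length: 10.0 metres
```

Sampling the curve as a list of coordinates mirrors how a drum plotter draws: not as a continuous ideal form, but as a sequence of computed positions on continuous paper.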

The 10-metre-long ellipse drawing was delivered full-size, in one piece, as a template to a carpenter, who then converted it into his own template for craftsmanship. Thus, 50 years ago, Genzken’s work already explored the two levels of outsourcing that are symbolic of today’s digital architectural production. The output of such exploration is a visual-sculptural object of algorithmic form, at such an elongated scale and extreme proportion that it undermines not only human agency in its conception, but also the sensorial perception of 2D-3D space.[28] When contemplating Genzken’s Hyperbolo, one is often reminded of the radical play with vanishing points in Hans Holbein’s “The Ambassadors”, where the anamorphic skull can only be viewed at an oblique angle, a metaphor for the way one can begin to appreciate the transience of life only with an acute change of perspective.  

Figure 10 – (a) “The Ambassadors”, Hans Holbein, 1533. (b) “Hyperbolos”, Genzken, 1970s. Image source: Andrea Albarelli, Mousse Magazine.

When situated in a different context, next to Genzken’s aircraft windows (“Windows”), the Hyperbolo finds association with other streamlined objects, like missiles. Perhaps the question of life and death, paralleling scientific advancement, is a latent meaning and surrealist touch within Genzken’s work, revealing how the invention of the apparatus is, at the same time, the invention of its causal accidents. As the French cultural theorist and urbanist Paul Virilio puts it: the invention of the car is simultaneously the invention of the car crash.[29] We may be able to compute the car as a streamlined object, but we are not even close to being able to compute the car as a socio-cultural technology.  

Figure 11 – Genzken holding her “Hyperbolos” in 1982, and “Windows”. Image source: Dominic Eichler, “This Is Hardcore”, Frieze, 2014.

Social Architecture? 

Perhaps the problem is not so much whether the “social” is computable, but that we are trying to objectively rationalise something that is intrinsically social. This is not to say that scientific approaches to social architecture are in vain; rather the opposite: science and its language should act as socioeconomic drivers of change in architectural production. What is architecture? It can be described as what stands at the intersection of art and science – the art of the chief ‘arkhi-’ and the science of craft ‘tekton’ – but the chance encounter of the two gives birth to more than their bare sum. If architecture is neither art nor science but an emergent faculty of its own, it should be able to argue for itself academically as a discipline, with a language crafted as its own, and to debate itself on its own ground – beyond the commercial realm that touches base with ground constraints and the reality of physical manifestation. It also has its own unique way of researching and speculating: not “heads in the clouds”, but in fact revealing pre-existing socioeconomic conditions.  

It is only through understanding ourselves as a discipline that we can begin to really grasp ways of contributing to social change, beyond endlessly feeding machines with data and hoping they will either validate or invalidate our ready-made, ear-catching hypotheses. As Carpo beautifully put it:  

“Reasoning works just fine in plenty of cases. Computational simulation and optimization (today often enacted via even more sophisticated devices, like cellular automata or agent-based systems) are powerful, effective, and perfectly functional tools. Predicated as they are on the inner workings and logic of today’s computation, which they exploit in full, they allow us to expand the ambit of the physical stuff we make in many new and exciting ways. But while computers do not need theories, we do. We should not try to imitate the iterative methods of the computational tools we use because we can never hope to replicate their speed. Hence the strategy I advocated in this book: each to its trade; let’s keep for us what we do best.” [30] 

References

1 A. Bava, “Computational Tendencies – Architecture – e-Flux.” Computational Tendencies, January. 2020. https://www.e-flux.com/architecture/intelligence/310405/computational-tendencies/.

2 R. Bottazzi, Digital Architecture beyond Computers Fragments of a Cultural History of
Computational Design (London: Bloomsbury Visual Arts, 2020).

3 ASSRU, Algorithmic Social Sciences, http://www.assru.org/index.html. (Accessed December 18, 2021)

4 P. Schumacher, Design of Information Rich Environments, 2012.
https://www.patrikschumacher.com/Texts/Design%20of%20Information%20Rich%20Environments.html.

5 Oxford, “The Home of Language Data” Oxford Languages, https://languages.oup.com/ (Accessed December 18, 2021).

6 Google, “Algorithmic Information Theory – Google Arts & Culture”, Google,
https://artsandculture.google.com/entity/algorithmic-information-theory/m085cq_?hl=en. (Accessed December 18, 2021).

7 Britannica, “Sociology”, Encyclopædia Britannica, inc. https://www.britannica.com/topic/sociology. (Accessed December 18, 2021).

8 Etymonline, “Etymonline – Online Etymology Dictionary”, Etymology dictionary: Definition, meaning and word origins, https://www.etymonline.com/, (Accessed December 18, 2021).

9 J. Culler, Literary Theory: A Very Short Introduction, (Oxford: Oxford University Press, 1997).

10 K. Friston, ”The free-energy principle: a unified brain theory?“ Nature reviews neuroscience, 11 (2),127-138. (2010)

11 J. Covach, “Form in Rock Music: A Primer” (2005), in D. Stein (ed.), Engaging Music: Essays in Music Analysis. (New York: Oxford University Press), 71.

12 Jay-Z. Empire State Of Mind, (2009) Roc Nation, Atlantic

13 J. Derrida, The Ear of the Other: Otobiography, Transference, Translation ; Texts and Discussions with Jacques Derrida. Otobiographies / Jacques Derrida, (Lincoln, Neb.: Univ. of Nebraska Pr., 1985).

15 Kunsthalle Wien, “Kunsthalle Wien #FemaleFool Booklet I’m Isa Genzken the …,” (2014). https://kunsthallewien.at/101/wp-content/uploads/2020/01/booklet_i-m-isa-genzken-the-only-female-fool.pdf?x90478.

16 W. Ong, Orality and Literacy: The Technologizing of the Word, (London: Methuen, 1982)

17 R. Koolhaas, New York délire: Un Manifeste rétroactif Pour Manhattan, (Paris: Chêne, 1978).

18 Kunsthalle Wien, “Kunsthalle Wien #FemaleFool Booklet I’m Isa Genzken the …,” (2014). https://kunsthallewien.at/101/wp-content/uploads/2020/01/booklet_i-m-isa-genzken-the-only-female-fool.pdf?x90478.

19 J. Rasenberger, High Steel: The Daring Men Who Built the World’s Greatest Skyline, 1881 to the Present, (HarperCollins, 2009)

20 Tate, “’L’Enigme D’Isidore Ducasse’, Man Ray, 1920, Remade 1972”, Tate. https://www.tate.org.uk/art/artworks/man-ray-lenigme-disidore-ducasse-t07957, (Accessed December 18, 2021)

21 J. Lagerwey, Paradigm Shifts in Early and Modern Chinese Religion: A History, (Boston, Leiden: Brill, 2018).

22 Google, “Mass Customization – Google Arts & Culture”, Google, https://artsandculture.google.com/entity/mass-customization/m01k6c4?hl=en (Accessed December 18, 2021).

23 M. Carpo, “Big Data and the End of History”, International Journal for Digital Art History, 3: Digital Space and Architecture, 21 (2018).

24 W.R. Mead, “Trains, Planes, and Automobiles: The End of the Postmodern Moment”, World Policy Journal, 12 (4), 13–31 (Winter 1995–1996).

25 U. Loock, “Ellipsoide und Hyperboloide”, in Isa Genzken. Sesam, öffne dich!, exhibition cat. (Whitechapel Gallery, London, and Museum Ludwig, Cologne: Kasper, 2009).

26 S. Baier, “Out of sight”, in Isa Genzken – Works from 1973-1983, Kunstmuseum

27 R. Krotz, H. G. Bock, “Isa Genzken”, in exhibition cat. Documenta 7, Kassel, 1982, vol. 1, 330-331, vol. 2, 128-129.

28 A. Farquharson, “What Architecture Isn’t”, in Alex Farquharson, Diedrich Diederichsen and Sabine Breitwieser, Isa Genzken (London, 2006), 33.

29 P. Virilio, Speed and Politics: An Essay on Dromology (New York: Columbia University, 1986).

30 M. Carpo, The Second Digital Turn: Design beyond Intelligence, (Cambridge: MIT Press, 2017).

Fondamenta
architectural language, BIM, Building Information Modelling, construction, Fondamenta, Generalist Architect
Office Fondamenta

mail@fondamenta.archi

The following piece is transcribed from Fondamenta’s talk at the B-pro Open Seminar that took place at the Bartlett School of Architecture on the 8th December, 2021.

Figure 1 – Sea of Digital Models, FONDAMENTA

We are interested in the construction of spaces, with a strong belief in research and experimentation, where building is the end to which architecture must strive to become itself, and technology is the tool used to reach this result. We question conventions and support contradictions; fascination with structure and freedom from dogma are the premises of this research. Structure is the trace of space: it organises the program and generates the building. Governance through technology is the key to the creation of an architectural organism, and we see our projects as opportunities to conduct research on structural systems and the use of materials. We push materials to and against their limits, designing through a systematic approach relative to structures, without forgetting that the ultimate user of this organism is the human being. We are glad to have seen four very interesting presentations today. We connect a lot with the work of Luigi Moretti, whom we deeply admire as an architect: he was one of the first pioneers to understand spaces as organisms, creating them with a scientific logic and developing four precise categories to design them.

What is technology for us? It is an instrument that we face daily; we use technology to follow our purpose, and to reaffirm the central role of the Architect in the building process. Technology drives efficiency, precision and control through the entire process, allowing governance of the economy of the project. The central issue in the use of technology is always WHO is responsible for its governance. We believe the answer is that the Architect should be able to take this role.

Figure 2 – Scheme showing the impact of the technological Governance of the Project, Fondamenta

Today, we don’t want to talk about specific software and the use we make of it, but rather point out the great opportunity that a specific use of technology could give Architects today. We were trained in a university founded on Vitruvian philosophy, in which Architects must have a holistic approach to Architecture, being as generalist as possible within the field of the discipline. Over time, we have witnessed a dismantling of the so-called “Generalist Architect” in favour of over-specialisation in specific aspects of our discipline. The Architect has been relegated to one consultant among the many who contribute to creating an architectural project. Instead, we believe the Architect must be the central figure, capable of managing the complexities of today’s world through the governance of many actors and aspects. This can only be possible, in our opinion, with the aid of technology. Our last resource is to believe a generalist Architect may still exist…

To achieve this, we superimpose our own customised system onto existing BIM (Building Information Modelling) technology. For three years we have been testing a Vocabulary of codes and protocols, applied to BIM, which becomes the common “language” inside the digital model that expresses the Architectural Project – a language all involved actors have to learn and share. We, as Architects, are responsible for the governance of this centralised model and system, being the ones who create the laws of this digitally organised government. We didn’t start our practice with this idea; it arose as a consequence of the first project we built and the impossibility we faced in taking a central role in the process – the consequence was losing power and responsibility over the process, with a negative impact on the projects. We are still working daily to improve it; it is an ongoing process. If we had to depict with a diagram the shift between the approach we had at the beginning and the approach we have now, this slide expresses it [indicates screen].

The centralised system we are looking for allows different actors to interact inside a given structure, with a given language crafted by us.

Figure 3 – FONDAMENTA BIM Alphabet, Fondamenta

To go into more detail, the above charts depict specific aspects of our customised Mother model. The strength of BIM is that it enables all consultants involved in the process to implement and add their knowledge and information inside a common, single-instance digital Model. Codes and rules were developed to share and communicate between the different disciplines, which belong to different worlds. The most important layer to be translated is that of economy: each aspect of the project relates to an economic parameter that controls the cost of the project. Starting from existing software, we added our customised logic and vocabulary.
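As a hedged illustration of such a shared vocabulary – the element codes, names and figures below are invented for this sketch, not Fondamenta’s actual schema – each code can carry the economic parameters that let the central model report costs from the concept phase onwards:

```python
# Hypothetical sketch: a shared vocabulary of element codes, each carrying
# the economic parameters that make cost a live layer of the digital model.
elements = {
    "W-ST-01": {"description": "load-bearing stone wall",
                "quantity": 42, "unit": "m3", "unit_cost": 380.0},
    "S-CL-02": {"description": "concrete shell",
                "quantity": 120, "unit": "m2", "unit_cost": 95.0},
    "F-SL-03": {"description": "steel frame",
                "quantity": 8, "unit": "t", "unit_cost": 2100.0},
}

def project_cost(elements):
    """Roll the per-element economic parameters up into a total cost,
    so any design change is immediately reflected in the economy layer."""
    return sum(e["quantity"] * e["unit_cost"] for e in elements.values())

print(project_cost(elements))  # 44160.0
```

The point of the sketch is the coupling: because every coded element owns its quantity and unit cost, the “translation” between disciplines and the economy layer is a lookup, not a separate estimating exercise.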

What we are seeing throughout our practice is that we can have control of the project from the very start. For the most part, BIM is generally introduced after an execution plan is in place. Instead, we deal with these premises from day zero – from the concept phase – and this is what makes an enormous difference. Following this scheme, all actors begin to communicate at the very start, at the right time, without finding themselves in a position of compromise, but rather putting on the table all the topics that, if worked out at the right time, can bring the project to more radical expressions. Hence, there are incredible possibilities to push projects to their limit, and to build without the design being jeopardised by an uncontrolled process.

We will show three different projects of ours. The first one, our first built project, is a winery in Piemonte (2018-2020).

Figure 5 – Winery Cantina dei 5 Sogni, Extract from Casabella 921 @Marco Cappelletti 
Figure 7 – Winery Cantina dei 5 Sogni, Executive drawings for Steel formwork and concrete shells geometry, FONDAMENTA and Matteo Clerici 

In this project, our awareness of technology and its potential was limited and not yet evident. That is why we ran this project without using BIM to solve design and governance issues. The winery project develops research on the pursuit of a seemingly impossible balance between different structural systems, which must coexist as one organism of concrete and steel. Together with our engineer, we designed and optimised the shell system, making it work as a structural truss that holds the concrete pitched roof while containing part of the program. The double steel formwork of the shells, poured in one single day without pause, was directly designed, drawn and sent to the manufacturer.

After this experience, we realised that we needed more technological support to control the construction process and push more projects forward – particularly when dealing with aspects such as economics, time and money, but also the sustainability of the process. This changing of the guard started with the series of projects we are building in Sicily, first among them the 18018EH houses near Noto. From this moment, we started governing the process with the aid of BIM – our instrument – from the beginning of conception.

Figure 8 – 18018EHSR Private House, External Rendering, DIMA 

This house is mostly underground, with only 30% of its surface exposed above ground. We are trying to develop a three-dimensional project where the space develops along three axes, and all the load-bearing walls are made of local stone. The structural floor plan is created through a system of radii and circumferences. Through the use of software, we were able to optimise the construction lines, turning them from splines into arcs, working in accordance with the technical consultants to develop the BIM model. This is a snapshot showing the massive amount of information inside this model.

This is interesting because implementing information in a model is not enough to control it; there need to be instrumental rules in order to make an architecture real. This project will soon be delivered to a construction company. Costs, money and time are essential points in our profession; in order to have the possibility of realising our research, design cannot be detached from them. We are connected to and interested in the economy of the project, which sustains architectural processes through awareness in governance and allows us to control our design according to cost.

Figure 10 – 18018EHSR Private House, Axonometry showing construction aspect and codes, FONDAMENTA 

It was incredible how we managed to control the project and design through our tools. For example, we like to show all these axonometric drawings – each code, of course, remains connected to a clear Excel chart that reminds us of the cost, quantities and all the details of a specific part of the model. Figuring out a way of communicating the mass of information that we were implementing in the digital model was another interesting aspect. This is something we are still developing, to make it even more readable for the involved actors. Of course, there are just a couple of Excel spreadsheets connected to these axonometries!

Figure 11 – 18018EHSR Private House, Axonometry showing stone walls geometry and codes, FONDAMENTA 

In terms of design, we see the potential of technology as something that allows us to push further our research on space and structure. For example, here, all the other walls will be made out of stone – blocks one metre long, 50 centimetres high and 30 centimetres deep. In Grasshopper, we customised each one to produce a sort of “abacus” of all the walls, with specifications and a numbering system, which was then delivered to the construction company.
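The “abacus” idea can be sketched as follows; the wall names, sizes and numbering convention here are invented for illustration, not the actual Grasshopper definition:

```python
# Hypothetical sketch of the block "abacus": number every stone block
# (1.0 m long, 0.5 m high, 0.3 m deep) in each wall, producing a schedule
# that can be handed to the construction company.
BLOCK = (1.0, 0.5, 0.3)  # length, height, depth in metres

def wall_abacus(walls):
    """walls: {name: (length_m, height_m)} -> list of (block_id, l, h, d)."""
    schedule = []
    for name, (length, height) in walls.items():
        courses = round(height / BLOCK[1])      # rows of blocks
        per_course = round(length / BLOCK[0])   # blocks per row
        for c in range(courses):
            for i in range(per_course):
                schedule.append((f"{name}-{c + 1:02d}-{i + 1:02d}", *BLOCK))
    return schedule

walls = {"WA": (4.0, 2.5), "WB": (3.0, 2.0)}  # invented example walls
schedule = wall_abacus(walls)
print(len(schedule))  # 4*5 + 3*4 = 32 blocks
```

Each block ID encodes wall, course and position, so the stonemason can place pieces without interpreting the geometry model.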

This technology enables us to build within a certain amount of time. If we reflect on past projects, time is something that we really cannot negotiate – it is the hardest variable to negotiate today. Technology gives us the ability to control time more than any other aspect. We love to go back to the models, because we think that this “ping-pong” between the digital tool and the making process gives us an awareness of reality. This way, we do not lose control of what we are thinking and designing.

Figure 12 – 20027F Private House Renovation, Axonometry showing the project strategy, FONDAMENTA 

The last aspect we are trying to show through this house – a project that has been under construction for four months – is that we have reached a certain level of governance of the actors in the process from the beginning. This is a renovation, where we stripped out the existing building – the partition walls – but kept working with the existing concrete cage structure. We kept the load-bearing structure, made of concrete, and inserted a new steel structure, changing its form but keeping the volume untouched.

Wanting it to be a precise case study, we sat with our consultants and engineers from the very beginning. All the possible actors were involved from the embryonic phase and we designed together, trying to understand immediately all the potential realistic approaches that could be achieved.

Figure 13 – 20027F Private House Renovation, Axonometry of the BIM Model, FONDAMENTA 
Figure 14 – 20027F Private House Renovation, Rendering, DIMA 

I’ll just show a couple of snapshots of the model that we delivered to the construction company, pointing out that it is the same model we had from the beginning. From structures to installations, every element was designed with the involved actors long before the building process started on site.

It’s really important for us to underline that Architects have to be able to see consultants and potential constraints as a possibility to further the design. For us, this was not easy to understand initially, because we were trained to see consultants and all the other actors as apart from architecture, coming in parallel to the project. Just like the scheme we showed: they are parallel lines that, at a certain point, intertwine. In that moment, you have a connection, and this connection has to be constant. Through this system we are developing, each actor involved in the process has to be aware of the language we share in order to achieve the project.

This is just a snapshot of the house at the moment; we’ve stripped out the partition walls and it’s just the concrete.

To conclude, BIM has a deep social impact, giving back to architecture and architects the power they should have in the process. It is then up to us to create social resistance and new approaches to contemporary society.

Open Seminar – Round Table Discussion
Algorithmic Form, Discussions & Conversations, Open Seminar, Round Table
alessandro bava, Provides Ng, Marco Vannucci, Philippe Morel, Roberto Bottazzi

thealessandrobava@gmail.com

(This transcription has been edited) 

Presenters: Alessandro Bava (AB), Philippe Morel (PM), Marco Vannucci (MV), Roberto Bottazzi (RB), Provides Ng (PN).  

Venue: Zoom

Date: 08th December, 2021

AB:  

What I’m interested in, in this discussion – and we saw it in all the presentations – is not exclusively work that has been done with a computer per se or using proficiency in coding, but also how this can influence the practice of designing and making spaces. Going back to architecture; making spaces, constructing the human habitat. 

I think there are a number of strands we could pick up on, so I’m going to leave space for the speakers too, [but] I have a few questions and connections that I want to make. 

I think the video was amazing to end with, Philippe [Philippe Morel], because it also gave us a big platform to understand culturally how all these different things are laid out, because we really dwell in different timeframes – or timelines, one should say, it’s more fashionable today! 

I think there are these amazing overlaps and connections that allow us to expand on this, and I really want to stress our support for our guests. For the people listening, there is not so much work being done in the direction of understanding this cultural impact – I mean, Philippe mentioned quite a few moments and exhibitions that are in fact legendary, precisely because there are so few of them. 

So, there have been a few moments where, of course, the role of technology and computation has been understood in terms of its cultural implications. The other day, I was at another panel, another symposium, where we discussed algorithms and their impact on culture at large, and – in my view, as someone who does not consider themselves the most literate on the subject – I found that there is a lot of illiteracy that leads to a lot of paranoia, which actually doesn’t help. And this is something that I was surprised to find in Manfred Mohr, in the 60s: the idea that we need to push for literacy, because it is actually a tool that extends our ability, I think, especially for the purposes of architecture.  

Federico, speaking about the work of his studio today, really clarified that in a very direct and visible way: how we can use applications of computation within groups today, on the design side and the management side, and how these two things can be harmonised through technology. It’s an amazing development, and one that, you know, Manfred Mohr would be happy about, let’s say, as far as literacy on the subject goes. So, I’m very happy that today we are collectively contributing to this, adding to this history. 

I keep saying lately that we need new hermeneutic tools; tools for understanding computational design and computational tools, and how they can be integrated into established methodologies. For example, in the work I did, I was really interested in Moretti’s exhibition at the Triennale, where he actually proposed a few buildings. Analysing that exhibition alone, we can see how certain parametric tools were used for specific typologies of buildings. Moretti could have applied this to anything, but he chose to apply it to certain large-scale urban infrastructures, such as a sports arena or a cinema – things that we understand as “large objects”. Large single objects that can respond to one main parameter. And actually, towards the end of your presentation, Marco, you said we “could not compute” – we need to understand the scale of algorithms and how far they can go, where they can be applied to architecture in a meaningful way and where perhaps not at all!  

MV: 

I think, yes, in retrospect, Moretti focused on typologies that, if we fast-forward 50 years, are typically parametric now; they are more or less mono-functional. Nowadays, a stadium is mono-functional: it is designed [so that spectators are all] looking at the pitch, and therefore it has developed into the most parametric typology. I’m not sure how aware he was of that, actually, also because I think at the time the stadium itself was a rather new typology, in a way – sport, and the “massification” of sport, and so on.  

The other thing I want to say, regarding the discussion – and I’ll just throw it in there perhaps – is that we take it for granted that for many, many years, computational design, especially from the early 90s, never really confronted the past. As if it was developed in a vacuum, let’s say, as if it just came out of nowhere. Of course, this is understandable, because architects were all very excited; they wanted to experiment and bring this new technology to fruition, to start building. The economy at the time was better than today, so there were things converging, let’s say. But what I find particularly important is that at some point it is actually, really necessary to go back and see that there is a legacy there. There is a tradition – a very normal, traditional architecture, as we know it – and it’s not just a bunch of punks playing with computers. This matters for the cultural relevance of the discussion. 

And then, of course, we can say that we have always been parametric, or that architecture itself is a discipline that is about the idea of establishing algorithmic procedure to get something built. 

AB: 

I think the knowledge that we should perhaps understand, and I think Manfred Mohr’s work really helps us with that, is that it’s perhaps just the idea of encoding certain processes that have always been part of architecture. Coding them, and then potentially automating them or doing something else with them, is what machines allow us to do, but that doesn’t necessarily change how we think about it; it’s not the end.  

I want to stress what you say about the importance of history, and how we are trying to reconnect – or rebuild bridges, if you like. For some parts of the discourse on digital computation, it’s as if history started in the Bell laboratories, or something like that – it started in the US with the first mass computers and so on. But Roberto, of course, has done a lot of work on building bridges, and on making us understand that the bridges go a lot further back in time, in fact.  

RB: 

I keep thinking about what Philippe said a second ago, and why computational logic keeps going metaphysical, and I think it’s a side note, but I can’t stop myself, I have to say it! 

There are two ways to look at it. One is that you’re totally right, Philippe: [Ramon] Llull is the point of reference in this conversation, and again, if we’re talking about bridges that were burned in history, there’s definitely only a vague understanding of the importance of Llullism. How could it be that a person who invented concentric wheels, who wanted basically to convince Muslims that their religion was inferior, had a lasting effect throughout Europe for over 300 years? I mean, it’s not even explainable as a joke! I would say this is perhaps interesting because it is a computation project – there is no doubt about that – and because computation sits at a moment in history where other notations emerge for non-visual, or non-mimetic, ways of articulating reality and knowledge. That was interesting for Philippe – but his is just the last presentation we saw and I tend to have a short-term memory! 

It was also interesting, for instance, for Manfred Mohr – this constant tension between the visual and the conceptual – and I think that is one of the interesting premises of computation, historically, over a very long period of time: a system to articulate something that lies between the intelligible and the sensible, something that cannot quite be sensed and yet needs to be very clear to the mind. This tension, the fact that computational logic always tends to be in that realm, probably has something to do with that. 

Obviously, you could also look at it a different way. You could say that computational logic is a simple mathematical process that could be grasped a lot earlier in history than other, more advanced, mathematical models; or you could relate it to the fact that, for some reason, the Christian tradition forgot the first commandment, because we should not really be able to draw God. We decided to ignore it, for reasons that are not entirely clear to me, while the kabbalistic tradition did not: the kabbalistic tradition is a notational system for the symbolic articulation of the world without generating images. So, all I want to say is that the short comment Philippe made in passing could be quite powerful. 

AB:  

I love that this took a theological dimension! I think it’s really crucial, this constant question of the visual and the conceptual, even in the work of Manfred Mohr – when you talked about the period when his work was purely code and, in fact, in the exhibition, there was a printing machine just printing whatever was coming out of the program. Then later on, in the 80s, with the development of visual interfaces, his work became different – and in fact you connected it to the work of Peter Eisenman.  

So, it’s really a key question for me that today, of course, software is popularised; there is even visual computation, visual algorithms… this is possible through software such as Grasshopper. There are aids to understanding code by visual means, let’s say, and I’m interested in this because, for a long time, we have been discussing computation and architecture purely in terms of data – how do we get data, how do we structure data? But today we’re in a different environment, where software is more developed and more accessible, and people don’t necessarily think about “what’s in the black box”; but nevertheless, what comes out, when I look at it, I can only understand as computational. Even more so when it’s informed by the language and culture of the digital – by the culture of digital tools.  

I’m really curious to hear your position on this, whether you see where we are going in a sense? Is visual computation comparable to a purely algebraic or coded computation? Can we compare the two, can the two coexist? Philippe, I would love to hear your answer, but this question is extended to everyone. I think it touches everyone, pretty much.  

PM: 

I mean, first, just a very quick note on this metaphysical issue associated with the combinatory arts. My feeling is that at that moment in time – you know, in the 13th century, or 12th century – it was extraordinary to be able to demonstrate that only a few numbers or parameters could lead to so many possibilities. So, for people who know nothing about mathematics, there is some magic associated with combinatorics – at least at that time in history. Of course, today we look at that as something pretty simple; we are not surprised anymore by anything to do with combinatorics, and we are probably more impressed by other domains of mathematics that are more conceptual. But I would say that even in the 20th century it was impressive; there was some magic to it. My feeling is that if it’s a bit associated with metaphysics, it’s also because there is some intrinsic magic in this combinatorial explosion at some point. It’s a very sketchy hypothesis! 
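The combinatorial “magic” Philippe refers to is easy to make concrete. A minimal sketch in Python (the nine-letter alphabet follows Llull’s Ars brevis; the pairing and the three-wheel count are simplifications of his actual figures):

```python
from itertools import combinations, product

# Llull's Ars brevis used an alphabet of nine symbols, B..K (omitting J).
symbols = list("BCDEFGHIK")

# Pairs of distinct symbols, as in the "chambers" of Llull's figures:
pairs = list(combinations(symbols, 2))
print(len(pairs))  # 36 two-symbol chambers from only 9 symbols

# Three concentric wheels, each carrying all nine symbols:
triples = list(product(symbols, repeat=3))
print(len(triples))  # 9 ** 3 = 729 possible alignments
```

Only nine parameters already yield hundreds of configurations; the exponential growth with each added wheel is the “explosion” at issue.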

Regarding Alessandro’s question: no, I believe that visual programming is not like more standard programming, where we use code and symbols. It creates the same effect, but the intellectual operations are probably not exactly the same – and the feeling we have is not exactly the same either, because visual programming is a much more visual operation. It’s a bit like putting some order in a PowerPoint presentation, you shift some slides until it’s done; but when you do programming by writing code, I think it’s a slightly more analytical approach, or it’s more textual, more text-based. 

AB:  

I agree. Then my question is to the end of making architecture – of course I understand what you are saying, the two things are very different – but to the end of making architecture, what is useful for architecture? Because if I look, for example, at someone like Federico, they use computational tools, but the input is very much a curve that is drawn, and they use this data to then do different kinds of processes. That one curve can start influencing other curves that are drawn, and things like that, but there is an input that is drawn. Whereas in a lot of computation, for the description of the visual design, there is always this question – even in the academic work at the Bartlett – of where the data comes from, and it’s almost like a theological question; it has to come from some God-given numerical formula. So I’m interested in this question, which, I think, is quite a central one, methodologically. 

PM:  

I would say, probably, we are entering an era in which the data is becoming more important than the algorithms. I don’t know if it’s true scientifically speaking, by the way, but at least the mindset is maybe in favour of a deeper influence of the data, over the influence of the algorithms, maybe – but again it’s definitely not a scientific statement. Probably because it’s much easier to associate the data to everything which is happening in society at large. 

For example, we know the data of Facebook, because we see them every day. Although we don’t see all of the data, we see how it works; but we don’t know the algorithms they are using. So, even if I believe that algorithmic science is more developed and more advanced than ever – it’s absolutely crazy the complexity of algorithmic science today – most people don’t have a grasp on that. So this is why, maybe, we can say that on an everyday basis the data seems more important in today’s society. 

AB:  

I agree with that. Perhaps it’s also because certain algorithmic blocks are more available. I can bring the example of my students last year: they would take existing machine-learning procedures, then completely change the data set to an architectural data set, for example on architectural typologies, and then they would tweak the machine learning “black box” to adjust the output to what they needed it to do. So, in a way, this is a different approach. I mean, scientifically it is not a purist approach to computation, but ultimately, at least what I’m interested in is, how can we use it, even if it’s about using blocks and bits, how can we then tweak them to be useful for us as designers? That is my point to you. 
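The students’ approach – keeping a pretrained “black box” fixed and retraining only its output on a new architectural dataset – can be caricatured in a few lines. Everything here is invented for illustration: the feature extractor stands in for a frozen pretrained model, and the typology labels are hypothetical:

```python
# All data and names here are hypothetical, for illustration only.

def pretrained_features(plan):
    """Stand-in for a frozen, pretrained "black box" that maps a floor
    plan description to a fixed feature vector."""
    return (float(plan["rooms"]), 1.0 if plan["courtyard"] else 0.0)

# A new, domain-specific dataset: architectural typologies (invented).
training = [
    ({"rooms": 2, "courtyard": False}, "apartment"),
    ({"rooms": 3, "courtyard": False}, "apartment"),
    ({"rooms": 6, "courtyard": True}, "courtyard house"),
    ({"rooms": 8, "courtyard": True}, "courtyard house"),
]

# "Tweaking the output": fit a nearest-centroid classifier on the new
# data, leaving the feature extractor itself untouched.
centroids = {}
for label in {lab for _, lab in training}:
    feats = [pretrained_features(p) for p, lab in training if lab == label]
    centroids[label] = tuple(sum(dim) / len(dim) for dim in zip(*feats))

def classify(plan):
    f = pretrained_features(plan)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lab])))

print(classify({"rooms": 7, "courtyard": True}))  # courtyard house
```

The nearest-centroid head is the simplest possible “tweak”; in practice such pipelines fine-tune a neural classifier, but the division of labour – reused black box, retrained output – is the same.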

Are there any more comments, or questions from the audience? We had a pretty amazing rate of people not dropping out.  

PN: 

Actually, when you were asking the question about visual computation versus algebraic code computation, I wasn’t exactly sure why it was asked as a question. Maybe it’s because it’s 1 am, but it actually reminded me of John Nash, the guy who got the Nobel Prize for game theory. When he was 25, before he developed mental illness, he was actually famous for the embedding theorem, looking at high-dimensional objects and whether you can actually embed them in any Euclidean space. We usually visualise this sort of embedding like a donut, with a lot of waves flowing through the donut, but when they interviewed John Nash, everything in his brain was numbers; he was never really a visual person. He completely hated the movie A Beautiful Mind [a biopic of John Nash] because he didn’t see things [in the way it portrayed], like his schizophrenia was a miracle – I mean, that’s crazy to a very banal brain like mine. 

I don’t really see the visual and the algebraic as either/or – and also, if you look at Chinese mathematics, as Philippe also showed, the entire Book of Changes, the I Ching with the hexagrams, was not visual. They literally document everything with Chinese characters – and it’s crazy when you have to read through that, because China is an agricultural nation, so we measure everything pragmatically. The mathematics is metaphysical, but in the I Ching we’re measuring the depth of the soil, how much rain we need, and they would write down “12345” in those complex characters and people would still manage to do the geometrical calculation in their mind, which is crazy. 

When talking about Facebook data, there is always this privacy/ethical question that I agree is becoming theological and inescapable – but maybe it’s just because of the mindset that we feel like we’re always dependent on a centralised platform. We’re actually making a sort of trade, where we surrender the data because they’re doing a social service for us. A computational service that would be hard to do as an individual. So maybe the mindset is, as opposed to passively surrendering data, is there a way to actively contribute data so that we get over the data privacy problem? 

AB: 

I was thinking about how, for example, architecture data is scarce. When we did this research on technologies, it was really hard to find this data. Where do you go? You need to go into the old registries of each city to find the undigitised maps, and try to redraw them and things like that, so we also live in that reality.  

Also when you mentioned the abstraction versus visual idea, I was reminded of my dad, who in his career was a computer programmer, and how he always says that he sees the numbers and not the visual things, so for me this is slightly triggering on some levels!  

Anyway, any more comments or questions? 

PN: 

It’s actually like CAPTCHA, right, what they really do is that they don’t hire an intern to label a data set, but instead they create an economy by distributing the labeling tasks to users, match-making two problems – problems in training machine vision and in validating humans – [to create a solution].  
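The CAPTCHA economy described here can be sketched as follows: each task pairs an item with a known answer (the “gold” check, which validates the human) with an unlabeled item, whose answer is harvested as training data. This is a simplification of how systems like reCAPTCHA are commonly described; the item ids and answers are invented:

```python
# Item ids and answers are invented; "gold" items have known answers.
gold = {"img_001": "storefront"}   # the validation item
unknown_labels = {}                # labels harvested from validated users

def submit(user_answers):
    """user_answers maps item ids to one user's transcriptions."""
    # 1. Validate the human against the gold item.
    if user_answers.get("img_001") != gold["img_001"]:
        return False               # failed the check: discard the answers
    # 2. Keep their answers to the unknown items as candidate labels.
    for item, answer in user_answers.items():
        if item not in gold:
            unknown_labels.setdefault(item, []).append(answer)
    return True

submit({"img_001": "storefront", "img_002": "bridge"})
print(unknown_labels)  # {'img_002': ['bridge']}
```

One interaction thus solves two problems at once: the gold item validates the human, and the unknown item gains a label.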

AB: 

Yeah, we’re waiting for a start-up to deal with the architectural algorithm!  

PN: 

(Laughs) [Get people to label] doors and windows for BIM? 

AB: 

Exactly. That perhaps is a good implementation.  

All right, I’m thinking that I will close this amazing session here today, just because, again, we were meant to finish at five! 

I’m really grateful to all of you for your contributions; today was an amazing and stellar way to present the journal that will come out next year. So thank you so much for this discussion. It’s really precious; for me, a lot of these ideas were fruitful in amplifying the conversation on computational design. As we have seen, augmenting the literacy, the discourse and its different threads, and even the historical grounding of this discourse, is fundamental.  

So, thank you so much. 

Figure 5 Fun Palace in London before Demolition [61] 
Architectural Authorship in “the Last Mile”
Architectural Authorship, automation, digitalisation, Fun Palace, Leon Battista Alberti, mass-customisation, the Last Mile
Yixuan Chen

y.chen.20@alumni.ucl.ac.uk
Add to Issue
Read Article: 6617 Words

Introduction 

A loyal companion to the breakthroughs of artificial intelligence is the fear of losing jobs to a robotic takeover of the labour market. Mary L. Gray and Siddharth Suri’s research on ghost work unveiled another possible future, in which a “last mile” requiring human intervention would always exist on the journey towards automation. [1] The so-called “paradox of the last mile” has exerted its influence on the human labour market throughout the industrial age, repeatedly reorganising itself as it absorbs marginalised groups into its territory. These groups range from child labourers in factories, to the “human computer” women of NASA, to on-demand workers from Amazon Mechanical Turk (MTurk). [2] Yet their strenuous efforts are often rendered invisible behind the ostensibly neutral algorithmic form of the automation process, creating “ghost work”. [3] 

Based on this concept of “the last mile”, this study intends to excavate how its paradox has influenced architectural authorship, especially during architecture’s encounters with digital revolutions. I will firstly contextualise “architectural authorship” and “the last mile” in previous studies. Then I will discuss the (dis)entanglements between “automation” and “digitalisation”. Following Antoine Picon and Nicholas Negroponte, I distinguish between the pre-information age, information age and post-information age before locating my arguments according to these three periods. Accordingly, I will study how Leon Battista Alberti, the Fun Palace, and mass-customised houses fail in the last mile of architectural digitalisation and how these failures affect architectural authorship. From these case studies, I challenge the dominant narrative of architectural authorship, either as divinity or total dissolution. In the end, I contend that it is imperative to conceive architectural authorship as relational and call for the involvement of multi-faceted agents in this post-information age. 

Academic Context 

Architectural Authorship in the Digital Age 

The emergence of architects’ authorial status can be dated back to Alberti’s De re aedificatoria, which states that “the author’s original intentions” should be sustained throughout construction. [4] Yet at the same time, those architects should keep a distance from the construction process. [5] It not only marks the shift from the artisanal authorship of craftsmen to the intellectual authorship of architects but also begets the divide between the authorship of architectural designs and architectural end products. [6] However, this tradition can be problematic in the digital age, when multi-layered authorship becomes feasible with the advent of mass-collaboration software and digital customisation technologies. [7] 

Based on this, Antoine Picon has argued that, despite attempts by collaborative platforms such as BIM to include various actors, architects have entered a Darwinian world of competition with engineers, constructors and existing monopolies to maintain their prerogative authorship over the profession. [8] These challenges have shifted attention within the profession from authorship as architects to ownership as entrepreneurs. [9] Yuan and Wang, on the other hand, call for a reconciliation of architectural authorship between regional traditions and technologies from a pragmatic perspective. [10] These accounts, however, remain fettered to positioning architects at the centre of analysis. In the following article, I will introduce “the last mile”, a theory from the field of automation, to provide another perspective on the issues of architectural authorship. 

“The Last Mile” as Method 

The meaning of “the last mile” has changed several times throughout history. Metaphorically, it was used to indicate the distance between the status quo and a goal in various fields, such as film, legal negotiation, and presidential campaigns. [11] It was first introduced in the technology industry as “the last mile” of telecommunication, one of the earliest traceable records of which dates from the late 1980s. [12] Afterwards, “the last mile” of logistics came into wide use in the early 2000s, following the dot-com boom of the late 90s that fuelled discussions of B2C eCommerce. [13] In this article, however, I will use “the last mile” of automation, a concept from the “AI revolution” underway since 2010, to reconsider architectural authorship. [14] In this context, “the last mile” of automation refers to “the gap between what a person can do and what a computer can do”, as Gray and Suri define it in their book. [15] 

I employ this theory to discuss architectural authorship for two purposes.  

1. Understanding the paradox of automation helps in understanding how architectural authorship changes along with technological advancements. Pasquinelli and Joler suggest that “automation is a myth”, because machines have never entirely operated by themselves without human assistance, and might never do so. [16] Hence arises the paradox that “the desire to eliminate human labour always generates new tasks for humans”, a shortcoming that has “stretched across the industrial era”. [17] Though confined within the architectural profession, architectural authorship changes in parallel with these alterations of labour tasks. 

2. I contend that changes in the denotations of “the last mile” signal turning points in both digital and architectural history. As Figure 1 suggests, in digital history the implication of the last mile has shifted from the transmission of data to the analysis of data, and then to automation based on data. The former change was in step with the arrival of the small-data environment in the 1990s, and the latter corresponds with the leap towards the big-data environment around 2010. [18] In a similar fashion, with the increasing availability of personal computers after the 90s, the digital spline in architecture found formal expression, and from around 2010 onwards the spirit of interactivity and mass-collaboration began to take root in the design profession. [19] Therefore, revisiting the digital history of architecture from the angle of “the last mile” can not only provide alternative readings of architectural authorship in the past but also indicate how the future might be influenced. 

Figure 1 Changes of Meanings for “the Last Mile” in Digital History, and Digital Turns in Architectural History. 

Between Automation and Digitalisation 

Before elucidating how architectural authorship was changed by the arrival of the automated/digital age, it is imperative to distinguish two concepts mentioned in the previous section – automation and digitalisation. To begin with, although “automation” first came into use in the automotive industry in 1936, to describe “the automatic handling of parts”, what the phrase alludes to has long been rooted in history. [20] As Ekbia and Nardi define it, automation essentially relates to labour-saving mechanisms that reduce the human burden by transferring it to machines in labour-requiring tasks, both manual and cognitive. [21] Despite its use throughout human history, it was not until the emergence of digital computers after WWII that its meaning became widely applicable. [22] The notion of computerised automation was put forward by the computer scientist Michael Dertouzos in 1979, highlighting its potential for tailoring products on demand. [23] With respect to cognitive tasks, artificial intelligence that mimics human thinking is employed to tackle functions concerning “data processing, decision making, and organizational management”. [24] 

Digitalisation, on the other hand, is a more recent concept engendered by the society of information in the late 19th century, according to Antoine Picon. [25] This period was later referred to as the Second Industrial Revolution, when mass-production was made possible by a series of innovations, including electrical power, automobiles, and the internal combustion engine. It triggered what Beniger called the “control revolution” – the volume of data exploded to the degree that it begot revolutions in information technology. [26] Crucial to this revolution was the invention of digital computing, which brought about a paradigm shift in the information society. [27] It has changed “the DNA of information” in the sense that, as Nicholas Negroponte suggests, “all media has become digital”, by converting information from atoms to bits. [28] In this sense, Negroponte distinguishes between the information age, which is based on economics of scale, and the post-information age, founded on personalisation. [29] 

It can be observed that automation and digitalisation are intertwined in multiple ways. Firstly, had there been no advancement in automation during the Second Industrial Revolution, there would have been no need to develop information technology, as data would have remained at a manageable level. Secondly, the advent of digital computers has further intermingled these two concepts, to the extent that, in numerous cases, for something to be automated it first needs to be digitalised, and vice versa. In the architectural field alone, examples can be found in cybernetics in architecture and planning, digital fabrication, smart materials, and so on. Hence, although the two terms are fundamentally different – most obviously, automation concerns processes of input and output, while digitalisation concerns information media – the following analysis makes no attempt to differentiate between them. Instead, I discuss “the last mile” in the context of the reciprocity between these two concepts. After all, architecture itself sits at the convergence point between material objects and media technologies. [30] 

Leon Battista Alberti: Before the Information Age 

Digitalisation efforts made by architects, however, appear to have come earlier than such attempts in the industrial settings of the late 19th century. This spirit can be traced back to Alberti’s insistence on identicality during information transmission, achieved by compressing two-dimensional and three-dimensional information into digits – as exemplified by Descriptio Urbis Romae and De statua. [31] In terms of architecture, as mentioned previously, he positions built architecture as an exact copy of the architect’s intention. [32] This stance might be influenced by his views on painting. First, he maintains that all arts, including architecture, are subordinate to painting, from which “the architraves, the capitals, the bases, the columns, the pediments, and all other similar ornaments” came. [33] Second, in his account, “the point is a sign” that can be seen by the eyes, the line is joined from points, and the surface from lines. [34] As a result, the link between signs and architecture is established through painting, since architecture is derived from painting and painting from points/signs.  

Furthermore, architecture can also be built according to the given signs. In Alberti’s words, “the whole art of buildings consists in the design (lineamenti), and in the structure”, and by lineamenti, he means the ability of architects to find “proper places, determinate numbers, just proportion and beautiful order” for their constructions. [35] It can be assumed that, if buildings are to be identical to their design, then, to begin with, there must be “determinate numbers” to convey architects’ visions by digital means – such as De statua (Fig. 2). Also, in translating the design into buildings, these numbers and proportions should be unbothered by any distortions as they are placed in actual places – places studied and measured by digital means, just like Descriptio Urbis Romae (Fig. 2). 

Although the Albertian design process reflects the spirit of the mechanical age, insisting on the identicality of production, it can be argued that his pursuit of precise copying was also influenced by his pre-modern digital inventions being used to manage data. [36] Therefore, what signs/points mean to architecture for Alberti can be compared to what bits mean to information for Negroponte, as the latter is composed of the former and can be retrieved from the former. Ideally, this translation process can be achieved by means of digitalisation. 

Figure 2 Descriptio Urbis Romae (Left) and De statua (Right) [37] 

Yet it is obvious that the last mile for Alberti was vastly longer than that for Negroponte. As Giorgio Vasari noted in the case of the Servite Church of the Annunziata, while Alberti’s drawings and models were employed for the construction of the rotunda, the result turned out to be unsatisfactory, with the arches of the nine chapels falling backwards from the tribune due to construction difficulties. [38] Also, in the loggia of the Via della Vigna Nuova, his initial plan to build semi-circular vaults was aborted because of the inability to realise this shape on-site. [39] These two cases suggest that the allographic design process – employing precise measurements and construction – which heralded modern digital modelling software and 3D-printing technologies, was deeply problematic in Alberti’s time. 

This problem was recognised by Alberti himself in his De re aedificatoria, when he wrote that to be “a wise man”, one cannot stop in the middle or at the end of one’s work and say, “I wish that were otherwise”. [40] In Alberti’s opinion, this problem can be offset by making “real models of wood and other substances”, as well as by following his instruction to “examine and compute the particulars and sum of your future expense, the size, height, thickness, number”, and so on. [41] While models can be completed without being exactly precise, architectural drawings should achieve the exactness measured “by the real compartments founded upon reason”. [42] According to these descriptions, the design process conceived by Alberti can be summarised as Figure 3. 

Figure 3 Albertian Design Process 

If, as previously discussed, architecture and its context can be viewed as an assembly of points and signs, the Albertian design process can be compared to how these data are collected, analysed and judged until the process reaches the “good to print” point – the point when architects exit and construction begins. Nonetheless, what Vasari has unveiled is that the collection, analysis and execution of data can fail due to technological constraints, and this failure impedes architects from making a sensible judgement. Here, the so-called “technological constraints” are what I consider to be “the last mile” that can be found across the Albertian design process. As Vasari added, many of these technological limitations at that time were surmounted with the assistance of Salvestro Fancelli, who realised Alberti’s models and drawings, and a Florentine named Luca, who was responsible for the construction process. [43] Regardless of these efforts, Alberti remarked that only people involved in intellectual activities – especially mathematics and paintings – are architects; the opposite of craftsmen. [44] Subsequently, the challenges of confronting “the last mile” are removed from architects’ responsibilities through this ostensibly neutral design process, narrowing the scope of who is eligible to be called an architect. The marginalisation of artisanal activities, either those of model makers, draughtsmen or craftsmen, is consistent with attributing the laborious last mile of data collection, analysis and execution – measuring, model making, constructing – exclusively to their domain. 

While the division of labour is necessary for architecture, as John Ruskin argued, it would be “degraded and dishonourable” if manual work were less valued than intellectual work. [45] For this reason, Ruskin praised Gothic architecture with respect to the freedom granted to craftsmen to execute their own talents. [46] Such freedom, however, can be expected if the last mile is narrowed to the extent that, through digitalisation/automation, people can be at the same time both architects and craftsmen. Or can it? 

Fun Palace: At the Turn of the Information and Post-Information Age 

Whilst the Albertian allographic mode of designing architecture has exerted a profound impact on the architectural discipline, owing to subsequent changes in the way architects have been trained, from the site to the academy, this ambition of separating design from building was not fulfilled, or even agreed upon among architects, by the second half of the 20th century. [47] Besides, the information age based on scale had limited influence on architectural history, beyond bringing about a new functional area – the control room. [48] Architecture’s first encounters with the digital revolution after Alberti’s pre-modern technologies can be traced back to the 1960s, when architects envisaged futuristic cybernetics-oriented environments. [49] Different from Alberti’s emphasis on the identicality of information – the information per se – this time, digitalisation and information in architecture conveyed a rather different message. 

Gordon Pask defined cybernetics as “the field concerned with information flows in all media, including biological, mechanical, and even cosmological systems”. [50] By emphasising the flow of data – rather than the information per se – cybernetics distinguishes itself in two aspects. Firstly, it is characterised by attempts at reterritorialization – it breaks down the boundaries between biological organisms and machines, between observers and systems, and between observers, systems and their environments, during its different development phases – categorised respectively as first-order cybernetics (1943-1960), second-order cybernetics (1960-1985) and third-order cybernetics (1985-1996). [51]  

Secondly, while data and information became secondary to their flow, catalysed by technologies and mixed realities, cybernetics is also typified by the construction of frameworks. [52] The so-called framework was initially perceived as a classifying system for all machines, and later, after computers were made more widely available and powerful, it began to be recognised as the computational process. [53] This thinking also leads to Stephen Wolfram’s assertion that the physical reality of the whole universe is generated by the computational process and is itself a computational process. [54] This is where the fundamental difference is between the Albertian paradigm and cybernetics, as the former is based on mathematical equations and the latter attempts to understand the world as a framework/computation. [55] Briefly, in cybernetics theory, information per se is subordinate to the flow of information and this flow can again be subsumed into the framework, which is later known as computational processes (Fig. 4). 

Figure 4 Information in Cybernetics Theory 

In Cedric Price’s Fun Palace, this hierarchical order resulted in what Isozaki described as “erasing architecture into system” after its partial completion (Fig. 5). [56] Such an erasure of architecture was rooted in the conceptual process, since the cybernetics expert in charge of the Fun Palace was Gordon Pask, who founded his theory and practice on second-order cybernetics. [57] Especially so, considering that one major feature of second-order cybernetics is what Maturana and Varela termed “allopoiesis” – a process of producing something other than the system’s original component – it is understandable that if the system is architecture, it would generate something other than architecture. [58] In the case of the Fun Palace, it was presupposed that architecture is capable of generating social activities, and that architects can become social controllers. [59] More importantly, Cedric Price rejected all that is “designed” and instead made only sketches of indistinct elements, diagrams of forces, and functional programs, rather than architectural details. [60] All these ideas, highlighting the potential of regarding architecture as a framework of computing – in contrast to seeing architecture as information – rendered the system more pronounced and set architecture aside. 

Figure 5 Fun Palace in London before Demolition [61] 

By rejecting architecture as pre-designed, Price and Littlewood strived to problematize the conventional paradigm of architectural authorship. They highlighted that the first and foremost quality of the space should be its informality, and that “with informality goes flexibility”. [62] This envisages user participation by rebuking fixed interventions by architects such as permanent structures or anchored teak benches. [63] In this regard, flexibility is no longer positioned as a trait of buildings but that of use, by encouraging users to appropriate the space. [64] As a result, it delineates a scenario of “the death of the author” in which buildings are no longer viewed as objects by architects, but as bodily experiences by users – architectural authorship is shared between architects and users. [65] 

However, it would be questionable to claim the anonymity of architectural authorship – anonymous in the sense of “the death of the author” – based on an insignificant traditional architectural presence in this project, as Isozaki did. [66] To begin with, Isozaki himself has remarked that in its initial design, the Fun Palace would have been “bulky”, “heavy”, and “lacking in freedom”, indicating the deficiency of transportation and construction technologies at that time. [67] Apart from the last mile to construction, as Reyner Banham explained, if the Fun Palace’s vision of mass-participation is to be accomplished, three premises must be set – skilful technicians, computer technologies that ensure interactive experiences and programmable operations, and a secure source of electricity connecting to the state grid. [68] While the last two concerns are related to technological and infrastructural constraints, the need for technicians suggests that, despite its claim, this project is not a fully automated one. The necessary involvement of human factors to assist this supposedly automated machine can be further confirmed in Price and Littlewood’s accounts that “the movement of staff, piped services and escape routes” would be contained within “stanchions of the superstructure”. [69] Consequently, if architects can extend their authorship by translating elements of indeterminacy into architectural flexibility, and users can be involved by experiencing and appropriating the space, it would be problematic to leave the authorship of these technicians unacknowledged and confine them within service pipes. [70] 

The authorship of the Fun Palace is further complicated when the content of its program is scrutinized. Price and Littlewood envisaged that people’s activities would feed into the system, and that decisions would be made according to this information. [71] During this feed-in and feedback process, human activities would be quantified and registered in a flow chart (Fig. 6). [72] However, the hand-written list of proposed activities in Figure 6 shows that human engagement is inseparable from the ostensibly automated flow chart. The arrows and lines mask the human labour that is essential for observing, recognising, and classifying human activities. These tasks are the last mile of machine learning, which still requires heavy human participation even in the early 21st century. 

For instance, when, in 2007, the artificial intelligence project ImageNet was developed to recognise and identify the main object in pictures, developers found it impossible to increase the system’s accuracy by developing the AI alone (and only assisting it when it failed). [73] Ultimately, they improved the accuracy of ImageNet’s algorithms by finding a “gold standard” for labelling the objects – not through the development of the AI itself, but by using 49,000 on-demand workers from the online outsourcing platform MTurk to perform the labelling process. [74] This example suggests that if the automation promised by the Fun Palace is to be achieved, it is likely to require more than just the involvement of architects, users, and technicians. At the time of the Fun Palace’s original conception, the attempt went unfulfilled due to the limitations of computing technologies. Yet if such an attempt were made in the 2020s, it is likely that architectural authorship would be shared among architects, users, technicians, and ghost workers from platforms such as MTurk. 

Figure 6 Cybernetic Diagram (Left) and Proposed Activities (Right) [75] 

Returning to the topic of cybernetics: whilst cybernetic theories tend to redefine the territories of the architectural system by including what were previously other parts of the system – machines, observers, adaptive environments – the example of the Fun Palace has shown that this process of blurring boundaries would not be possible without human assistance, at least initially. The flow of information between these spheres would require human interventions to make this process feasible and comprehensible because, in essence, “the information source of machine learning (whatever its name: input data, training data or just data) is always a representation of human skills, activities and behaviours, social production at large”. [76] 

Houses of Mass-Customisation: In the Post-information Age 

Although cybernetic theories have metaphorically or practically influenced architectural discourse in multiple ways, from Metabolism and Archigram to Negroponte and Cedric Price, such impact diminished after the 1970s, in parallel with the near-total banishment of cybernetics as an independent discipline in academia. [77] After a long hibernation during “the winter of artificial intelligence”, architecture’s next encounter with digital revolutions happened in the 1990s. [78] It was triggered by the increasing popularity and affordability of personal computers – contrary to the expectations of cybernetics engineers, who back in the 1960s dreamt that computers would increase both in power and size. [79] These distinctive material conditions led to the underlying difference between the second-order cybernetics of the 1960s and architecture’s first digital turn in the 1990s. I contend that this distinction can be explained by comparing Turing’s universal machine with Deleuze’s notion of the “objectile”. 

As Stanley Mathews argued, the Fun Palace works in the same way as the universal machine. [80] The latter is a precursor of modern electronic computers, which can function as different devices – either as typewriters, drawing boards, or other machines – according to different codes they receive (Fig. 7). [81] Comparatively, “objectile” connotes a situation in which a series of variant objects is produced based on their shared algorithms (Fig. 8). [82] These products are so-called “non-standard series” whose key definition relates to their variance rather than form. [83] 

Figure 7 Simplified Diagram of the Universal Machine 
Figure 8 Non-standard Production 

While the universal machine seems to require more power to support its every change – an infinite one-dimensional tape on which its programmers can mark symbols of any instructions in order to claim its universality – non-standard production can operate on a smaller scale and in less demanding environments. [84] The emphasis on variance in non-standard production processes also indicates a shift of attention from the “process” underscored by second-order cybernetics towards the product of certain parametric models. When the latter is applied to architecture, the physical building regains its significance as the variable product. 
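To make the comparison concrete, the tape-and-instruction mechanism of the universal machine can be sketched in a few lines. The machine, states, and rule table below are our own illustrative inventions, not drawn from the sources discussed; the same interpreter runs as a different device depending on the rules it is given.

```python
# Minimal sketch of a universal-machine-style tape computation.
# The rule table below is illustrative: it flips each bit on the tape.
def run_turing_machine(tape, rules, state="start", halt="halt"):
    """Execute (state, symbol) -> (new_symbol, move, new_state) rules."""
    tape = dict(enumerate(tape))  # one-dimensional tape, indexed by position
    head = 0
    while state != halt:
        symbol = tape.get(head, "_")  # "_" marks a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("1011", flip_bits))  # -> 0100
```

Swapping `flip_bits` for another rule table turns the same machine into a different device – the point Mathews borrows for the Fun Palace.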

However, it does not mean a total cut-off between cybernetics and non-standard production. Since human-machine interactions are crucial for customising according to users’ input, I maintain that mass-customisation reconnects architecture with first-order cybernetics whilst resisting the notion of chaos and complexity intrinsic in second-order cybernetics.  

Figure 9 Flatwriter [85] 

Such correlation can be justified by comparing two examples. First, the visionary project Flatwriter (1967) by the Hungarian architect Yona Friedman proposed a scenario in which users can choose their preferred apartment plan from several patterns of spatial configurations, locations, and orientations. [86] Based on their preferences, they would receive optimised feedback from the system (Fig. 9). [87] This optimisation process would consider issues concerning access to the building, comfortable environments, lighting, communication, and so on. [88] Given that it rejects chaos and uncertainty by adjusting users’ selections for certain patterns of order and layout, this user-computer interaction system is essentially an application of first-order cybernetics, as Yiannoudes argued. [89] Contemporary open-source architectural platforms are based on the same logic. As the founder of WikiHouse argued, since the target group of mass-customisation is the 99 per cent who are constantly overlooked by the normative production of buildings after the retreat of state intervention, designing “normal” environments for them is the primary concern – transgression and disorder should be set aside. [90] As Figure 10 illustrates, similarly to Flatwriter, in theory, WikiHouse would pre-set design rules and offer design proposals according to calculations of the parametric model. [91] These rules would follow a “LEGO-like system”, which produces designs by arranging and composing standard types or systems. [92] Both Flatwriter’s optimisation and WikiHouse’s “LEGO-like system” are pursuing design in accordance with patterns, and discouraging chaotic results. 

Figure 10 Designing Process for a WikiHouse [93] 

Nevertheless, neither Flatwriter nor WikiHouse has achieved what is supposed to be an automatic process of using parametric models to generate a variety of designs. For Flatwriter, the last mile of automation could be ascribed to the unavailability of computers capable of performing the calculations or processing the images. For WikiHouse, the project has not yet fulfilled its promise of developing algorithms for design rules that resemble how the “LEGO blocks” are organised. Specifically, at the current stage, the plans, components and structures of WikiHouse are designed in SketchUp by hand. [94] The flexibility granted to users is achieved by grouping plywood lumber into components and allowing users to duplicate them (Fig. 11). Admittedly, if users are proficient in SketchUp, they could possibly customise their WikiHouse on demand – but that would then go against the promise of democratising buildings through open-source platforms. [95]  

Figure 11 SketchUp Models of WikiHouse [96] 

Consequently, the last mile of automation again causes a conundrum of architectural authorship. Firstly, in both cases, never mind “the death of the author”, it appears that there is no author to be identified. One can argue that it signals a democratic spirit, anonymising the once Howard Roark-style architects and substituting them with a “creative common”. Nonetheless, it must be cautioned that such substitution takes time, and during this time, architects are obliged to be involved when automation fails. To democratise buildings is not to end architects’ authorship over architecture, but conceivably, for a long time, to be what Ratti and Claudel called “choral architects”, who are at the intersection of top-down and bottom-up, orchestrating the transition from the information age of scale to the post-information age of collaboration and interactivity. [97] Although projects with similar intentions of generating design and customising housing through parametric models – such as Intelligent City and Nabr – may prove to be more mature in their algorithmic process, architects are still required to coordinate across extensive sectors – clients’ inputs, design automation, prefabrication, logistics, and construction. [98] Architectural authorship in this sense is not definitive but relational, carrying multitudes of meanings and involving multiplicities of agents. [99]  

In addition, it would be inaccurate to claim architectural authorship for the user, even though these projects all prioritise users’ opinions in the design process. By hailing first-order cybernetics while rejecting the second order, advocating order while disapproving of disorder, they risk the erasure of architectural authorship – just as those who play with LEGO do not have authorship over the brand, to extend the metaphor of the “LEGO-like system” in WikiHouse. This is especially so because the digital turn in terms of technology does not guarantee a cognitive turn in terms of thinking. [100] Assuming that the capitalist characteristics of production do not change, technological advancements are likely to be appropriated by corporate and state power, by means of either monopoly or censorship.  

Figure 12 Non-standard Production After Repositioning Users 

This erasure of human agency should be further elucidated in relation to the suppression of chaos in these systems. As Robin Evans explained, there are two types of methods to address chaos: (1) preventing humans from making chaos by organising humans; and (2) limiting the effects of chaotic environments by organising the system. [101] While Flatwriter and WikiHouse conform to the former at the expense of diminished human agency, it is necessary to reinvite observers and chaos as an integral part of the system on the way towards mass-customisation and mass-collaboration (Fig. 12). 

Conclusion 

For Walter Benjamin, “the angel of history” moves into the future with its face turned towards the past, where wreckage is piled upon wreckage. [102] For me, addressing the paradox of “the last mile” in the history of architectural digitalisation is this backward gaze, one that can possibly provide a different angle from which to look into the future.  

This article mainly discussed three moments in architectural history when technology failed to live up to the expectation of full automation/digitalisation. Such failure is where “the last mile” lies. I employ “the last mile” as a perspective to scrutinize architectural authorship in these moments of digital revolutions. Before the information age, the Albertian notational system can be regarded as one of the earliest attempts to digitalise architecture. Alberti’s insistence on the identical copying between designers’ drawings and buildings resulted in the divide between architects as intellectuals and artisans as labourers. However, this allographic mode of architectural authorship was not widely accepted even into the late 20th century.  

At the turn of the information age and post-information age, Cedric Price’s Fun Palace was another attempt made by architects to respond to the digital revolution in the post-war era. It was influenced by second-order cybernetics theories that focused on the flow of information and the computational process. Buildings were deemed only as a catalyst, and architectural authorship was shared between architects and users. Yet by examining how the Fun Palace failed in the last mile, I put forward the idea that this authorship should also be attributed to technicians and ghost workers assisting the computation processes behind the stage. 

Finally, I analysed two case studies of open-source architectural platforms established for mass-customisation. By comparing Flatwriter of the cybernetics era and WikiHouse of the post-information age, I cautioned that both systems degrade architectural authorship into emptiness, by excluding users and discouraging acts of chaos. Also, by studying how these systems fail in the last mile, I position architects as “choral architects” who mediate between the information and post-information age. Subsequently, architectural authorship in the age of mass-customisation and mass-collaboration should be regarded as relational, involving actors from multiple positions. 

References

  1. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (New York: Houghton Mifflin Harcourt Publishing Company, 2019).
  2. Gray and Suri.
  3. Gray and Suri.
  4. Mario Carpo, The Alphabet and the Algorithm (London: The MIT Press, 2011), p. 22.
  5. Carpo, The Alphabet and the Algorithm, p. 22.
  6. Carpo, The Alphabet and the Algorithm, pp. 22–23.
  7. Mario Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, MA: The MIT Press, 2017), pp. 131, 140.
  8. Antoine Picon, ‘From Authorship to Ownership’, Architectural Design, 86.5 (2016), pp. 39–40.
  9. Picon, ‘From Authorship to Ownership’, pp. 39 & 41.
  10. Philip F. Yuan and Xiang Wang, ‘From Theory to Praxis: Digital Tools and the New Architectural Authorship’, Architectural Design, 88.6 (2018), 94–101 (p. 101) <https://doi.org/10.1002/ad.2371>.
  11. ‘“The Last Mile” An Exciting Play’, New Leader with Which Is Combined the American Appeal, 10.18 (1930), 6; Benjamin B Ferencz, ‘Defining Aggression–The Last Mile’, Columbia Journal of Transnational Law, 12.3 (1973), 430–63; John Osborne, ‘The Last Mile’, The New Republic (Pre-1988) (Washington, 1980), 8–9.
  12. Donald F Burnside, ‘Last-Mile Communications Alternatives’, Networking Management, 1 April 1988, 57-.
  13. Mikko Punakivi, Hannu Yrjölä, and Jan Holmström, ‘Solving the Last Mile Issue: Reception Box or Delivery Box?’, International Journal of Physical Distribution and Logistics Management, 31.6 (2001), 427–39 <https://doi.org/10.1108/09600030110399423>.
  14. Gray and Suri, p. 12.
  15. Gray and Suri, p. 12.
  16. Matteo Pasquinelli and Vladan Joler, ‘The Nooscope Manifested: AI as Instrument of Knowledge Extractivism’, 2020, pp. 1–23 (p. 19) <https://doi.org/10.1007/s00146-020-01097-6>.
  17. Gray and Suri, pp. 12 & 71.
  18. Carpo, The Second Digital Turn: Design Beyond Intelligence, pp. 9, 18 & 68.
  19. Carpo, The Second Digital Turn: Design Beyond Intelligence, pp. 5, 18 & 68.
  20. James Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (London: Harvard University Press, 1986), p. 295.
  21. Hamid R. Ekbia and Bonnie Nardi, Heteromation, and Other Stories of Computing and Capitalism (Cambridge, Massachusetts: The MIT Press, 2017), p. 25.
  22. Ekbia and Nardi, pp. 25–26.
  23. Michael L. Dertouzos, ‘Individualized Automation’, in The Computer Age: A Twenty-Year View, ed. by Michael L. Dertouzos and Joel Moses, 4th edn (Cambridge, Massachusetts: The MIT Press, 1983), p. 52.
  24. Ekbia and Nardi, p. 26.
  25. Antoine Picon, Digital Culture in Architecture : An Introduction for the Design Professions (Basel: Birkhäuser, 2010), p. 16.
  26. Beniger, p. 433.
  27. Picon, Digital Culture in Architecture : An Introduction for the Design Professions, pp. 24–26.
  28. Nicholas Negroponte, Being Digital (New York: Vintage Books, 1995), pp. 11 & 16.
  29. Negroponte, pp. 163–64.
  30. Carpo, The Alphabet and the Algorithm, p. 12.
  31. Carpo, The Alphabet and the Algorithm, pp. 54–55.
  32. Carpo, The Alphabet and the Algorithm, p. 26.
  33. Leon Battista Alberti, On Painting, trans. by Rocco SiniSgalli (Cambridge: Cambridge University Press, 2011), p. 45.
  34. Alberti, On Painting, p. 23.
  35. Leon Battista Alberti, The Ten Books of Architecture (Toronto: Dover Publications, Inc, 1986), p. 1.
  36. Carpo, The Alphabet and the Algorithm, p. 27.
  37. ‘Architectural Intentions from Vitruvius to the Renaissance’ [online] <https://f12arch531project.files.wordpress.com/2012/10/xproulx-4.jpg>; ‘Alberti’s Diffinitore’ <http://www.thesculptorsfuneral.com/episode-04-alberti-and-de-statua/7zf3hfxtgyps12r9igveuqa788ptgj> [accessed 23 April 2021].
  38. Giorgio Vasari, The Lives of the Artists, trans. by Julia Conaway & Peter Bondanella (Oxford: Oxford University Press, 1998), p. 182.
  39. Vasari, p. 181.
  40. Alberti, The Ten Books of Architecture, p. 22.
  41. Alberti, The Ten Books of Architecture, p. 22.
  42. Alberti, The Ten Books of Architecture, p. 22.
  43. Vasari, p. 183.
  44. Mary Hollingsworth, ‘The Architect in Fifteenth-Century Florence’, Art History, 7.4 (1984), 385–410 (p. 396).
  45. Adrian Forty, Words and Buildings: A Vocabulary of Modern Architecture (New York: Thames & Hudson, 2000), p. 138.
  46. Forty, p. 138.
  47. Forty, p. 137; Carpo, The Alphabet and the Algorithm, p. 78.
  48. Picon, Digital Culture in Architecture : An Introduction for the Design Professions, p. 20.
  49. Mario Carpo, ‘Myth of the Digital’, Gta Papers, 2019, 1–16 (p. 3).
  50. N. Katherine Hayles, ‘Cybernetics’, in Critical Terms for Media Stuies, ed. by W.J.T. Mitchell and Mark B.N. Hansen (Chicago and London: The University of Chicago Press, 2010), p. 145.
  51. Hayles, p. 149.
  52. Hayles, pp. 149–50.
  53. Socrates Yiannoudes, Architecture and Adaptation: From Cybernetics to Tangible Computing (New York and London: Taylor & Francis, 2016), p. 11; Hayles, p. 150.
  54. Hayles, p. 150.
  55. Stephen Wolfram, A New Kind of Science (Champaign: Wolfram Media, Inc., 2002), pp. 1, 5 & 14.
  56. Arata Isozaki, ‘Erasing Architecture into the System’, in Re: CP, ed. by Cedric Price and Hans-Ulrich Obrist (Basel: Birkhäuser, 2003), pp. 25–47 (p. 35).
  57. Yiannoudes, p. 29.
  58. Yiannoudes, p. 14.
  59. Stanley Mathews, ‘The Fun Palace as Virtual Architecture: Cedric Price and the Practices of Indeterminacy’, Journal of Architectural Education, 59.3 (2006), 39–48 (p. 43); Yiannoudes, p. 26.
  60. Isozaki, p. 34; Yiannoudes, p. 50.
  61. Stanley Mathews, p. 47.
  62. Cedric Price and Joan Littlewood, ‘The Fun Palace’, The Drama Review, 12.3 (1968), 127–34 (p. 130).
  63. Price and Littlewood, p. 130.
  64. Forty, p. 148.
  65. Jonathan Hill, Actions of Architecture (London: Routledge, 2003), pp. 68–69.
  66. Isozaki, p. 34.
  67. Isozaki, p. 35.
  68. Reyner Banham, Megastructure: Urban Futures of the Recent Past (London: Thames and Hudson, 1972).
  69. Price and Littlewood, p. 133.
  70. Forty, pp. 142-8.
  71. Yiannoudes, p. 29.
  72. Yiannoudes, p. 31.
  73. Gray and Suri, pp. 33–34.
  74. Gray and Suri, p. 34.
  75. Cedric Price, Fun Palace Project (1961-1985), <https://www.cca.qc.ca/en/archives/380477/cedric-price-fonds/396839/projects/399301/fun-palace-project#fa-obj-309847> [accessed 25 April 2021].
  76. Pasquinelli and Joler, p. 19.
  77. Yiannoudes, p. 18; Carpo, ‘Myth of the Digital’, p. 11; Hayles, p. 145.
  78. Mario Carpo, ‘Myth of the Digital’, pp. 11–13.
  79. Carpo, ‘Myth of the Digital’, p. 13.
  80. Mathews, p. 42.
  81. Yiannoudes, p. 33.
  82. Carpo, The Alphabet and the Algorithm, p. 99.
  83. Carpo, The Alphabet and the Algorithm, p. 99.
  84. Yiannoudes, p. 50.
  85. Yiannoudes, p. 30.
  86. Yiannoudes, p. 30.
  87. Yiannoudes, p. 30.
  88. Yiannoudes, p. 31.
  89. Yiannoudes, p. 31.
  90. Alastair Parvin, ‘Architecture (and the Other 99%): Open-Source Architecture and the Design Commons’, Architectural Design: The Architecture of Transgression, 226, 2013, 90–95 (p. 95).
  91. Open Systems Lab, ‘The DfMA Housing Manual’, 2019 <https://docs.google.com/document/d/1OiLXP7QJ2h4wMbdmypQByAi_fso7zWjLSdg8Lf4KvaY/edit#> [accessed 25 April 2021].
  92. Open Systems Lab.
  93. Open Systems Lab.
  94. Carlo Ratti and Matthew Claudel, ‘Open Source Gets Physical: How Digital Collaboration Technologies Became Tangible’, in Open Source Architecture (London: Thames and Hudson, 2015).
  95. Parvin.
  96. ‘An Introduction to WikiHouse Modelling’, dir. by James Hardiman, online film recording, YouTube, 5 June 2014, <https://www.youtube.com/watch?v=qB4rfM6krLc> [accessed 25 April 2021].
  97. Carlo Ratti and Matthew Claudel, ‘Building Harmonies: Toward a Choral Architect’, in Open Source Architecture (London: Thames and Hudson, 2015).
  98. Oliver David Krieg and Oliver Lang, ‘The Future of Wood: Parametric Building Platforms’, Wood Design & Building, 88 (2021), 41–44 (p. 44).
  99. Ratti and Claudel, ‘Building Harmonies: Toward a Choral Architect’.
  100. Carpo, The Second Digital Turn: Design Beyond Intelligence, p. 162.
  101. Robin Evans, ‘Towards “Anarchitecture”’, in Translations From Drawings to Building and Other Essays (从绘图到建筑物的翻译及其他文章), trans. by Liu Dongyang (Beijing: China Architecture & Building Press, 2018), p. 20.
  102. Walter Benjamin, Illuminations: Essays and Reflections (New York: Schocken Books, 2007), p. 12.

Figure 8 - Extraction process – on the left the digital model, and on the right the sequence of instructions resulting from the extraction process. 
Algorithmic Representation Space
Algorithmic Abstractness, Algorithmic Design, Algorithmic Representation Space, Design Paradigms, Model Concreteness, Representation Method, Representation Space
Renata Alves Castelo Branco, Inês Caetano, António Leitão

renata.castelo.branco@tecnico.ulisboa.pt

Introduction 

Architecture has always explored the latest technological advances, which have changed the way architects represent and conceive design solutions. Over the past decades, these changes were due, first, to the integration of new digital design tools, such as Computer-Aided Design (CAD) and Building Information Modelling (BIM), which allowed the automation of paper-based design processes [1], and then to the adoption of computational design approaches, such as Algorithmic Design (AD), which caused a more pronounced paradigm shift within architectural practice. 

AD is a design approach based on algorithms that has been gaining prominence in both architectural practice and theory [2,3] due to its greater design freedom and ability to automate repetitive design tasks, while facilitating design changes and the search for improved solutions. Its multiple advantages have therefore motivated a new generation of architects to increasingly adopt the programming environments behind their typical modelling tools, going “beyond the mouse, transcending the factory-set limitations of current 3D software” [3; p. 203]. Unfortunately, its algorithmic nature makes this approach highly abstract, deviating from the visual nature of human thinking, which is more attracted to graphical and concrete representations than to alphanumerical ones.  

To bring AD closer to the means of representation architects typically use, and thereby make the most of its added value for the practice, we need to lower the existing comprehension barriers that hinder its widespread adoption in the field. To that end, this research proposes a new approach to the representation of AD descriptions – the Algorithmic Representation Space (ARS) – that encompasses, in addition to the algorithm, its concrete outputs and the mechanisms that contribute to its understanding. 

Algorithmic Representation Method and Design Paradigms

Despite the cutting-edge aura surrounding it, AD is a natural consequence of architects’ desire to automate modelling tasks. In this approach, the architect develops algorithms whose execution creates the digital design model [4] instead of manually modelling it using a digital design tool. Compared to traditional digital modelling processes, AD is advantageous in terms of precision, flexibility, automation, and ease of change, allowing architects to explore wider design spaces easily and quickly. Two AD paradigms currently predominate, the main difference between them lying in the way algorithms are represented: architects develop their algorithms either textually, according to the rules of a programming language, or visually, by selecting and connecting graphical entities in the form of graphs [5]. In either case, the abstract nature of the medium hinders its comprehension. 
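As a minimal illustration of the textual paradigm, the sketch below describes a hypothetical colonnade as a parametric function in plain Python. The function and parameter names are ours and do not correspond to any particular AD tool; the point is that one short description stands for a whole family of models.

```python
# Hypothetical textual AD description: a parametric colonnade.
# Changing the parameters regenerates the whole model consistently,
# instead of requiring manual remodelling in a digital design tool.
def colonnade(n_columns, spacing, radius, height):
    """Return centre-line geometry for n_columns cylindrical columns in a row."""
    return [
        {"base": (i * spacing, 0.0), "radius": radius, "height": height}
        for i in range(n_columns)
    ]

# One concrete instance of the design space this description represents:
model = colonnade(n_columns=8, spacing=3.0, radius=0.25, height=4.0)
print(len(model), model[-1]["base"])  # -> 8 (21.0, 0.0)
```

Editing a single argument (say, `spacing`) regenerates all eight columns at once – the precision and ease of change the paragraph above attributes to AD.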

Algorithms are everywhere and are a fundamental part of current technology. In fact, digital design tools have long supported AD, integrating programming environments of their own to allow users to automate design tasks and deal with more complex, unconventional design problems. Unfortunately, despite its advantages and potential to overcome traditional design possibilities, AD was slow to gain ground in the field, remaining, after almost sixty years, a niche approach. One of the main reasons is the fact that it requires architects to learn programming, which is an abstract task that is far from trivial. This is aggravated by the fact that, for decades, most tools have had their own programming language, which in most cases was limited and hard to use, as well as a programming environment providing little support for the development and comprehension of algorithmic descriptions. Examples include ArchiCAD’s GDL (1983); AutoCAD’s AutoLisp (1986) and Visual Lisp (2000); 3D Studio Max’s MAXscript (1997); and Rhinoceros 3D’s Rhino.Python (2011) and RhinoScript (2007). 

To make AD more appealing to architects and approximate it to the visual nature of architectural design processes, visual-based AD environments have been released in the meantime. In these environments, text-based algorithmic descriptions are replaced by iconic elements that can be connected to each other in dataflow graphs [6]. Generative Components (2003) is a pioneering example that inspired more recent ones such as Grasshopper (2007) and Dynamo (2011). These tools offer a database of pre-defined operations (components) that users can access by simply dragging an icon onto the canvas and providing it with input parameters. For standard tasks covered by existing components, this speeds up the modelling task considerably. Furthermore, since programs are represented by graph structures – with nodes describing the functions, and the wires connecting them describing the data that gets transferred between them – it is easy to see which parts of the algorithm are dependent upon others, and thus, where the changes are propagated to. However, this is only true for small algorithms, which are a rare find in visual-AD descriptions [7]. Therefore, despite solving part of the existing problems – which explains the growing popularity of this paradigm in the community – others have emerged, such as its inability to deal with more complex and larger-scale AD solutions [5,8,9]. 
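The dataflow behaviour described above – nodes as functions, wires carrying values between them – can be sketched with a toy pull-based graph. This is our own simplified model, not the API of Grasshopper or Dynamo:

```python
# Toy dataflow graph: nodes wrap functions, "wires" are references to
# upstream nodes, so an edit to an input re-propagates to all dependents.
class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def value(self):
        # Pull-based evaluation: recompute from upstream values on demand.
        return self.fn(*(n.value() for n in self.inputs))

class Slider(Node):
    """A leaf node holding a user-set parameter, like a GUI slider."""
    def __init__(self, v):
        self.v = v

    def value(self):
        return self.v

count = Slider(10)
spacing = Slider(2.5)
length = Node(lambda n, s: n * s, count, spacing)
print(length.value())   # -> 25.0

count.v = 4             # editing an upstream node...
print(length.value())   # -> 10.0 ...propagates to its dependents
```

With three nodes the dependencies are obvious; as the paragraph notes, legibility degrades quickly once graphs grow to realistic sizes.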

In sum, AD remains challenging for most architects and a far cry from the representation methods they typically use. Human comprehension relies on concrete instances to create mental models of complex concepts [10]. Contrastingly, AD, either visual or textual, operates at a highly abstract level. This grants it its flexibility but also hinders its comprehension. 

Algorithmic Abstractness Vs Model Concreteness 

Abstraction can be regarded as the process of removing detail from a representation and keeping only the relevant features [11]. Some authors believe abstraction improves productivity: it not only focuses on the “big idea” or problem to solve [12] but also triggers creative thinking due to its vagueness, ambiguity, and lack of clarity [13].  

Abstraction in architecture can be traced back at least as far as classical antiquity. Architectural treatises, such as Vitruvius’ “Ten Books on Architecture” [14], are prime examples of abstract representations because they intend to convey not specific design instances, but rather design norms that are applicable to many design scenarios. However, the human brain is naturally more attracted to graphical explanations than textual ones [15–17], a tendency that is further accentuated in a field with a highly visual culture such as architecture. For that reason, even the referred treatises were eventually illustrated after the birth of the printing press [18]. 

The algorithmic nature of AD motivates designers to represent their ideas in an abstract manner, focusing on the concept and its formal definition. This sort of representation provides great flexibility to the design process, as a single expression of an idea can encompass a wide range of instances that match that idea, i.e., a design space. Contrariwise, most representation methods, including CAD and BIM, compel designers to rapidly narrow down their intentions towards one concrete instance, on account of the labour required to maintain separate representations for each viable alternative. 

In sum, abstraction gives AD flexibility and the ability to solve complex problems, but it also makes it harder to understand. Abstraction is especially relevant when dealing with mathematical concepts, such as recursion or parametric shapes; nature-inspired processes, such as randomness; and performance-based design principles, such as design optimisation. It is also critical when developing and fabricating unconventional design solutions, whose geometric complexity requires a design method with a higher level of flexibility and accuracy. Sadly, these are also the hardest concepts to grasp without concrete instances and visual aid. 

Nevertheless, the described comprehension barrier, apparently imposed by the abstract-concrete dichotomy, is more obvious when the AD descriptions are independent entities with little to no connection to the outcomes they produce. Figure 1 represents the current conception of AD: there is a parametric algorithm, representing a design space, which can generate a series of design models when specific parameters are provided. We propose to overthrow this notion by including the outcomes of the algorithm in the design process itself, changing the traditional flow of design creation to accommodate more design workflows and comprehension approaches.   

Figure 1 – AD workflow – an algorithm, representing a design space, generates a digital model for each design instance. 

Algorithmic Representation Space 

AD descriptions have an abstract nature, which is part of the reason they prove so beneficial to the architectural design process. However, when it comes to comprehending an AD – i.e., creating a mental model of the design space it represents – this feature becomes a burden. Human cognition seems to rely heavily on the accumulation of concrete examples to form a more abstract picture [10]. For this reason, we advocate that, for a better comprehension of an AD, the algorithms themselves do not suffice.  

This research proposes a new way to represent algorithmic descriptions that aids the development and understanding of AD projects. Under the name of Algorithmic Representation Space (ARS), this concept encompasses not only the algorithm but also its outcomes and the mechanisms that allow for the understanding of the design space it represents. AD descriptions stand to benefit significantly from the concreteness of the outputs they generate, i.e., the digital models. If we consider the models as part of the AD representation, we reduce its level of abstraction and increase its understandability, approximating it to the visual nature of human understanding. Nevertheless, we must also smooth its integration in more traditional design workflows, helping architects who still develop their models manually in digital design tools or are forced to use pre-existing models. Accordingly, the proposed ARS also enables the use of already existing digital models as starting points to arrive at an algorithmic description. 

There are two core elements in the ARS (Figure 2): the algorithm and the model. The algorithm represents a design space in a parametric, abstract way, which makes the multiple design alternatives it represents difficult to perceive. By contrast, each model represents an instance of a design space in a static but concrete way. Combining the former's flexibility with the latter's perceptibility is therefore critical for the success of algorithmic representation. For conceptual reasons, the presented illustration of the ARS places the two elements on the same level. Nevertheless, one must keep in mind that the algorithm can generate potentially infinite digital models, and the concept holds for all of them.

We consider two entry points into the ARS: programming and modelling. Each allows architects to traverse the ARS; in the former case, from algorithm to model, by running the instructions in the algorithm to generate a model; and in the latter, from model to algorithm, by extracting an algorithmic description capable of generating the design instance and then refactoring that description to make it parametric as well. In either case, it is important that the ARS supports the visualisation of these algorithm-model relationships. Therefore, we propose including techniques such as traceability in any ARS. In the following section, we use a case study, the Reggio Emilia Train Station by Santiago Calatrava, to illustrate the ARS and each of the proposed principles.

Figure 2 – Building blocks of the ARS. 

Programming 

The typical AD process entails the creation of a parametric description that abstractly defines a design space according to the boundaries set by the architect (Figure 3). The parametricity of this description, or the size of the design space it represents, varies greatly with the design intent and the way it is implemented (e.g., degrees of freedom, rules, and constraints). By instantiating the parameters in the algorithm, the architect specifies instances of the design space, whose visualisation can be achieved by generating them in a digital design tool, such as a CAD, BIM, or game engine (Figure 3 – running the algorithm). Figure 4 presents several variations of the Reggio Emilia station achieved by running the corresponding AD description with varying input parameters, namely with a different number of beams, different beam sizes, and different amplitudes and phases of the sinusoidal movement. 
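To make the idea concrete, the following is a minimal sketch in plain Python (not the actual AD description of the station; the function name, beam spacing, and simplified geometry are all assumptions for illustration). It shows how a parametric description defines a design space, and how instantiating its parameters yields concrete design instances:

```python
import math

def station_roof(n_beams, beam_length, amplitude, phase):
    """Toy parametric description: each beam spans the roof width,
    its height displaced by a sinusoidal movement along the platform.
    Returns a list of beams as ((x0, y0, z0), (x1, y1, z1)) tuples."""
    beams = []
    for i in range(n_beams):
        x = i * 2.0                                 # beams spaced 2 m apart
        z = amplitude * math.sin(phase + i * 0.4)   # sinusoidal height variation
        beams.append(((x, 0.0, z), (x, beam_length, z)))
    return beams

# Each choice of parameters is one instance of the design space:
variant_a = station_roof(n_beams=25, beam_length=12.0, amplitude=1.5, phase=0.0)
variant_b = station_roof(n_beams=40, beam_length=12.0, amplitude=0.8, phase=math.pi)
```

Running such a description inside a CAD, BIM, or game-engine tool would then materialise each variant as a digital model, as in Figure 4.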

Given the flexibility of this approach, the process of developing AD descriptions tends to be a very dynamic one, with the architect repeatedly generating instances of the design to assess the impact of the changes made at each stage. Consciously or not, architects already work in a bidirectional iterative way when using AD. However, this workflow can also greatly benefit from a more obvious showcasing of the existing relations between algorithm and model. Traceability mechanisms allow precisely for the visual disclosure of these relations (i.e., which instruction/component generated which geometry), and several AD tools support them already. 

Figure 3 – Entering the ARS by programming. 
Figure 4 – Parametric variations of the Reggio Emilia station, with different numbers and sizes of beams, and different amplitudes and signs of the sinusoidal movement. 

Creating Models 

AD is not meant to replace other design approaches but, instead, to interoperate with them. This interoperability is important to take advantage of the investment made in well-established representation methods such as CAD and BIM, especially for projects where digital models already exist or are still being produced. Therefore, the second entry point to the ARS is the conversion of an existing digital model of a design into an AD program. This might be necessary, for instance, when we wish to optimise the design for new uses and/or to comply with new standards [19]. This process entails crossing the ARS in the opposite direction to that described in the previous section (Figure 5).

To convert a digital model into an AD description, there are two main steps: extraction and refactoring. Extraction entails the automatic generation of instructions that can reproduce an exact copy of the model being extracted. The resulting AD description, however, is non-parametric and difficult to comprehend. This is where refactoring comes in [20,21], a technique that helps to improve the AD description, increasing its readability and parametricity. While the first task can be almost entirely automated, and is currently partially supported by some AD tools, the second depends heavily on the architect's design intent and, thus, will always be a joint effort between human and machine. In either case, it is important that the ARS adapts to the multiplicity of digital design tools and representation systems that architects often use during their design process. They can use, for instance, 3D modelling tools, such as CADs or game engines, to explore their designs' geometry more freely, or BIM tools to enrich the designs with construction information and to produce technical documentation.

Figure 5 – Entering the ARS through modelling. 

Navigating the ARS 

As mentioned in the previous section, there are two main elements in the ARS: algorithms abstractly describing design spaces and digital models representing concrete instances of those design spaces. Either one can be accessed from either end of the spectrum, i.e., by programming and running the algorithm to generate digital models, or by manually modelling designs and then converting them into an algorithm. To allow for this bidirectionality between the two sides, the ARS relies on three main mechanisms: (a) traceability, (b) extraction, and (c) refactoring. The first allows the system to expose the existing relationships between algorithm and model in a visual and interactive way for a better comprehension of the design intent. The latter two allow us to traverse the ARS from model to algorithm, a less common crossing but an essential one, nevertheless. The following sections describe these three mechanisms in detail. 

Traceability 

For a proper comprehension of ADs, architects must construct a mental model of the design space, comprehending the impact each part of the algorithm has on each instance of the design space. To that end, a correlation must be ever present between the two core elements of the ARS – algorithm and model – matching the abstract representation with its concrete realisation. Traceability establishes relationships between the instructions that compose the algorithm and the corresponding geometries in the digital model. This is particularly relevant when dealing with complex designs, as it allows architects to understand which parts of the algorithm are responsible for generating which parts of the model.

With traceability, users can select parts of the algorithm or parts of the model and see the corresponding parts highlighted at the other end. Grasshopper for Rhinoceros 3D and Dynamo for Revit, two visual AD tools, provide unidirectional traceability from the algorithm to the model. Figure 6 shows this feature at play in Grasshopper: users select any component on the canvas and the corresponding geometry is highlighted in the visualised model.

Figure 6 – Traceability in visual AD tools – the case of Grasshopper. 
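The bookkeeping behind such a feature can be sketched in a few lines (a hypothetical Python illustration, not the internal mechanism of any of the tools named here): each instruction records which geometries it generated, producing two lookup tables that support selection in either direction.

```python
def generate_with_trace(instructions):
    """Each instruction is a (name, generator) pair; running it yields
    geometries. The trace maps instruction -> geometry ids and
    geometry id -> instruction, enabling bidirectional selection."""
    trace_fwd, trace_back, model = {}, {}, []
    for name, make_geometry in instructions:
        ids = []
        for geom in make_geometry():
            gid = len(model)          # position in the model is the geometry id
            model.append(geom)
            ids.append(gid)
            trace_back[gid] = name    # model -> algorithm direction
        trace_fwd[name] = ids         # algorithm -> model direction
    return model, trace_fwd, trace_back

model, fwd, back = generate_with_trace([
    ("beams", lambda: ["beam0", "beam1"]),
    ("canopy", lambda: ["surface0"]),
])
# Selecting the "beams" instruction highlights geometries 0 and 1;
# selecting geometry 2 points back to the "canopy" instruction.
```

The sketch also hints at the cost discussed below: every generated geometry carries extra bookkeeping, which grows with the model.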

Regarding bidirectional traceability, there are already visual AD tools that support it, such as Dassault Systèmes’ xGenerative Design tool (xGen) for Catia and Bentley’s Generative Components, as well as textual AD tools, such as Rosetta [22], Luna Moth [23], and Khepri [24]. Figure 7 shows the example of Khepri, where the user selects either instructions in the algorithm or objects in the model and the corresponding part is highlighted in the model or algorithm, respectively. Programming In the Model (PIM) [25], a hybrid programming tool, offers traceability between the three existing interactive windows: one showing the model, another the visual AD description, and a third showing the equivalent textual AD description. 

Unfortunately, traceability is a computationally intensive feature that hinders the tools' performance with complex AD programs – especially model-to-algorithm traceability, which explains why some commercial visual-based AD tools avoid it. Those that provide it inevitably experience a decrease in performance as the model grows. All the text-based and hybrid options referred to above are academic works, built and maintained as proofs of concept rather than as commercial tools, which explains their acceptance of the imposed trade-offs. A possible solution to this problem is to allow architects to decide when to use this feature, switching it on only when the support provided compensates for the computational overhead [26]. In fact, traceability-on-demand is Khepri's current approach to the problem.

Figure 7 – Traceability in textual AD tools – the case of Khepri. 

Extraction 

Extraction is the automatic conversion of a digital model into an algorithm that can faithfully replicate it. Previous studies [27,28] focused on the generation of 3D models from architectural plans or on the conversion of CAD to BIM models, using heuristics and manipulation of geometric relations. Sadly, the result is not an AD description, but rather another model, albeit more complex and/or informed. One promising line of research is the use of probabilistic and neural-based machine learning techniques (e.g., convolutional or recurrent neural networks) that address translation from images to textual descriptions [29], but further research is needed to generate algorithmic descriptions.

The main problems with extracting a parametric algorithm lie, first, in the assumptions the system would need to make while reading a finished model: for instance, distinguishing whether two adjacent volumes are connected by chance or intentionally and, if the latter, deciding whether such a connection should constitute a parametric restriction of that model. Secondly, it is nearly impossible to devise a system that can consider the myriad of possible geometrical entities and semantics available in architectural modelling tools.

Some modelling tools that favour the VP paradigm avoid this problem by placing the responsibility on the designer from the very start, restricting the modelling workflow and forcing the designer to provide the missing information. In xGen and Generative Components, the 3D model and the visual algorithm are in sync, meaning changes made in either one are reflected in the other. PIM presents a similar approach, extending the conversion to the textual paradigm as well, although it was only tested with simple 2D examples.  

In practice, these tools offer real-time conversion from the model to the algorithm. However, either solution requires the model to be parametric from the start. Every modelling operation available in these tools has a pre-set correspondence to a visual component, and designers must build their models following the structured parametric approach imposed by each tool, almost as if they were in fact constructing an algorithm but using a modelling interface. As such, the system is gathering the information it needs to build parametric relations from the very beginning. This explains why neither xGen, nor Generative Components, nor PIM, can take an existing model created in another modelling software or following other modelling rules and extract an algorithmic description from it. 

This problem has also been addressed in the TP field and promising results have been achieved in the conversion of bi-dimensional shapes into algorithms [24,30]. However, further work is required to recognise 3D shapes, namely 3D shapes of varying semantics, since architects can use a myriad of digital design tools to produce their models, such as CADs, BIMs, or game engines. Figure 8 presents an ideal scenario, where the ARS is able to extract an algorithm that can generate an identical model to that being extracted. 

In either case, even if the extraction of the most common 3D elements is achieved any time soon, the resulting algorithm will only accurately represent the extracted model, and it will be a low-level program, which is very hard for humans to understand. To make the algorithm both understandable and parametric, it needs to be further transformed according to the design intent envisioned by the architect. Increasing the algorithm's comprehension level and the design space it represents is the goal of refactoring.

Figure 8 – Extraction process – on the left the digital model, and on the right the sequence of instructions resulting from the extraction process. 
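A naive version of this extraction step could be sketched as follows (plain Python; the geometry schema and the emitted instruction syntax are assumptions for illustration, not the output of any of the tools discussed). Note that the result reproduces the model literally, one instruction per geometry, with no parameters:

```python
def extract(model):
    """Naive extraction: emit one literal instruction per geometry,
    reproducing the model exactly but with no parametric structure."""
    lines = []
    for geom in model:
        if geom["type"] == "beam":
            p0, p1 = geom["p0"], geom["p1"]
            lines.append(f"beam({p0}, {p1})")
    return lines

# A tiny model of two beams, the second displaced by the sinusoidal movement:
model = [
    {"type": "beam", "p0": (0, 0, 0.0), "p1": (0, 12, 0.0)},
    {"type": "beam", "p0": (2, 0, 0.6), "p1": (2, 12, 0.6)},
]
program = extract(model)
# program is a flat list of literal instructions, one per beam.
```

This low-level, non-parametric output is precisely what makes the subsequent refactoring step necessary.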

Refactoring 

Refactoring (or restructuring) is commonly defined as the process of improving the structure of an existing program without changing its semantics or external behaviour [20]. There are already several semi-automatic refactoring tools [21] that help to improve the readability and maintenance of algorithmic descriptions and increase their efficiency and abstraction level. Refactoring is an essential follow-up to an extraction process, since the latter returns a non-parametric algorithm that is difficult to decipher. 

Figure 9 shows an example of a refactoring process that could take place with the algorithm extracted in Figure 8. The extracted algorithm contains numerous instructions, each responsible for generating a beam between two spatial locations defined by XYZ coordinates. It is not difficult to infer the linear variations presented in the first and fourth highlighted columns, which correspond to the points’ X values. To infer the sinusoidal variation in the remaining values, however, more complex curve-fitting methods would have to be implemented [31]. 
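The simplest of these inferences, detecting a linear progression in a column of extracted values, can be sketched as follows (a hypothetical Python illustration; real refactoring tools would use more robust fitting methods, such as those in [31]):

```python
def fit_linear(values, tol=1e-9):
    """Check whether a sequence follows v_i = start + i * step, as a
    refactoring tool might when collapsing repeated literal instructions
    into a parametric loop. Returns (start, step) on success, else None."""
    if len(values) < 2:
        return None
    step = values[1] - values[0]
    if all(abs(values[i] - (values[0] + i * step)) < tol
           for i in range(len(values))):
        return values[0], step
    return None

xs = [0.0, 2.0, 4.0, 6.0, 8.0]       # X coordinates from the extracted beams
assert fit_linear(xs) == (0.0, 2.0)  # refactor into: x = 0.0 + i * 2.0
assert fit_linear([0.0, 1.0, 3.0]) is None  # no linear pattern to exploit
```

A successful fit lets the tool propose replacing a run of literal instructions with a loop over an index, which is the kind of suggestion the architect then accepts or adjusts.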

In either case, refactoring tools seldom work alone, meaning that a lot of user input is required. This is because there is rarely a single correct way of structuring algorithms, and the user must choose which methods to implement in each case. Refactoring tools, beyond providing suggestions, guarantee that the replacements are made seamlessly and do not change the algorithm’s behaviour. When trying to increase parametric potential, even more input is required, since it is the architect who must decide the degrees of freedom shaping the design space. 

In our example (Figure 9), the refactored algorithm shown below has a better structure and readability but is still at an early stage of parametricity. As a next step, we could replace the numerical values proposed by the refactoring tool with variable parameters to allow for more variations of the sinusoidal movement.

Discussion and Conclusion 

Architecture is an ancient profession, and the means used to produce architectural entities have constantly changed, not only integrating the latest technological developments but also responding to new design trends and representation needs. Architects have long adopted new techniques to improve the way they represent designs. For centuries this caused only gradual changes in architectural design practice, but the accelerated technological development witnessed since the 1960s has made these changes far more evident. The emergence of personal computers, followed by the massification of Computer-Aided Design (CAD) and Building Information Modelling (BIM) tools, allowed architects to automate their previously paper-based design processes [1], shaping the way they approached design issues [32]. However, these tools did little to change the way designs were represented, only making their production more efficient. This scenario evolved rapidly with the emergence of more powerful computational design paradigms, such as Algorithmic Design (AD). Despite being more abstract and thus less intuitive, this design representation method is more flexible and empowers architects' creative processes.

Given its advantages for architectural design practice, AD should be a complement to the current means of representation. However, to make AD more appealing for a wider audience and allow architects to make the most of it, we must lower the existing barriers by approximating AD to the visual and concrete nature of architectural thinking. To that end, we proposed the Algorithmic Representation Space (ARS), a representation approach that aims to replace the current one-directional conception of AD (going from algorithms to digital models) with a bidirectional one that additionally allows architects to arrive at algorithms starting from digital models. Furthermore, the ARS encompasses as means of representation not only the algorithmic description but also the digital model that results from it, as well as the mechanisms that aid the comprehension of the design space it represents.  

Figure 9 – Refactoring process – the sequence of extracted instructions (at the top) is converted into a more comprehensible and parametric algorithm (at the bottom). 

The proposed system is based on two fundamental elements – the algorithm and the digital model – and architects have two ways of arriving at them – programming and modelling. Considering the first case, programming, the ARS supports the development of algorithms and the subsequent visualisation of the design instances they represent by running the algorithm with different parameters. In the second case, modelling, the ARS supports the conversion of digital models into algorithms that reproduce them. The first scenario allows AD representations to benefit from the visual nature of digital design tools, reducing the innate abstraction of algorithms and obtaining concrete instances of the design space that are more perceptible to the human mind. The second case enables the conversion of a concrete representation of a design instance into an abstract representation of a design space, i.e., a parametric description that can generate possible variations of the original design, benefiting from algorithmic flexibility and expressiveness in future design tasks.  

To allow for this bidirectionality, the ARS relies on three main mechanisms: (a) traceability, (b) extraction, and (c) refactoring. Traceability addresses the non-visual nature of the first process – programming – by displaying the relationships between the algorithm and the digital model. Extraction and refactoring address the complexity of the second process – going from model to algorithm – the former entailing the extraction of the algorithmic instructions that, when executed, generate the original design solution, and the latter solving the lack of parametricity and perceptibility of the extracted algorithms by helping architects restructure them. The result is a new representation paradigm with enough (1) expressiveness to successfully represent architectural design problems of varying complexities; (2) flexibility to parametrically manipulate the resulting representations; and (3) concreteness to easily and quickly comprehend the design space embraced.  

The proposed ARS intends to motivate a more widespread adoption of AD representation methods. However, it is currently only a theoretical outline. To reach its goal, the proposed system must gain a practical character. As future work, we will focus on applying and evaluating the ARS in large-scale design scenarios, while retrieving user feedback from the experience. 

Acknowledgments 

This work was supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) (references UIDB/50021/2020, PTDC/ART-DAQ/31061/2017) and PhD grants under contract of FCT (grant numbers SFRH/BD/128628/2017, DFA/BD/4682/2020). 

References 

[1] S. Abubakar and M. Mohammed Halilu, “Digital Revolution and Architecture: Going Beyond Computer-Aided Architecture (CAD)”. In Proceedings of the Association of Architectural Educators in Nigeria (AARCHES) Conference (2012), 1–19.  

[2] R. Oxman, “Thinking difference: Theories and models of parametric design thinking”. Design Studies (2017), 1–36. DOI:http://doi.org/10.1016/j.destud.2017.06.001 

[3] K. Terzidis, “Algorithmic Design: A Paradigm Shift in Architecture?” In Proceedings of the 22nd Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Copenhagen, Denmark (2004), 201–207. 

[4] I. Caetano, L. Santos, and A. Leitão, “Computational design in architecture: Defining parametric, generative, and algorithmic design.” Frontiers of Architectural Research 9, 2 (2020), 287–300. DOI:https://doi.org/10.1016/j.foar.2019.12.008 

[5] P. Janssen, “Visual Dataflow Modelling: Some thoughts on complexity”. In Proceedings of the 32nd Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Newcastle upon Tyne, UK (2014), 305–314. 

[6] E. Lee and D. Messerschmitt, “Synchronous data flow”. Proceedings of the IEEE 75, 9 (1987), 1235–1245. DOI:https://doi.org/10.1109/PROC.1987.13876 

[7] D. Davis, “Modelled on Software Engineering: Flexible Parametric Models in the Practice of Architecture”. PhD Dissertation, RMIT University (2013). 

[8] A. Leitão and L. Santos, “Programming Languages for Generative Design: Visual or Textual?” In Proceedings of the 29th Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Ljubljana, Slovenia (2011), 139–162. 

[9] M. Zboinska, “Hybrid CAD/E Platform Supporting Exploratory Architectural Design”. CAD Computer Aided Design 59, (2015), 64–84. DOI:https://doi.org/10.1016/j.cad.2014.08.029 

[10] D. Rauch, P. Rein, S. Ramson, J. Lincke, and R. Hirschfeld, “Babylonian-style Programming: Design and Implementation of an Integration of Live Examples into General-purpose Source Code”. The Art, Science, and Engineering of Programming, 3, 3 (2019), 9:1-9:39. DOI:https://doi.org/10.22152/programming-journal.org/2019/3/9 

[11] H. Abelson, G.J. Sussman, and J. Sussman (1st ed. 1985), Structure and Interpretation of Computer Programs  (Cambridge, Massachusetts, and London, England: MIT Press, 1996) DOI:https://doi.org/10.1109/TASE.2008.40 

[12] B. Cantrell and A. Mekies (Eds.), Codify: Parametric and Computational Design in Landscape Architecture. (Routledge, 2018). DOI:https://doi.org/10.1017/CBO9781107415324.004 

[13] A. Al-Attili and M. Androulaki, “Architectural abstraction and representation”. In Proceedings of the 4th International Conference of the Arab Society for Computer Aided Architectural Design, Manama (Kingdom of Bahrain) (2009), 305–321. 

[14] M. Vitruvius, The Ten Books on Architecture. (Cambridge & London, UK: Harvard University Press & Oxford University Press, 1914). 

[15] K. Zhang, Visual languages and applications. (Springer Science + Business Media, 2007). 

[16] N. Shu, “Visual Programming Languages: A Perspective and a Dimensional Analysis”. In Visual Languages. Management and Information Systems, SK. Chang, T. Ichikawa and P.A Ligomenides (eds.). (Boston, MA: Springer, 1986). DOI: https://doi.org/10.1007/978-1-4613-1805-7_2 

[17] E. Do and M. Gross, “Thinking with Diagrams in Architectural Design”. Artificial Intelligence Review. 15, 1 (2001), 135–149. DOI:https://doi.org/10.1023/A:1006661524497 

[18] M. Carpo, The Alphabet and the Algorithm. (Cambridge, Massachusetts: MIT Press, 2011). 

[19] I. Caetano, G. Ilunga, C. Belém, R. Aguiar, S. Feist, F. Bastos, and A. Leitão, “Case Studies on the Integration of Algorithmic Design Processes in Traditional Design Workflows”. In Proceedings of the 23rd International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Hong Kong (2018), 129–138. 

[20] M. Fowler, Refactoring: Improving the Design of Existing Code. (Reading, Massachusetts: Addison-Wesley Longman, 1999) 

[21] T. Mens and T. Tourwe, “A survey of software refactoring”. IEEE Transactions on Software Engineering. 30, 2 (2004), 126–139. DOI:https://doi.org/10.1109/TSE.2004.1265817 

[22] A. Leitão, J. Lopes, and L. Santos, “Illustrated Programming”. In Proceedings of the 34th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Los Angeles, California, USA (2014), 291–300.  

[23] P. Alfaiate, I. Caetano, and A. Leitão, “Luna Moth Supporting Creativity in the Cloud”. In Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Cambridge, MA (2017), 72–81. 

[24] M. Sammer, A. Leitão, and I. Caetano, “From Visual Input to Visual Output in Textual Programming”. In Proceedings of the 24th International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Wellington, New Zealand (2019), 645–654. 

[25] M. Maleki and R. Woodbury, “Programming in the Model: A new scripting interface for parametric CAD systems”. In Proceedings of the Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Cambridge, Canada (2013), 191–198. 

[26] R. Castelo-Branco, A. Leitão, and C. Brás, “Program Comprehension for Live Algorithmic Design in Virtual Reality”. In Companion Proceedings of the 4th International Conference on the Art, Science, and Engineering of Programming (<Programming’20> Companion), ACM, New York, NY, USA, Porto, Portugal, (2020), 69–76. DOI:https://doi.org/10.1145/3397537.3398475 

[27] L. Gimenez, J. Hippolyte, S. Robert, F. Suard, and K. Zreik, “Review: Reconstruction of 3D building information models from 2D scanned plans”. Journal of Building Engineering 2, (2015), 24–35. DOI:https://doi.org/10.1016/j.jobe.2015.04.002 

[28] P. Janssen, K. Chen, and A. Mohanty, “Automated Generation of BIM Models”. In Proceedings of the 34th Education and research in Computer Aided Architectural Design in Europe (eCAADe) Conference, Oulu, Finland, (2016) 583–590. 

[29] J. Donahue, L. Hendricks, M. Rohrbach, S. Venugopalan, S. Guadarrama, K. Saenko, and T. Darrell, “Long-Term Recurrent Convolutional Networks for Visual Recognition and Description”. IEEE Transactions on Pattern Analysis and Machine Intelligence. 39, 4 (2017), 677–691. DOI:https://doi.org/10.1109/TPAMI.2016.2599174 

[30] A. Leitão and S. Garcia, “Reverse Algorithmic Design”. In Proceedings of Design Computing and Cognition (DCC’20) Conference, Atlanta, Georgia, USA (2021), 317–328. DOI: https://doi.org/10.1007/978-3-030-90625-2_18 

[31] P. Mogensen and A. Riseth, “Optim: A mathematical optimization package for Julia”. Journal of Open Source Software. 3, 24 (2018), 615. DOI:https://doi.org/10.21105/joss.00615 

[32] T. Kotnik, “Digital Architectural Design as Exploration of Computable Functions”. International Journal of Architectural Computing 8, 1 (2010), 1–16. DOI:https://doi.org/10.1260/1478-0771.8.1.1 

Geocities’ neighbourhoods collage, 2022. Image credit: Alessandro Celli and Ibrahim Kombarji.
Fostering Kinship: GeoCities’ Algorithmic Neighbourhoods
Algorithmic Neighbourhoods, civic participation, global village, Kinship, proximity, virtual city
Alessandro Celli, Ibrahim Kombarji

celli.alce@gmail.com

The remains of a virtual city – possibly the first of its kind – can be found on servers all over the world. [1] GeoCities was launched as a series of districts, alleyways, and neighbourhoods where its inhabitants could build their own webpages. For the first time, the internet was given a structure that its audience could relate to on a human scale. Today, around 650 gigabytes of GeoCities’ data remain accessible thanks to archiving efforts that ensured the recovery of some of the 38 million individual websites that existed at the time of GeoCities’ final closure in 2009. [2] [3] [4] [5] 

GeoCities was first launched in 1994 by David Bohnett and Dick Altman as a web hosting service, allowing its users to store and manage their website files. [6] Its initial name, Beverly Hills Internet, already hinted at the creators’ intention to develop a neighbourhood of websites, which would later mature into a geography of cities. The service offered a free plan with a generous two megabytes of storage to all users, known as the homesteaders, who were asked to choose a neighbourhood to reside in. [7] All of the city’s inhabitants occupied a defined space, in a defined surrounding, where their homepages were arranged within neighbourhoods. Each cluster of pages was spatially close to those which shared similar content, while each neighbourhood was defined by the broader topic into which they fit. As such, the company created and thematically organised its web directories into six neighbourhoods, which included Colosseum, Hollywood, RodeoDrive, SunsetStrip, WallStreet and West Hollywood. New neighbourhoods, as well as their suburbs, were later added as the site grew, and became part of the members’ unique web address with a sequentially assigned URL “civic address” (e.g., “www.geocities.com/RodeoDrive/54”). Chat rooms and bulletin boards were added soon after, fostering rapid growth of the city. [8] Each neighbourhood had its own forum, live chat, and even a list of all the homesteaders who celebrated their birthday each day.  

By December 1995, when it changed its name to GeoCities, Beverly Hills Internet had over 20,000 homesteaders and over 6 million page-views per month. [9] Within this expansive organisation of web page clusters, a seamless sense of proximity between those who shared similar ideas naturally led to human behaviours such as kinship and affection between them.  

Neighbourhoods are intrinsic parts of our urban fabric and a self-evident manifestation of how the cities we live in are structured. [10] Yet, we still struggle to grasp a proper definition of their totality, given the complex layers within them. In 1926, progressive educator David Snedden defined the term neighbourhood as “those people who live within easy ‘hallooing’ distance”, illustrating it as a space where one can easily catch the attention of another. [11] 

This essay will explore the notion of an algorithmic neighbourhood, one that reflects – and derives from – parts of a physically built, “hallooing” urban neighbourhood. The internet lexicon of today descends seamlessly from a long lineage of architectural and spatial terminologies, such as firewall, coding architecture, homepage, platform, address, path, room, and location, among many others. In the translation from a physical reality that is shaped within our Latourian “critical zone”, some of these terminologies have shifted in their meaning when applied to new forms of digital space. [12] A parallel “digital critical zone” is generated, within which these algorithmic neighbourhoods sit.  

Figure 1 – Archived webpage “Tia”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Cafe/3232/newpics.html
Figure 2 – Archived webpage “The Gardening Girl”, Picket Fence neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/PicketFence/1054/

Neighbourhood as a site of kinship and proximity  

The artisanal web built through GeoCities allowed “user-generated content”, which had not yet adorned itself with pompous names or revolutionary pretensions. [13] It proved that even before the invention of Web 2.0 – which was later aimed at implementing social-media profiles – the web was, above all, a story of human beings who interact with one another and discuss the subjects close to them through the means at hand.  

Urban studies professor Looker defines the United States as a nation of neighbourhoods. [14] This essay expands on this reading of the continental urban fabric by exploring the communities of algorithmic kinship that exist within GeoCities’ virtual borders. Similar to physically built neighbourhoods, GeoCities’ urban structure fostered kinship and affection among its inhabitants. PicketFence, for example, was built to allow residents to share tips and advice on ‘Home Improvement Techniques’. The more experienced ‘Home Improvement’ users became the neighbourhood’s go-to people for navigating daily issues, reinforcing a shared communal knowledge. [15] 

West Hollywood, which was subdivided into “Gay, Lesbian, Bisexual, and Transgender topics”, is another example of such algorithmic kinship. This neighbourhood was a predecessor of today’s social-media spaces where users can gather and exchange (sometimes hidden or undisclosed) realities across communities. West Hollywood’s users could leave messages, sign a guestbook, and share contact information with one another. The neighbourhood gave people an opportunity to share similar experiences and daily struggles, form alliances with other communities, and advance queer rights collectively. Moreover, West Hollywood fostered arenas of “block-level solidarity”, where “bonds and loyalties – whether as enacted on real-life pavements or as represented in stories, images, and speeches”, allowed connections between the intimate lives of users, their GeoCities pages, and the “city block”. [16] 

Proximity and reciprocal kinship were thus a foundational feature of GeoCities’ design: individuals, together with their personal pages, were at the centre of the Internet. In contrast, today’s platforms and digital services are structured in such a nested way that proximity is sometimes inconceivable, and individuals are reduced to anonymous consumers of information. Today, the information communications technology industry (ICT) is at the centre of the Internet. [17] Social media platforms still provide virtual spaces that allow communities to gather and share content with one another, fostering a certain degree of human interaction. However, the very structure within which they operate is fundamentally different from the ones used in early platforms such as GeoCities. While before, the digital matter – text, images, links – was spatially placed onto the transparent structure of the webpage, and you could clearly see the location of a jpeg file within the HTML lines of code, now it all runs through opaque interfaces. [18] These perfect facades are quasi-impenetrable for users, and hide the “black boxes” where algorithms operate as instruments of measurements and perception. [19] As a counterpart to algorithmic neighbourhoods, Caroline Busta defines social-media platforms as a grand bazaar, “with lanes of kiosks, grouped roughly by trade, displaying representative works to passers-by. At the back of the mini-shop is a trap door with stairs leading to a sub-basement where deals can be done”. [20] This multi-layered opaque architecture of the bazaar illustrates the complex structure that currently governs social-media platforms. In contrast, the algorithmic neighbourhoods of GeoCities attempted to encourage a transparent vision of the modes of portraiture in the digital realm, and defined tools for users to relate directly to it. 

Figure 3 – Archived webpage “Gay Ukraine International, Kiev, UA”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Club/1213/
Figure 4 – Archived webpage “Welcome to the deep Heart of TEXAS and Our Home”, Picket Fence neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/PicketFence/1011/

Neighbourhood as a site constantly ‘under construction’   

A digital archaeologist scavenging through GeoCities’ remains would come across a vast number of “under construction” signs strewn across the neighbourhood’s alleys, outlining its “work-in-progress” state. Surrounded by virtual scaffolding, the pages under construction were built, line after line of code, by the homesteaders, slowly undergoing organic changes and upgrades. Each individual page was constructed by its creator, from its foundations to its decorative elements, in the HTML format – the HyperText Markup Language. The markup language not only allowed users to build their pages from scratch, but also to introduce multimedia resources such as JPGs and GIFs. A page under construction implies that there was a process of creation, which aimed at an eventual final form. Similar to a construction site, the individual web page could be openly observed throughout its making, as it could be visited by GeoCities inhabitants at any moment in time. It was a facade yet to come; a page that was shaped by the algorithmic manipulation of its users as they added another ‘about me’ section, a ‘guestbook’ to be signed, or a photo gallery of low-res pictures – to fit within the 2-megabyte limit – portraying their personal lives. 
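The transparency this describes can be made concrete. A homesteader’s page was a single legible HTML file in which every resource was named directly in the markup; the sketch below (every file name and line of text is invented for illustration, not taken from an archived page) holds such a page as a plain string and checks it against the 2-megabyte quota mentioned above:

```python
# A minimal sketch of a GeoCities-style homesteader page. Every
# resource -- the low-res JPG, the "under construction" GIF, the
# guestbook link -- is visible directly in the markup, exactly as it
# was to any visitor viewing the page source.
PAGE = """<html>
<head><title>Welcome to My Home Page</title></head>
<body bgcolor="#ffffcc">
<h1>About Me</h1>
<img src="me_lowres.jpg" alt="A low-res photo of me">
<p>Thanks for stopping by my corner of the neighbourhood!</p>
<a href="guestbook.html">Sign my guestbook</a>
<img src="under_construction.gif" alt="Under construction">
</body>
</html>"""

# GeoCities capped each homestead at roughly 2 megabytes; the text of
# a page like this consumed only a tiny fraction of that budget --
# it was the low-res pictures that ate into it.
QUOTA_BYTES = 2 * 1024 * 1024
page_bytes = len(PAGE.encode("utf-8"))
remaining = QUOTA_BYTES - page_bytes
```

The point of the sketch is simply that the page’s entire structure, down to the location of each image file, was open to inspection – the opposite of the opaque interfaces discussed below.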

By contrast, the architecture of new forms of webpages and content aggregators is now conceived with an opaque algorithmic structure. Their virtual space is not one of proximity and distance based on intelligible parameters, but one of hierarchical appearance and disappearance based on unintelligible instruments of perception. [21] For instance, Google’s page-ranking algorithm mutates and evolves over time, leaving no traces behind, except the ones it uses to train itself. When presented with Google search results, users are faced with a series of temporary choices that are the result of a very intricate mechanism of automatic selection and classification. Vladan Joler defines algorithms as “instruments of measurements and perception”; algorithmic architecture can thus be understood as an operation of the more-than-human. Data collection and consumer profiling are the parameters upon which the current Internet is being built, rather than a conscious construction process carried out by its users. 
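The contrast is worth making concrete. The recurrence Brin and Page published in 1998 is itself only a few lines; what has become opaque is the evolving, proprietary machinery layered on top of it. A toy sketch of the published idea follows, run over an invented three-page link graph – it illustrates the basic mechanism only, not anything resembling the production system:

```python
# A toy sketch of the original PageRank recurrence (Brin & Page, 1998):
# each page redistributes its rank along its outgoing links, damped by
# a factor d that models a surfer occasionally jumping to a random page.
def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform rank
    for _ in range(iterations):
        # every page keeps the (1 - d) "random jump" baseline
        new = {p: (1.0 - d) / n for p in pages}
        for p, outgoing in links.items():
            # dangling pages spread their rank over the whole graph
            targets = outgoing if outgoing else pages
            share = rank[p] / len(targets)
            for q in targets:
                new[q] += d * share
        rank = new
    return rank

# Invented neighbourhood of three pages: "b" receives two inbound links,
# so it ends up ranked above "a", which receives only one.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]})
```

Even this transparent version makes the essay’s point: the user sees only the final ordering, while the parameters and iterations that produced it stay out of sight.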

While the architectural backdrop of a platform is constantly being redefined based on who is interacting with it, its facade – the interface – is pure and familiar. This interface, which we constantly visit, nonetheless obscures what lies beneath it. Even if it is a clear manifestation of rules, as it tells you what you can or cannot do, it does not reveal through which mechanisms it gathers and conveys information, nor how the user’s actions are exploited for profitable ends. The algorithmic design of GeoCities, based on neighbourhood alliances, had not yet allowed for this opacity, avoiding instances of power structures, black boxes, and opaque interfaces. It also avoided entering the black hole of rhizomatic surveillance that now permeates the virtual realm. [22] [23]  

Algorithmic neighbourhoods can also help to expose the physical infrastructure hosting them. Similarly to the opaqueness of interfaces, our built neighbourhoods are shaped by an underground infrastructure of fleshly cables and routers. Data centres, globally connected by a web of cables, host our digital selves, which wander through the unmeasurable geographies of the Internet. They are out of reach, transcending any geographical boundary, as they mirror the ubiquitous nature of algorithmic spaces. Cables and data centres are, in fact, the physical side of the Internet, its thickness on our planet. They are the physical neighbourhood mirroring the algorithmic one, hosting the latter through servers, cables, connections, and energy. The physical neighbourhood which underpins the digital infrastructure is not, however, a direct reflection of the algorithmic one. It is instead expansive, ubiquitous, fragmented, and absent, as it is designed to operate under strict safety protocols and privacy regulations.  

Figure 5 – Archived webpage “Q Pals”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Cafe/3113/
Figure 6 – Archived webpage “Monica Munro”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Club/2788/

Neighbourhood as a site of civic participation and resistance  

In June 1998, in order to boost brand awareness and advertising impressions, GeoCities introduced a watermark on its users’ web pages. [24] The watermark, much like an on-screen graphic on some TV channels, was a transparent floating GIF image that used JavaScript to stay displayed at the bottom right of the browser window. Many users felt that the watermark interfered with their website design, and threatened to move their pages elsewhere. A year later, in 1999, Yahoo bought the platform and consequently implemented its “Terms of Service agreement”, leading to a unanimous reaction by the homesteaders. [25] The “Haunting of GeoCities” was the users’ response to the threat over content rights and access control. Each neighbourhood became a ghost town, where homepages were stripped of their content and colours, replaced with excerpts of the offending Terms of Service. As authors Reynolds and Hallinan point out, “users sensed that Yahoo’s unfettered access to this content threatened their creative control and diluted their power to make decisions about how and where to display their content. … some enterprising homesteaders sought to foil Yahoo’s legal and digital access to their intellectual property by removing it from the service altogether”. [26] The collective operation, moreover, represented a strategic mobilisation of GeoCities’ design, defined by co-founder David Bohnett as “a bottoms-up, user-generated content mode”. [27] [28] The homesteaders’ remarkable political response allowed them to preserve a certain degree of control over their content, interfering with the dominating “Terms of Service agreement” which regulates, even more so today, every action we take within a platform. 

The “Haunting” protest represented a point of resistance towards the tendency of tech-giants to channel social traffic through a corporate digital platform ecosystem – a ubiquitous model in today’s internet. [29] The organised response by the homesteaders was only possible by virtue of the very architecture of GeoCities. Neighbourhoods allowed a bottom-up response that could counter the overarching corporate control put in place by Yahoo. It was a gathering that was empowered by proximity and affection, while it could exploit the temporary nature of the homepages’ construction as a medium for political change. In 2009, in response to the termination of GeoCities by Yahoo, new mechanisms of neighbourly rebuttal emerged. The German hosting provider JimdoWeb, for instance, attempted to host the nomad homesteaders by launching the Lifeboat for GeoCities webpage. Simultaneously, internet archivists began meticulously archiving each homepage of GeoCities, a countering act to preserve memory and gather residues of the city. 

The archived remains of the virtual city stand as an alternative approach to the complexity and opaqueness of the algorithmic layering of contemporary web-hosting services, as much as they reveal the ‘trans-scalar’ infrastructure of the Internet. [30] These neighbourly entanglements help us make sense of the current digital “global village”, offering an entry point to analyse how it is being shaped by the effects of globalisation, market economies, and imprudent media. [31] [32] Moreover, they display how the global village is being governed by algorithmic interdependencies, which in turn affect the architectural formations in both virtual and physical realities. [33]  

Figure 7 – Archived webpage “Gay Denton”, West Hollywood neighbourhood. Image capture April 16 2022. Source: https://geocities.restorativland.org/WestHollywood/Cafe/1979/Pages/gaydenton.html
Figure 8 – Geocities’ neighbourhoods collage, 2022. Image credit: Alessandro Celli and Ibrahim Kombarji.

References

[1] Archive Team. Archiveteam.org. https://wiki.archiveteam.org/index.php?title=Main_Page (accessed April 16, 2022).

[2] R. Vijgen, “The Deleted City”, http://www.deletedcity.net/, (2017).

[3] Restorativland, “The Geocities Gallery”, https://geocities.restorativland.org/, (accessed March 1, 2022).

[4] “OoCities”, https://www.oocities.org/#gsc.tab=0, (accessed March 1, 2022).

[5] O. Lialina & D. Espenschied, “One Terabyte of Kilobyte Age”, Rhizome.org. https://anthology.rhizome.org/one-terabyte-of-kilobyte-age, (accessed March 1, 2022).

[6] A.J. Kim, Community Building on the Web: Secret Strategies for Successful Online Communities (United Kingdom: Pearson Education, 2006).

[7] B. Sawyer, D. Greely, Creating GeoCities Websites, (Cincinnati, Ohio: Muska & Lipman Pub, 1999).

[8] Ibid.

[9] C. Bassett, The arc and the machine: Narrative and new media, (Manchester: Manchester University Press, 2013).

[10] J. Jacobs, “The City: Some Myths about Diversity”, The death and life of great American cities, (New York: Random House, 1961).

[11] R. Sampson, “The Place of Context: A Theory and Strategy for Criminology’s Hard Problems”, Criminology 51 (The American Society of Criminology, 2013).

[12] B. Latour, Critical Zones: The Science and Politics of Landing on Earth, (Cambridge, MA: MIT Press, 2020).

[13] B. Sawyer, D. Greely, Creating GeoCities Websites, (Cincinnati, Ohio: Muska & Lipman Pub, 1999).

[14] B. Looker, A Nation of Neighborhoods: Imagining Cities, Communities, and Democracy in Postwar America, (Chicago: The University of Chicago Press, 2015).

[15] Ibid.

[16] Ibid.

[17] C. Busta, “Losing Yourself in the Dark”. Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/essays/losing-yourself-in-the-dark/, (accessed April 16, 2022).

[18] S.U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism, (United States: NYU Press, 2018).

[19] V. Joler, “New Extractivism”, Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/artwork/new-extractivism/, (accessed April 16, 2022).

[20]  C. Busta, “Losing Yourself in the Dark”. Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/essays/losing-yourself-in-the-dark/, (accessed April 16, 2022).

[21]  V. Joler, “New Extractivism”, Open Secret, KW Institute for Contemporary Art, https://opensecret.kw-berlin.de/artwork/new-extractivism/, (accessed April 16, 2022).

[22] D. Savat, “(Dis)Connected: Deleuze’s Superject and the Internet”, International Handbook of Internet Research, 423–36 (Dordrecht: Springer, 2009).

[23] K.D. Haggerty, R. Ericson, “The Surveillant Assemblage”. British Journal of Sociology, 51, 4, 605-622, (United Kingdom: Wiley-Blackwell for the London School of Economics, 2000).

[24] J. Hu, “GeoCitizens fume over watermark”, CNet.com, https://www.cnet.com/tech/services-and-software/geocitizens-fume-over-watermark/ (accessed March 1, 2022).

[25] R. Ku, Cyberspace Law: Cases and Materials, (New York: Wolters Kluwer, 2016).

[26] C. Reynolds, B. Hallinan, “The haunting of GeoCities and the politics of access control on the early Web”, New Media & Society, (United States: SAGE Publishing, 2021).

[27] Ibid.

[28] B. McCullough, “Interview with David Bohnett, founder of GeoCities”. Internet History Podcast, http://www.internethistorypodcast.com/2015/05/david-bohnett-founder-of-geocities/, (accessed April 16, 2022).

[29] J. Van Dijck, T. Poell, M. De Waal, The Platform Society: Public Values in a Connective World, (Oxford: Oxford University Press, 2018).

[30] A. Jaque, Superpowers of Scale, (New York: Columbia University Press, 2020).

[31] M. McLuhan, The Gutenberg galaxy: the making of typographic man (Toronto: University of Toronto Press, 1962).

[32] T. Friedman, The World Is Flat: A Brief History of the Twenty-First Century, (New York: Farrar, Straus and Giroux, 2005).

[33] M. McLuhan, The Gutenberg galaxy: the making of typographic man, (Toronto: University of Toronto Press, 1962).

Fig. 2 Norman Foster’s sketch illustrates the generative process: each floor is rotated by 5° relative to the one below around the central core with the pillars, bearing the vertical loads, the services, the stairs, and the lifts. From the core, six ‘spokes’ host the floorspace at each level. Each floorspace is detached from the next by a void triangular area about 20° wide. The vertically open areas create light wells for the height of the tower, up to the thirty-second floor. These open areas wound in coils to flow ventilation and natural lighting inside the building.
The Architect and the Digital: Are We Entering an Era of Computational Empiricism? 
architectural design theory and practice, case study/studies, design education, design methods, digital design, parametric design
Giovanni Corbellini, Luca Caneparo

giovanni.corbellini@polito.it
Add to Issue
Read Article: 3887 Words

The close integration of design with computational methods is not just transforming the relationships between architecture and engineering; it also contributes to reshaping modes of knowledge development. This paper critically probes some issues related to this paradigm shift and its consequences on architectural practice and self-awareness, looking at the potential of typical teaching approaches facing the digital revolution. The authors, who teach an architectural design studio together, coming from different backgrounds and research fields, probe the topic according to their respective vantage points. 

Over the last few decades, a form of design agency has developed that uses digital tools for the interactive generation of solutions, dynamically linking analytic and/or synthetic techniques. 

The analytic techniques make use of simulation – the capability to forecast certain aspects of building performance. While in conventional practice simulation usually plays a consulting role in the later stages of the design process, in the new forms of agency it works as a generative device from the earliest phases. 

The synthetic techniques address, on the other hand, more organic, para-biologic concepts – for instance “emergence, self-organization and form-finding” – looking for “benefits derived from redundancy and differentiation and the capability to sustain multiple simultaneous functions”. [1] 

Structures and their conception stand out as a part of architectural design where the digital impact shows its clearest consequences. Candela, Eiffel, Nervi and Torroja considered, for instance, that calculations have to go in parallel with an intuitive understanding of the form: “The calculation of stresses”, writes Torroja, “can only serve to check and to correct the sizes of the structural members as conceived and proposed by the intuition of the designer”. [2] “In this fundamental phase of design”, Nervi adds, “the complex formulas and calculation methods of higher mathematics do not serve. What are essential, however, are rough evaluations based on simplified formulas, or the ability to break down a complex system into several elementary ones”. [3] At the time, the computational aspects were prohibitively cumbersome; the Frontón Recoletos required from Torroja one hundred and fifty-eight pages of calculations with approximate methods. Classical analytical procedures provided limited tools for simulation: “It was mandatory for the engineer to supplement his analyses with a great deal of judgment and intuition accumulated over years of experience. Empiricism played a great role in engineering design; while some general theories of mechanical behaviour were available, methods for applying them were still under development, and it was necessary to fall back upon approximation schemes and data taken from numerous tests and experiments”. [4] 

After the epoch of Nervi and Torroja, research and practice have been deeply influenced by the combined actions of computation toward a unifying approach to the different theories in mechanics, thanks to exponential performance improvements in hardware, as well as achievements in symbolic and matrix languages, and discretisation methods (e.g., boundary and finite element methods) implemented in software. At present, the wide availability of computational methods and tools can produce numerical simulations out of complex forms, with the expectation of providing a certain degree of knowledge and understanding of mechanics, energetics, fluids, and acoustics. The compelling possibilities of boundary or finite element methods, plus finite difference or volume methods, have produced a shift from the science-of-construction pioneers’ awareness that not everything can be built, [5] to the “unprecedented morphology freedom” of the present. [6] Therefore, “We are limited in what we can build by what we are able to communicate. Many of the problems we now face”, as Hugh Whitehead of Foster and Partners points out, “are problems of language rather than technology. The experience of Swiss Re established successful procedures for communicating design through a geometry method statement”. [7] 
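What “discretisation” means can be shown at toy scale: a continuous equation is replaced by algebraic relations on a grid, and the resulting linear system is solved mechanically – exactly the step that once demanded Torroja’s pages of hand calculation. The sketch below is a minimal finite-difference example (a 1D model problem, not any historical method, and far simpler than boundary- or finite-element practice):

```python
# Finite-difference discretisation of the 1D model problem
#   u''(x) = -1 on (0, 1), with u(0) = u(1) = 0,
# whose closed-form solution is u(x) = x(1 - x)/2. The second
# derivative is replaced by a difference quotient on n interior grid
# points, giving a tridiagonal linear system solved by the Thomas
# algorithm (Gaussian elimination specialised to tridiagonal matrices).
def solve_bar(n=50):
    h = 1.0 / (n + 1)
    a = [-1.0] * n      # sub-diagonal
    b = [2.0] * n       # main diagonal
    c = [-1.0] * n      # super-diagonal
    d = [h * h] * n     # right-hand side: (-u[i-1] + 2u[i] - u[i+1]) = h^2
    # Forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u, h

u, h = solve_bar()
exact = lambda x: x * (1 - x) / 2  # closed-form solution, for comparison
max_error = max(abs(u[i] - exact((i + 1) * h)) for i in range(len(u)))
```

For this quadratic solution the grid values coincide with the exact ones up to rounding; the general point is that once the continuum is discretised, the “calculation” becomes a routine machine task.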

 “Parametric modelling”, Foster and Partners stated, “had a fundamental role in the design of the tower. The parametric 3D computer modelling process works like a conventional numerical spreadsheet. By storing the relationships between the various features of the design and treating these relationships like mathematical equations, it allows any element of the model to be changed and automatically regenerates the model in much the same way that a spreadsheet automatically recalculates any numerical changes. As such, the parametric model becomes a ‘living’ model – one that is constantly responsive to change – offering a degree of design flexibility not previously available. The same technology also allows curved surfaces to be ‘rationalized’ into flat panels, demystifying the structure and building components of highly complex geometric forms so they can be built economically and efficiently”. [8] 
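The spreadsheet analogy in the quote can be sketched in a few lines: derived quantities are stored as relationships over the parameters, never as fixed numbers, so editing one parameter “regenerates” everything downstream. The defaults loosely echo the figures in Foster’s sketch (a 5° twist per floor); the class itself is purely illustrative and has nothing to do with the firm’s actual modelling system:

```python
# A toy "parametric model as spreadsheet": derived quantities are
# expressed as methods and properties of the parameters, so changing
# a parameter automatically recalculates the whole model, just as a
# spreadsheet recalculates dependent cells. Illustrative only.
class ParametricTower:
    def __init__(self, floors=41, twist_per_floor_deg=5.0):
        self.floors = floors                        # parameter "cell"
        self.twist_per_floor_deg = twist_per_floor_deg  # parameter "cell"

    def floor_rotation(self, level):
        """Absolute rotation of floor plate `level`, in degrees."""
        return level * self.twist_per_floor_deg

    @property
    def total_twist(self):
        """Rotation of the top plate relative to the ground floor."""
        return self.floor_rotation(self.floors - 1)

tower = ParametricTower()           # 41 floors, 5 degrees per floor
baseline_twist = tower.total_twist  # 40 * 5.0 = 200.0 degrees
tower.twist_per_floor_deg = 2.5     # edit a single "cell"...
revised_twist = tower.total_twist   # ...and the model regenerates: 100.0
```

Because no derived number is ever stored, the model stays “living” in exactly the sense the quote describes: it cannot fall out of date with its own parameters.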

Of course, communication is here understood within a very specific part of the design process, mainly connected with fabrication issues and their optimisation, but it is a concept that involves many layered levels of meaning. [9] Curiously, this shift from the physical to the immaterial reminds us of the same step made by Leon Battista Alberti, who conceived design as a purely intellectual construct and was obsessed by its transmission from idea to built form without information decay. [10] Digital innovation promises to better connect the engineering process (focus on the object) with the wider reality (the architectural perspective), enabling design teams to deal with increasingly complex sets of variables. Freedom comes, however, with the disruption of the design toolbox, usually more defined by constraints than capabilities, so that the resulting wild fluctuations of effects seem increasingly disconnected from any cause. Design choices are therefore looking for multifaceted narrative support – and the “Gherkin”, with its combination of neo-functional-sustainable storytelling and metaphorical shape, turns out to be emblematic from this point of view too. [11] 

Furthermore, extensive numerical simulations raise a question as to what extent they prove reliable, both because of their intrinsic functionality and the “black box” effect connected to the algorithmic devices. The latter, especially in the latest applications of artificial intelligence such as neural networks, produce results through processes that remain obscure even to their designers, let alone to less-aware users. Besides, the coupling of simulation with generative modelling through interactivity may not assist the designer in developing the understanding that, in several cases, (small) changes in the (coded) hypotheses can produce radically different solutions. Thus, the time spent in simulating alternatives can be more profitably spent working on different design hypotheses, and on architectural, technological and structural premises, perhaps with simpler computational models. 

Are we entering an era of computational empiricism, as some authors maintain? [12] 

Languages of innovation 

Generative modelling, morphogenesis, parametric tooling, computational and performative design… all these apparatuses have brought methodological innovation and closer integration among different disciplines, bridging the gaps between fields. Modelling the project, the main common aim of this effort, has from the beginning leaned on logic and mathematics as a shared lingua franca. [13] Since the 1960s, applied mathematics has extended its applications through the formalisation process of information technology, which has developed the tools and the models beneficial for the purposes of science and technology. Information and communication technology puts into effect “the standardisation and automation of mathematical methods (and, as such, a reversal of the relationship of domination between pure mathematics and applied mathematics and, more generally, between theory and engineering)”. [14] 

The redefinition of roles between theories and techniques, when applied to design, began in mathematics and physics with a metamorphosis of language, [15] with a shift towards symbolic languages that have gone beyond the mechanics of structures and the thermodynamics of buildings, subjecting them to automatic calculation and finalising them in computation. [16] “Today, it is a widely held view that the advent of electronic computation has put an end to this semiempirical era of engineering technology: sophisticated mathematical models can now be constructed of some of the most complex physical phenomena and, given a sufficiently large computer budget, numerical results can be produced which are believed to give some indication of the response of the system under investigation”. [17] 

The straightforward capability to model and simulate projects, supported by the evidence of results, has given confidence in the emerging computational tools, highlighting the dualism between the desire to make these devices usable for a wide range of practitioners, in a variety of cases and contexts, and the need to ground a deeper understanding within a reflective practice. Moreover, the very nature of digital tools exposes designers to an increasing risk of becoming “alienated workers” who, in Marxian terms, own their means of production neither in actuality – software companies lease their products and protect them against unauthorised modifications – nor, above all, conceptually, since their complex machinery requires a specifically dedicated expertise. Therefore, within the many questions this paradigm shift is raising about the redefinition of theories and practices and their mutual relationship, a main concern regards educational content and approaches, in terms of their ability to provide useful knowledge to future practitioners and aid their impact on society. In the architectural design field – which traditionally crossbreeds arts, applied sciences, and humanities in order to fulfil a broad role of mediation between needs and desires – this means dealing with an already contradictory pedagogic landscape in which ideologically opposite approaches (namely method-oriented and case-oriented pedagogies) overlap. 

The specificity of architectural design teaching does not escape this tension between methodological ambitions, nurtured by modern thinking and its quest for rationalisation, and the interplay between generations, languages and attitudes involved in learning through examples – even with its paradoxical side effects. One would expect, in fact, that a “positive” (according to Christopher Alexander), rule-based training should yield more open-ended outcomes than the “negative”, academic, disciplinary learning by copying. [18] But, on the one hand, the methodological approach implies an idea of linear control – towards optimisation and performance as well as in social and political terms – which reveals its origin in Enlightenment positivism. The Durandian apparatuses so widespread after World War II, with their proto-algorithmic design grammars, accordingly ended up reproducing strict language genealogies. A similar trend seems to be emerging nowadays, in the convergence toward the same effective solutions in arts, sports, and whatever else, as a by-product of digital efficiency – which even the technical camp itself is questioning. On the other hand, tinkering with the interpretation and application of examples makes possible the transmission of the many unspoken and unspeakable aspects connected to any learning endeavour. Getting closer to “good” examples – testing their potential according to specific situations – allows their inner quality to be grasped, reignited in different conditions, and finally transcended. Since forgetting requires something to be forgotten, Alexander is somehow right in framing this teaching attitude as “negative”: ironically, imitation provides the learning experience through which personal voices can emerge and thrive. 

Challenges ahead 

Turpin Bannister considered that in “an age of science”, architects “abandoned the scientific approach as detrimental to the pure art of design. On even the simplest question they acquiesced to their engineer and so-called experts”. [19] The pervasive penetration of computation in design would probably have met Bannister’s approval. The consequences and methodological implications are so far-reaching that they raise questions: how must education deal with the increased role of interactive computation in architectural design? And, more generally, with techno-science, its languages and methodologies? 

Architectural design still relies on a “political” attitude, and mediation between the “two cultures” [20] is a fundamental asset of its disciplinary approach. Even though the unity of knowledge has disappeared with the advent of modern science, as Alberto Pérez-Gómez stated, [21] we ideally aspire to become like renaissance polymaths, mastering state-of-the-art skills in the most disparate fields. But in the long time that separates us from Brunelleschi and Alberti, the amount of knowledge required by the different aspects of the practice, even those which are specifically architectural, has grown exponentially, and trying to get a minimum of mastery over it would demand a lifelong commitment and extraordinary personal qualities. Digital prostheses promise to close the gap between the desire for control over the many facets of the design process and the real possibility of achieving it. Some consequences of the augmented agency provided by new information and communication technologies are already evident in the overlapping occurring in the expanded field of the arts, with protagonists from different backgrounds – visual arts or cinema for instance – working as architects or curators and vice versa. [22] The power of the digital to virtually replace those “experts”, to whom, according to Turpin Bannister, architects outsource their own choices, seems to act therefore as an evolutionary agent against overspecialisation, confirming the advantage Bucky Fuller attributed to the architect as the last generalist. [23] 

However, without understanding and manipulating what happens within the black box of the algorithm, we still face the risk of being “designed” by the tools we put our trust in, going on to accept a subordinate position. Speaking machine, as John Maeda has pointed out, [24] is becoming necessary in order to contribute innovatively to any design endeavour. The well-known Asian American researcher, designer, artist and executive comes from a coding background, later supplemented with the study and practice of design and arts (along with business administration). His educational path and personal achievements indicate that such an integration of expertise is possible and desirable, even though his logical-mathematical grounding is likely the reason he mostly works with the immaterial, exploring media design and the so-called experience economy. Architectural schools are therefore facing the issue of if, when, and how to introduce coding skills into their already super-crammed syllabuses – from which, very often, visual arts, philosophy, law, storytelling and other much needed approaches and competencies are absent. One can argue that coding would provide young professionals with expertise they could immediately use in the job market, enabling them to better interact with contemporary work environments. On the other hand, a deeper perspective shows how the “resistance” of architectural specificity produced exceptional results in revolutionary times: academic education acted for the Modern masters as both a set of past, inconsistent practices to overcome and a background that enhanced the quality of their new language. 

Digitalisation looks like a further step along the process of the specialisation of knowledge, which unfolded hand-in-hand with the development of sciences, techniques, and their languages. Since the dawn of the modern age, architects have often tried to bring together a unified body of knowledge and methodology; first around descriptive geometry, and then around geometry as a specific discipline which “gives form” to mathematics, statics and mechanics. “Geometry is the means, created by ourselves, whereby we perceive the external world and express the world within us. Geometry is the foundation”, Le Corbusier writes in the very first page of his Urbanisme, trying to keep pace with modernisation and establishing a new urban planning approach according to its supposed “exactitude”. [25] But while hard sciences and their technical application rely on regularity of results in stable experimental conditions, architects are still supposed to give different answers to the same question – or, more precisely, to always reframe architectural problems, questioning them in different ways. 

Considering the volatility of the present situation, opening up and diversifying the educational offer seems a viable bet, more so than the attempt to formulate a difficult synthesis. Only by being exposed to the conflict between the selective, deterministic optimisation promise of code-wise design, and the dissipative, proliferating, unpredictable interpretation of cases can architects find their own, personal way to resolve it. 

Fig. 1 Norman Foster’s sketch for the headquarters of the Swiss Reinsurance Company, 30 St Mary Axe, in the historic core and financial district of the City of London. Foster + Partners designed a fifty-storey tower, 590 ft (180 m) tall, whose organic form adds a distinctive identity to the skyline of the city.
Fig. 2 Norman Foster’s sketch illustrates the generative process: each floor is rotated by 5° relative to the one below, around the central core that houses the load-bearing pillars, the services, the stairs and the lifts. From the core, six ‘spokes’ host the floorspace at each level. Each floorspace is detached from the next by a void triangular area about 20° wide. These vertically open areas create light wells rising through the tower up to the thirty-second floor, and spiral in coils that draw ventilation and natural light into the building.
Fig. 3 Norman Foster’s sketch for the fully-glazed domed restaurant atop the tower.
Fig. 4 The tapering profile of the tower allows a reduced footprint at street level, 160 ft (49 m) across, reaches its largest diameter of 184 ft (56 m) at the 21st level, and culminates spatially in the glazed domed roof. The diagrid structure parametrises the A-shaped frames and relieves the central core of lateral loading. Each A-shaped frame spans two floors, and the frames diminish in proportion from the 21st level towards the pitched dome above and the lobby level below.

Fig. 5 Norman Foster’s sketch makes clear how the A-shaped frames take on the diagrid geometry, with two diagonal columns of tubular steel, 20 in (508 mm) in diameter, reflected in the diamond-shaped pattern of the window panes.

References

[1] S. Roudavski, “Towards Morphogenesis in Architecture”, International Journal of Architectural Computing, 3, 7 (2009) https://www.academia.edu/208933/Towards_Morphogenesis_in_Architecture (accessed 24 March 2021).  

[2] E. T. Miret, J. J. Polivka and M. Polivka, Philosophy of Structures, (Berkeley: University of California Press, 1958), 331.  

[3] P. L. Nervi, Aesthetics and Technology in Building (Cambridge, Mass.; London; Harvard University Press: Oxford University Press, 1966), 199. 

[4] T. Oden, K.-J. Bathe, “A commentary on Computational Mechanics”, Applied Mechanics Reviews, 31, 8 (1978), 1055-1056. 

[5] “We can now wonder whether any type of imaginary surface, is constructible. The answer is in the negative. So: how to choose and how to judge an imagined form?” E. T. Miret, J. J. Polivka and M. Polivka, Philosophy of Structures, (Berkeley: University of California Press, 1958) 78. 

[6] M. Majowiecki, “The Free Form Design (FFD) in Steel Structural Architecture–Aesthetic Values and Reliability”, Steel Construction: Design and Research, 1, 1 (2008), 1. 

[7] A. Menges, “Instrumental geometry”, Architectural Design, 76, 2 (2006), 46. 

[8] Foster and Partners, “Modeling the Swiss Re Tower”, ArchitectureWeek, 238 (2005), http://www.architectureweek.com/2005/0504/tools_1-1.html (accessed 10 April 2022) 

[9] “[Marjan] Colletti aptly quotes Deleuze stating: ‘The machine is always social before it is technical.’ The direct interaction between the designer and the equipment provides a feedback system of communication. He argues that the computer should ‘be regarded neither as abstract nor as machine’, but rather as an intraface.” C. Ahrens, “Digital Poetics, An Open Theory of Design-Research in Architecture”, The Journal of Architecture, 21, 2, (2016), 315; Deleuze’s passage is in G. Deleuze, C. Parnet, Dialogues (New York: Continuum International Publishing, 1987), 126-127; Colletti’s in M. Colletti, Digital Poetics, An Open Theory of Design-Research in Architecture (Farnham: Ashgate, 2013), 96. 

[10] “We shall therefore first lay down, that the whole Art of Building consists in the Design, and in the Structure. The whole Force and Rule of the Design, consists in a right and exact adapting and joining together the Lines and Angles which compose and form the Face of the Building. It is the Property and Business of the Design to appoint to the Edifice and all its Parts their proper Places, determinate Number, just Proportion and beautiful Order; so that the whole Form of the Structure be proportionable. Nor has this Design any thing that makes it in its Nature inseparable from Matter; for we see that the same Design is in a Multitude of Buildings, which have all the same Form, and are exactly alike as to the Situation of their Parts and the Disposition of their Lines and Angles; and we can in our Thought and Imagination contrive perfect Forms of Buildings entirely separate from Matter, by settling and regulating in a certain Order, the Disposition and Conjunction of the Lines and Angles.” L. B. Alberti, The Ten Books of Architecture (London: Edward Owen, 1755 [1450]), 25. 

[11] A. Zaera-Polo, “30 St. Mary Axe: Form Isn’t Facile”, Log, 4 (2005). 

[12] See – along with Oden, Bathe, and Majowiecki – Paul Humphreys, “Computational Empiricism”, Topics in the Foundation of Statistics, ed. by B. C. van Fraassen (Dordrecht: Springer, 1997) and P. Humphreys, Extending Ourselves: Computational Science, Empiricism, and Scientific Method. (New York: Oxford University Press, 2004). 

[13] C. Alexander, Notes on the Synthesis of Form (Cambridge, Mass.; London: Harvard University Press, 1964). 

[14] J. Petitot, “Only Objectivity”, Casabella, 518, (1985), 36. 

[15] E. Benvenuto, An Introduction to the History of Structural Mechanics (New York: Springer-Verlag, 1991). 

[16] M. Majowiecki, “The Free Form Design (FFD) in Steel Structural Architecture–Aesthetic Values and Reliability”, Steel Construction: Design and Research, 1, 1 (2008), 1. 

[17] T. Oden, K.-J. Bathe, “A commentary on Computational Mechanics”, Applied Mechanics Reviews, 31, 8 (1978), 1056. 

[18] “There are essentially two ways in which such education can operate, and they may be distinguished without difficulty. At one extreme we have a kind of teaching that relies on the novice’s very gradual exposure to the craft in question, on his ability to imitate by practice, on his response to sanctions, penalties, and reinforcing smiles and frowns. … The second kind of teaching tries, in some degree, to make the rules explicit. Here the novice learns much more rapidly, on the basis of general ‘principles’. The education becomes a formal one; it relies on instruction and on teachers who train their pupils, not just by pointing out mistakes, but by inculcating positive explicit rules.” C. Alexander, Notes on the Synthesis of Form (Cambridge, Mass.; London: Harvard University Press, 1964), 35. 

[19] T. C. Bannister, “The Research Heritage of the Architectural Profession”, Journal of Architectural Education, 1, 10 (1947). 

[20] C. P. Snow, The Two Cultures and the Scientific Revolution (Cambridge: Cambridge University Press, 1962). 

[21] A. Pérez-Gómez, Architecture and the Crisis of Modern Science (Cambridge, Mass.: The MIT Press, 1983). 

[22] “Artists after the Internet take on a role more closely aligned to that of the interpreter, transcriber, narrator, curator, architect.” A. Vierkant, The Image Object Post-Internet, http://jstchillin.org/artie/vierkant.html (accessed 21 September 2015). The artist Olafur Eliasson, for instance, started up his own architectural office (https://studiootherspaces.net/, accessed 30 March 2021), and the film director Wes Anderson authored the interior design of the Bar Luce, inside the Fondazione Prada in Milan. 

[23] “Fuller … noted that species become extinct through overspecialization and that architects constitute the ‘last species of comprehensivists.’ The multidimensional synthesis at the heart of the field is the most invaluable asset, not just for thinking about the future of buildings but for thinking about the universe. Paradoxically, it is precisely when going beyond buildings that the figure of the architect becomes essential.” Mark Wigley, Buckminster Fuller Inc.: Architecture in the Age of Radio (Zürich: Lars Müller, 2015), 71. 

[24] J. Maeda, How to Speak Machine: Laws of Design for a Digital Age (London: Penguin Business, 2019). 

[25] Le Corbusier, The City of Tomorrow and its Planning (London: John Rodker, 1929 [1925]), 1. 

Retrofit Project by Frederik Vandyck, Design Sciences Hub
Towards the computation of architectural liberty  
architectural liberty, automation, computation, design theory, fragmentation
Sven Verbruggen, Elien Vissers-Similon

sven.verbruggen@uantwerpen.be

A design process consists of a conventionalised practice – a process of (personal) habits that have proven to be successful – combined with a quest for creative and innovative actions. As tasks within the field of architecture and urban design become more complex, professionals tend to specialise in one of many subsets, such as designing, modelling, engineering, managing, construction, etc. They use digital tools which are developed for these specialised tasks only. Therefore, paradoxically, automation and new algorithms in architecture and urbanism are primarily oriented to simplify tasks within subsets, rather than engaging with the complex challenges the field is facing. This fragmented landscape of digital technologies, together with the lack of proper data, hinders professionals’ and developers’ ability to investigate the full digital potential for architecture and urban design. [1] Today, while designers explore the aid that digital technologies can provide, it is mostly the conventionalised part of practice that is being automated to achieve a more efficient workflow. This position statement argues for a different approach: to overcome fragmentation and discuss the preconditions for truly coping with complexity in design – which is not a visual complexity, nor a complexity of form, but rather a complexity of intentions, performance and engagement, constituted in a large set of parameters. We will substantiate our statement with experience in practice, reflecting on the Retrofit Project: our goal to develop a smart tool that supports the design of energy neutral districts. [2]  

So, can designers break free from the established fragmentation and compute more than technical rationale, regulations and socio-economic constraints? Can they also incorporate intentions of aesthetics, representation, culture and critical intelligence into an architectural algorithm? To do so, the focus of digital tools should shift from efficiency to good architecture. And to compute good architecture, there is a need to codify a designer’s evaluation system: a prescriptive method to navigate a design process by giving value to every design decision. This evaluation system ought to incorporate architectural liberty – and therein lies the biggest challenge: differentiating between where to apply conventionalised design decisions and where (and how) to be creative or inventive. Within a 5000-year-old profession, the permitted liberty for these creative acts has been defined elastically: while some treatises allow only a minimum of liberty for a designing architect, others will lean towards a maximum form of liberty to guarantee good architecture. [3]  
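To make the notion of "giving value to every design decision" concrete, such an evaluation system can be caricatured as a weighted scoring function over a handful of criteria. The sketch below is purely illustrative: the criteria names, the weights, and the linear aggregation are our assumptions for the sake of the example, not part of any actual design tool; in particular, the "aesthetic_quality" term stands in for exactly the liberty that the text argues is hardest to codify.

```python
from dataclasses import dataclass

# Hypothetical criteria and weights -- illustrative assumptions only.
WEIGHTS = {
    "energy_performance": 0.30,
    "cost_efficiency": 0.20,
    "contextual_fit": 0.25,
    "aesthetic_quality": 0.25,  # the hard-to-codify "liberty" term
}

@dataclass
class DesignDecision:
    label: str
    scores: dict  # criterion -> score in [0, 1]

def evaluate(decision: DesignDecision) -> float:
    """Aggregate per-criterion scores into a single value (weighted sum)."""
    return sum(WEIGHTS[c] * decision.scores.get(c, 0.0) for c in WEIGHTS)

# Example: scoring one (fictional) design decision.
tower = DesignDecision(
    "rotate floorplate 5 degrees",
    {"energy_performance": 0.7, "cost_efficiency": 0.4,
     "contextual_fit": 0.6, "aesthetic_quality": 0.9},
)
print(round(evaluate(tower), 3))  # 0.665
```

The point of the caricature is its inadequacy: a fixed weighting treats every project identically, whereas the article argues that deciding where convention applies and where liberty is warranted cannot be reduced to a static rule set.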

In the late 1990s and early 2000s, a small group of early adopters – Greg Lynn, Zaha Hadid Architects, UN Studio – tried to tackle the field’s complexity using the emerging digital technologies. They conveniently inferred their new style or signature architecture from these computational techniques. This inference, however, caused an instant divide between existing design currents and these avant-garde styles. For the latter, the notion of complexity – the justification for their computational techniques – lies mostly within the subset of form-giving, and does not cover the complexity of the field. This stylistic path is visible in, for example, Zaha Hadid Architects’ 2006 masterplan for Kartal-Pendik in Istanbul. The design thrives on binary decisions in the 3D-modelling tool Maya, where it plays out a maximum of two parameters at once: the building block with inner court, and the tower. The resulting plastic urban mesh looks novel and stylistically intriguing, yet produces no real urbanity and contains no intelligence on the level of the building type. The methodology generates no knowledge of how well the proposed urban quarter (or its constituent buildings) will perform in terms of, for example, costs, energy production and consumption, infrastructure, city utilities, diversity and health. The fluid mass still needs all the conventional design operations to turn it into a mixture of types, urban functions and local identity. Arguably, the early adopters’ stylistic path avoided dealing with real complexity and remained close to simple automation. In doing so, while they promoted a digital turn, they may also have dug the foundations for today’s fragmentation in the field.  

Ironically, to some extent Schumacher’s treatise – certainly the parts that promote parametricism as a style – reads as a cover-up of the shortcomings of parametric software; for example, its inability to produce local diversity and typological characteristics beyond formal plasticity. [4] Schumacher further rejects Vitruvius to prevent structural rationale from taking primacy, and he disavows composition, harmony and proportion as outdated, variable communication structures, proposing “fluid space” as the new norm. [5] This only makes sense given that the alternative – a higher intelligence across the whole field of architecture and urban planning, such as codified data and machine-learning algorithms – did not yet exist for the early adopters. Contemporary applications such as Delve or Hypar do make use of such intelligent algorithms, yet prioritise technical and economic parameters (e.g. daylight, density, costs) in the name of market efficiency. [6]  

Any endeavour to overcome the established fragmentation and simplified automation will ultimately find itself struggling with the question of what good architecture is. After all, even with large computational power at hand, the question remains: how to evaluate design decisions beyond the merely personal or functional, in a time when no unified design theory exists? In fact, the fragmented specialisation of today’s professionals has popularised the proclamation of efficiency. As a result, an efficiency driver (whether geared by controlling costs, management or resources) is often disguised as moral behaviour, as if its interest were good architecture first, with the profit and needs of beneficiaries coming only second. If the added value of good architecture cannot be defined, the efficiency driver will continue to gain the upper hand, eroding the architectural profession into an engineering and construction service providing calculations, permits and execution drawings.  

It was inspiring to encounter Alessandro Bava’s Computational Tendencies on this matter:  

The definition of what constitutes “good” architecture is, in fact, always at the center of architecture discourse, despite never finding a definite answer. Discourses around digital architecture have too often resolved the question of the “good” in architecture by escaping into the realm of taste or artistic judgment. [7] 

Bava reads Serlio’s architectural treatise as an original evaluation system that attributes universal value, and revisits Rossi’s exalted rationalism to propose a merger of architecture’s scientific qualities with its artistic qualities. He aims to re-establish architecture’s habitat-forming abilities and prevent architecture from becoming an amalgam of reduced and fragmented services. However, Serlio’s treatise did not provide a fully codified and closed formal system, as it still includes the liberty of the architect. [8] Serlio’s On Domestic Architecture, for instance, places its emphasis on ideal building types, mostly without context. No consideration is given to how these types ought to be modified when they need to be fitted into less ideal configurations such as non-orthogonal grids. The books also remain ignorant of the exceptions: the corner-piece type, or the fitting-parts that mediate between buildings and squares at a higher level. This is not a cheap critique of Serlio’s work. It is an awareness one needs to have when revisiting Serlio’s work as a “proto-BIM system, one whose core values are not market availability or construction efficiency, but harmonic proportions”. [9] Arguably, it is the liberty, the modifications, and the exceptions that need to be codified, to reach beyond simplified automation, across fragmentation, and towards an architectural algorithm to assist designers. 

This is easier said than done, otherwise the market would be flooded with design technologies by now. As with most design problems, the only way to solve them is by tackling them in practice. In 2021, the Design Sciences Hub, affiliated with the University of Antwerp, set up the Retrofit Project. The aim is to develop an application to test the feasibility of district developments. The solution will show an urban plan with an automatically generated function mix and an optimised energetic and ecological footprint for any given site and context. The project team collaborates with machine-learning experts and environmental engineers for the necessary interdisciplinary execution. Retrofit is currently in the proof-of-concept phase, which focuses on energy neutrality and will tackle urban health and carbon neutrality in the long run. 

The problem of modifications and exceptions seems the easiest to examine, as it primarily translates into a challenge of computational power and coping with a multitude of parameters. However, these algorithms should be smart enough to select a specific range within the necessary modifications and exceptions to comply with the design task at hand. In this case, the algorithm should select the correct modifications and exceptions needed to integrate certain types into any given site within the Retrofit application. In other words, there is a need for an intelligent algorithm that can be fed a large number of types as input data to generate entirely new or appropriate building types. The catch resides within the word “intelligent”: algorithms are not created intelligent; they are trained to reach a certain level of intelligence based on (1) codifiable theory and (2) relevant training sets of data. Inquiring into a variety of evaluation systems for architectural design that emerged over the last 40 years, Verbruggen revealed the impossibility of creating a closed theoretical framework and uniquely relating this framework to a conventionalised evaluation system in practice. [10] As such, both the codifiable theory – a unified evaluation system that integrates scientific and artistic qualities into one set of rules – and the training set hardly exist in architecture and urban design. To complicate matters even more, today’s non-unification is itself often embraced as the precondition for good architecture. [11-15] 

And so, the liberty question emerges here once again: how can different types, their modifications and exceptions, including their respective relationships with different contexts, be codified? It is easy to talk about codification, but much harder to implement it within a project. When different types are inserted into a database, how are the attributes defined? This task proved to be very laborious and raised many new questions in the Retrofit Project. Attributes will include shape and size, yet might also include levels of privacy, preferred material usage, degree of openness, average energetic performance, historic and social acceptance in specific areas, compatibility with different functions, etc. Which values define when and where a specific type is appropriate, and how are they weighed? Do architects alone fill up the database, and if so, which architect is qualified, and why? And when an AI application examines existing typologies within our built environment, which of these examples should be considered good, and why? Can big data or IoT sensors help in data gathering? To truly take everything into account, how much data do we really need (e.g. a structure’s age and condition, social importance, usage, materials, history)? Furthermore, when the Retrofit application runs on an artificially intelligent algorithm that is trained to think beyond the capabilities of a single architect, will the results diverge (too) much from what society is used to? 
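The attribute question can be made tangible with a sketch of what one record in such a type database might look like. Every field name below is a hypothetical example drawn from the attributes mentioned above (shape and size, privacy, openness, energetic performance, social acceptance, functional compatibility); the Retrofit Project's actual schema is not public, and the values shown are invented:

```python
from dataclasses import dataclass, field

@dataclass
class BuildingType:
    # Geometric attributes are comparatively easy to codify...
    name: str
    footprint_m2: float
    floors: int
    # ...while the softer attributes raise the questions posed in the text:
    # who assigns these values, on what scale, and with what authority?
    privacy_level: int = 0              # e.g. 0 (fully public) .. 5 (fully private)
    openness: float = 0.0               # degree of openness, 0..1
    avg_energy_kwh_m2: float = 0.0      # average energetic performance per year
    compatible_functions: list = field(default_factory=list)
    accepted_in: list = field(default_factory=list)  # contexts of historic/social acceptance

# One invented record, for illustration only.
courtyard_block = BuildingType(
    name="perimeter block with inner court",
    footprint_m2=3200.0,
    floors=5,
    privacy_level=2,
    openness=0.4,
    avg_energy_kwh_m2=85.0,
    compatible_functions=["housing", "retail"],
    accepted_in=["European historic core"],
)
print(courtyard_block.name)
```

Even this trivial schema shows the difficulty: the numeric fields presuppose agreed scales and measurement procedures, and the list fields presuppose a shared vocabulary of functions and contexts, none of which currently exist across the discipline.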

The many practical questions from the Retrofit Project show that defining the architect’s liberty is both the problem and holds the potential for digital technologies to tackle the true complexity of the field. Liberty is undeniably linked to the design process and, therefore, encoding a design process needs to (1) capture the architect’s evaluation system and (2) allow for targeted and smart data gathering. The evaluation system can then be coded into an algorithm, with the help of machine learning experts, and trained using the gathered data. Both the evaluation system and the necessary data rely heavily on the architect’s liberty. Because dealing with these liberties is a difficult task – perhaps the most difficult task in the age of digital architecture – many contemporary businesses and start-ups that claim to revolutionise the design process with innovative technologies might not revolutionise anything, because they opt for the easy route and avoid dealing with the liberty aspect. An architectural algorithm that does take the liberty aspect into account may provide designers with an artificial assistant to help tackle all complexities in the field while tapping into the full potential of today’s available computational power. 

This could be the ultimate task we set ourselves at the DSH. Studying a large dataset of design processes, steps and creative acts might reveal codifiable patterns that could be integrated into a unified and conventionalised evaluation system. This study would target large and diverse groups of designers and users in general, including their knowledge exchange with other involved professionals. Could such an integral evaluation system, combined with data gathering, finally offer the prospect of developing a truly architectural algorithm? Eventually, this too will encounter issues that require further study, such as deciding whom to involve and how to navigate wisely between the highs and lows of the wisdom of crowds: [16] can we still trust the emerging patterns detected by machine-learning algorithms to constitute proper architectural liberty and, thus, good architecture? We will proceed vigilantly, but we must explore this path to avoid further fragmentation, non-crucial automation, and the propagation of false complexity. 

References

[1] N. Leach, Architecture in the Age of Artificial Intelligence: An Introduction for Architects (London; New York: Bloomsbury Visual Arts, 2021).

[2] The Design Sciences Hub [DSH] is a valorisation team of the Antwerp Valorisation Office. The DSH works closely with IDLab Antwerp for machine-learning components and with the UAntwerp research group Energy and Materials in Infrastructure and Buildings [EMIB] to study energy neutrality within the Retrofit Project. Although the project will be led and executed by the University of Antwerp, private industry is involved as well. Four real estate partners – Bopro, Immogra, Quares and Vooruitzicht – are financing and steering this project, as is the Beacon, maximising the insights from digital technology companies. Also see: https://www.uantwerpen.be/en/projects/project-design-sciences-hub/projects/retrofit/

[3] H.W. Kruft, A History of Architectural Theory: from Vitruvius to the present (London; New York: Zwemmer Princeton Architectural Press, 1994).

[4] P. Schumacher, The Autopoiesis of Architecture: A New Framework for Architecture. Vol. 1 (Chichester: John Wiley & Sons Ltd, 2011). P. Schumacher, The Autopoiesis of Architecture: A New Agenda for Architecture. Vol. 2 (Chichester: John Wiley & Sons Ltd, 2012).

[5] Ibid.

[6] Delve is a product of Sidewalk Labs, founded as Google’s urban innovation lab, becoming an Alphabet company in 2016. Hypar is a building generator application started by former Autodesk and Happold engineer Ian Keough. Also see www.hypar.io, www.sidewalklabs.com/delve.

[7] A. Bava, “Computational Tendencies”, In N. Axel, T. Geisler, N. Hirsch, & A. L. Rezende (Eds.), Exhibition catalogue of the 26th Biennial of Design Ljubljana. Slovenia (2020): e-flux Architecture and BIO26| Common Knowledge.

[8] H.W. Kruft, A History of Architectural Theory: from Vitruvius to the present (London; New York: Zwemmer Princeton Architectural Press, 1994).

[9] A. Bava, “Computational Tendencies”, In N. Axel, T. Geisler, N. Hirsch, & A. L. Rezende (Eds.), Exhibition catalogue of the 26th Biennial of Design Ljubljana. Slovenia (2020): e-flux Architecture and BIO26| Common Knowledge.

[10] S. Verbruggen, The Critical Residue: Creativity and Order in Architectural Design Theories 1972-2012 (2017).

[11] M. Gausa & S. Cros, Operative Optimism (Barcelona: Actar, 2005).

[12] W. S. Saunders, The New Architectural Pragmatism: A Harvard Design Magazine Reader (Minneapolis: University of Minnesota Press, 2007).

[13] R. Somol & S. Whiting, “Notes around the Doppler Effect and Other Moods of Modernism” (2002), in K. Sykes (Ed.), Constructing a New Agenda: Architectural Theory 1993-2009 (1st ed., New York: Princeton Architectural Press, 2010), 188-203.

[14] K. Sykes, Constructing a New Agenda: Architectural Theory 1993-2009 (1st ed., New York: Princeton Architectural Press, 2010).

[15] S. Whiting, “The Projective, Judgment and Legibility”, lecture at the Projective Landscape Conference, organised by TU Delft and the Stylos foundation (Delft, March 2006).

[16] P. Mavrodiev & F. Schweitzer, “Enhanced or distorted wisdom of crowds? An agent-based model of opinion formation under social influence”, Swarm Intelligence, 15, 1-2 (2021), 31-46, doi:10.1007/s11721-021-00189-3; J. Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter than the Few (London: Abacus, 2005).

Open Seminar – Round Table
B–Pro Open Seminar: The Algorithmic Form
05/05/2022
Provides Ng

provides.ng.19@ucl.ac.uk

08 December 2021, 2:00 pm–4:00 pm

This Open Seminar, curated by Alessandro Bava, focused on specific authors and practices that can help corroborate a hermeneutic framework to discuss the form of algorithms.

Link to recording

Solarpunk Building for Terraforma, Alessandro Bava, 2021

The consensus on the use of algorithms in architecture revolves around the false premise that the discipline is on a linear evolutionary path from (geometrical) simplicity to complexity. Even research on digital fabrication pins its quest for innovation to this vector. By comparison, twentieth-century developments in construction materials and techniques were driven by affordability and scalability, with the aim of raising the living standards of the urban dweller, resulting in new compositional, formal and tectonic approaches.

Granted that new methodologies should correspond to new technologies, these should still be measured by their contribution to architecture, rather than by the extent to which they fulfil the potential of the technologies imported from other domains. Hermeneutic frameworks should not be sought outside the disciplinary boundaries of architecture, at the risk of a deep (and unproductive) schism between new technologies and the established ethos, knowledge and methodologies of the architect.

The essay ‘Computational Tendencies’ (Bava, 2020) attempted to weave a continuity between new technologies and the legacy of rationalism as elaborated by the loose Tendenza group operating in Europe from the 1960s, guided by the idea that the foundations of computer science evolved out of architectural theory from the Renaissance onwards, most prominently with regard to calculus and data science. This intuition is supported by the work of Roberto Bottazzi, who identified the theoretical roots of current architectural software precisely in this lineage. This continuity is necessary if we are to understand the significant impact of algorithms on how we think about an architectural project.

In fact, it remains to be established how we can interpret the new tools offered by algorithms, and digital technology at large, within the existing methodological framework of architecture. To put it simply: how are algorithms useful to architectural design? How do they expand the compositional, organisational and formal repertoire that constitutes the main objective of architecture as a habitat-making discipline?

This Open Seminar will focus on specific authors and practices that can help corroborate a hermeneutic framework for discussing the form of algorithms: Roberto Bottazzi (The Bartlett), Francesca Gagliardi and Federico Rossi (Fondamenta), Philippe Morel (ENSA Paris-Malaquais & The Bartlett), Provides Ng (The Bartlett) and Marco Vanucci (Open Systems). Twenty-minute presentations by the speakers will be followed by a final roundtable discussion. The Open Seminar will support the development of articles for Issue 02 of Prospectives in Spring 2022.

About the Curator 

Alessandro Bava is an architect based in Milan. He currently runs BB, a collaborative architectural practice, with Fabrizio Ballabio. After graduating from the Architectural Association he was part of åyr, a collective researching the sharing economy and domesticity, and was the editor of Ecocore, an ecology magazine. He taught a research cluster on computational architecture within The Bartlett B-Pro programmes in the academic year 2020/21, and previously a history and theory course in the same programme. His work has been exhibited at the Venice Biennale, the Berlin Biennale, the Stedelijk Museum in Amsterdam, the Ludwig Museum in Cologne, the Fondation Cartier in Paris, the Moderna Museet in Stockholm and, most recently, the Quadriennale in Rome.

Link to more information

Bartlett B-Pro, RC1, Gaming Consensus, 2021
Open Call for Submissions to Prospectives: Issue 03
06/05/2022
Provides Ng

provides.ng.19@ucl.ac.uk

The academic journal, published by the B-Pro team at The Bartlett School of Architecture, is calling for papers for its third issue.

Bartlett B-Pro, RC1, Gaming Consensus, 2021

Prospectives Journal is an open access journal published by B-Pro at The Bartlett School of Architecture. Launched in November 2020, the team are now compiling a ground-breaking third issue, Climate F(r)ictions, curated by Déborah López and Hadin Charbel.  
 
This third issue will explore the implications of technology for humans and ecologies in an era of climate catastrophe and increasing instability. It will ask: can technology be designed and utilised without falling into territorialising tropes? Can AI be used to challenge current production-based economies? How can existing power structures be subverted? What decisions would nature make if it could govern itself? What kinds of technologies, protocols and policies can afford such autonomy? How would this affect architectural production, design and habitation at individual, urban and larger ecological scales?  
 
The editorial team are inviting submissions from designers, researchers and thinkers. Submissions can take the form of either a short position paper (1,500–2,000 words) or a research paper (3,000–5,000 words). Submissions will be peer reviewed by Prospectives Journal’s Advisory Board. 

Deadline  

The deadline for papers is 01 June 2022, and the new issue of Prospectives will be published on 29 July.  

Submission

To submit papers, please fill in the online application form and email your submission to bartlettprospectivesjournal@ucl.ac.uk with the subject line: “Open Call Submission Prospectives Issue 3 SURNAME_POSITION or RESEARCH PAPER_paper title”. Files should be named in the same manner. All files should be supplied as Word documents. Kindly refer to this writing style guide for referencing and other formatting.

Link to more information
