A loyal companion to the breakthroughs of artificial intelligence is the fear of losing jobs to a robotic takeover of the labour market. Mary L. Gray and Siddharth Suri’s research on ghost work unveiled another possible future, in which a “last mile” requiring human intervention would always exist on the journey towards automation. The so-called “paradox of the last mile” has shaped the human labour market throughout the industrial age, repeatedly reorganising itself as it absorbs marginalised groups into its territory. These groups range from child labourers in factories, to the “human computer” women of NASA, to on-demand workers on Amazon Mechanical Turk (MTurk). Yet their strenuous efforts are often rendered invisible behind the ostensibly neutral algorithmic form of the automation process, creating “ghost work”.
Based on this concept of “the last mile”, this study excavates how its paradox has influenced architectural authorship, especially during architecture’s encounters with digital revolutions. I will first contextualise “architectural authorship” and “the last mile” within previous studies. Then I will discuss the (dis)entanglements between “automation” and “digitalisation”. Following Antoine Picon and Nicholas Negroponte, I distinguish between the pre-information age, the information age and the post-information age before locating my arguments within these three periods. Accordingly, I will study how Leon Battista Alberti, the Fun Palace, and mass-customised houses fail in the last mile of architectural digitalisation, and how these failures affect architectural authorship. From these case studies, I challenge the dominant narrative of architectural authorship as either divinity or total dissolution. In the end, I contend that it is imperative to conceive architectural authorship as relational, and call for the involvement of multi-faceted agents in this post-information age.
Architectural Authorship in the Digital Age
The emergence of architects’ authorial status can be dated back to Alberti’s De re aedificatoria, which states that “the author’s original intentions” should be sustained throughout construction – yet, at the same time, that architects should keep their distance from the construction process. This not only marks the shift from the artisanal authorship of craftsmen to the intellectual authorship of architects, but also begets the divide between the authorship of architectural designs and that of architectural end products. However, this tradition becomes problematic in the digital age, when multi-layered authorship is made feasible by the advent of mass-collaboration software and digital customisation technologies.
Building on this, Antoine Picon has argued that, despite the attempts of collaborative platforms such as BIM to include various actors, architects have entered a Darwinian world of competition with engineers, constructors and existing monopolies in order to maintain their prerogative authorship over the profession. These challenges have brought about a shift of attention within the profession, from authorship as architects to ownership as entrepreneurs. Yuan and Wang, on the other hand, call for a reconciliation of architectural authorship between regional traditions and technologies from a pragmatic perspective. However, these accounts remain fettered to positioning architects at the centre of analysis. In the following sections, I will introduce “the last mile”, a theory from the field of automation, to provide another perspective on the issues of architectural authorship.
“The Last Mile” as Method
The meaning of “the last mile” has changed several times throughout history. Metaphorically, it has been used to indicate the distance between the status quo and the goal in various fields, such as film, legal negotiations, and presidential campaigns. It was first introduced in the technology industry as “the last mile” of telecommunications, one of the earliest traceable records of which dates to the late 1980s. Afterwards, “the last mile” of logistics came into wide use in the early 2000s, following the dot-com boom of the late 90s that fuelled discussions of B2C eCommerce. In this article, however, I will use “the last mile” of automation, a concept from the recent “AI revolution” since 2010, to reconsider architectural authorship. In this context, “the last mile” of automation refers to “the gap between what a person can do and what a computer can do”, as Gray and Suri define it in their book.
I employ this theory to discuss architectural authorship for two purposes.
1. Understanding the paradox of automation helps in understanding how architectural authorship changes along with technological advancement. Pasquinelli and Joler suggest that “automation is a myth”, because machines have never entirely operated by themselves without human assistance, and might never do so. Hence the paradox that “the desire to eliminate human labour always generates new tasks for humans”, a shortcoming that has “stretched across the industrial era”. Although confined within the architectural profession, architectural authorship changes in parallel with these alterations of labour tasks.
2. I contend that changes in the denotations of “the last mile” signal turning points in both digital and architectural history. As Figure 1 suggests, in digital history, the implication of the last mile has shifted from the transmission of data to the analysis of data, and then to automation based on data. The former change was in step with the arrival of the small-data environment in the 1990s, and the latter corresponded with the leap towards the big-data environment around 2010. In a similar fashion, with the increasing availability of personal computers after the 90s, the digital spline in architecture found formal expression, and from around 2010 onwards, spirits of interactivity and mass-collaboration began to take root in the design profession. Therefore, revisiting the digital history of architecture from the angle of “the last mile” can not only provide alternative readings of architectural authorship in the past but also indicate how the future might be influenced.
Between Automation and Digitalisation
Before elucidating how architectural authorship was changed by the arrival of the automated/digital age, it is imperative to distinguish two concepts mentioned in the previous section – automation and digitalisation. To begin with, although “automation” first came into use in the automotive industry in 1936, to describe “the automatic handling of parts”, what the term alludes to has long been rooted in history. As Ekbia and Nardi define it, automation essentially relates to labour-saving mechanisms that reduce the human burden by transferring it to machines in labour-requiring tasks, both manual and cognitive. Despite this long lineage, it was not until the emergence of digital computers after WWII that its meaning became widely applicable. The notion of computerised automation was put forward by computer scientist Michael Dertouzos in 1979, highlighting its potential for tailoring products on demand. With respect to cognitive tasks, artificial intelligence that mimics human thinking is employed to tackle functions concerning “data processing, decision making, and organizational management”.
Digitalisation, on the other hand, is a more recent concept, engendered by the information society of the late 19th century, according to Antoine Picon. This period was later referred to as the Second Industrial Revolution, when mass-production was made possible by a series of innovations, including electrical power, the automobile, and the internal combustion engine. It triggered what Beniger called the “control revolution” – the volume of data exploded to the degree that it begot revolutions in information technology. Crucial to this revolution was the invention of digital computing, which brought about a paradigm shift in the information society. It changed “the DNA of information” in the sense that, as Nicholas Negroponte suggests, “all media has become digital”, converting information from atoms to bits. On this basis, Negroponte distinguishes between the information age, based on economies of scale, and the post-information age, founded on personalisation.
It can be observed that automation and digitalisation are intertwined in multiple ways. Firstly, had there been no advancement in automation during the Second Industrial Revolution, there would have been no need to develop information technology, as data would have remained at a manageable level. Secondly, the advent of digital computers has further intermingled these two concepts to the extent that, in numerous cases, for something to be automated, it must first be digitalised, and vice versa. In the architectural field alone, examples can be found in cybernetics in architecture and planning, digital fabrication, smart materials, and so on. Hence, although these two terms are fundamentally different – most obviously, automation is affiliated with the process of input and output, while digitalisation relates to information media – the following analysis does not seek to differentiate between the two. Instead, I discuss “the last mile” in the context of the reciprocity between these two concepts. After all, architecture itself sits at the convergence point between material objects and media technologies.
Leon Battista Alberti: Before the Information Age
Digitalisation efforts by architects, however, appear to have come earlier than such attempts in the industrial settings of the late 19th century. This spirit can be traced back to Alberti’s insistence on identicality during information transmission, achieved by compressing two-dimensional and three-dimensional information into digits – as exemplified by Descriptio Urbis Romae and De statua. In terms of architecture, as mentioned previously, he positions built architecture as an exact copy of the architect’s intention. This stance might be influenced by his views on painting. First, he maintains that all arts, including architecture, are subordinate to painting, from which “the architraves, the capitals, the bases, the columns, the pediments, and all other similar ornaments” came. Second, in his account, “the point is a sign” that can be seen by the eye; the line is joined from points, and the surface from lines. As a result, the link between signs and architecture is established through painting, since architecture is derived from painting and painting from points/signs.
Furthermore, architecture can also be built according to the given signs. In Alberti’s words, “the whole art of buildings consists in the design (lineamenti), and in the structure”, and by lineamenti he means the ability of architects to find “proper places, determinate numbers, just proportion and beautiful order” for their constructions. It follows that, if buildings are to be identical to their design, there must first be “determinate numbers” to convey architects’ visions by digital means – as in De statua (Fig. 2). Also, in translating the design into buildings, these numbers and proportions should remain undistorted as they are placed in actual places – places studied and measured by digital means, as in Descriptio Urbis Romae (Fig. 2).
Although the Albertian design process reflects the spirit of the mechanical age in its insistence on identical reproduction, it can be argued that his pursuit of precise copying was also shaped by his pre-modern digital inventions for managing data. What signs/points mean to architecture for Alberti can therefore be compared to what bits mean to information for Negroponte: the whole is composed of these elements and can be retrieved from them. Ideally, this translation process can be achieved by means of digitalisation.
Yet it is obvious that the last mile for Alberti was vastly longer than that for Negroponte. As Giorgio Vasari noted in the case of the Servite Church of the Annunziata, although Alberti’s drawings and models were employed for the construction of the rotunda, the result turned out to be unsatisfactory, with the arches of the nine chapels falling backwards from the tribune due to construction difficulties. Likewise, in the loggia on the Via della Vigna Nuova, his initial plan to build semi-circular vaults was aborted because the shape could not be realised on site. These two cases suggest that the allographic design process – employing precise measurement and construction – which heralded modern digital modelling software and 3D-printing technologies, was deeply problematic in Alberti’s time.
This problem was recognised by Alberti himself in his De re aedificatoria, when he wrote that to be “a wise man”, one cannot stop in the middle or at the end of one’s work and say, “I wish that were otherwise”.  In Alberti’s opinion, this problem can be offset by making “real models of wood and other substances”, as well as by following his instruction to “examine and compute the particulars and sum of your future expense, the size, height, thickness, number”, and so on.  While models can be completed without being exactly precise, architectural drawings should achieve the exactness measured “by the real compartments founded upon reason”.  According to these descriptions, the design process conceived by Alberti can be summarised as Figure 3.
If, as previously discussed, architecture and its context can be viewed as an assembly of points and signs, the Albertian design process can be compared to how these data are collected, analysed and judged until the process reaches the “good to print” point – the point at which architects exit and construction begins. Nonetheless, what Vasari unveiled is that the collection, analysis and execution of data can fail due to technological constraints, and this failure impedes architects from making a sensible judgement. These “technological constraints” are what I consider to be “the last mile” that runs across the Albertian design process. As Vasari added, many of these technological limitations were surmounted at the time with the assistance of Salvestro Fancelli, who realised Alberti’s models and drawings, and a Florentine named Luca, who was responsible for the construction process. Regardless of these efforts, Alberti maintained that only people involved in intellectual activities – especially mathematics and painting – are architects, the opposite of craftsmen. Consequently, the challenges of confronting “the last mile” are removed from architects’ responsibilities through this ostensibly neutral design process, narrowing the scope of who is eligible to be called an architect. The marginalisation of artisanal activities – those of model makers, draughtsmen and craftsmen – is consistent with attributing the laborious last mile of data collection, analysis and execution – measuring, model making, constructing – exclusively to their domain.
While the division of labour is necessary for architecture, as John Ruskin argued, it becomes “degraded and dishonourable” when manual work is valued less than intellectual work. For this reason, Ruskin praised Gothic architecture for the freedom it granted craftsmen to exercise their own talents. Such freedom, however, can only be expected if the last mile is narrowed to the extent that, through digitalisation/automation, people can be both architects and craftsmen at the same time. Or can it?
Fun Palace: At the Turn of the Information and Post-Information Age
Whilst the Albertian allographic mode of designing architecture has exerted a profound impact on the architectural discipline, through subsequent changes to the ways architects have been trained – from the site to the academy – this ambition of separating design from building was not fulfilled, nor even agreed upon among architects, in the second half of the 20th century. Moreover, the information age, founded on scale, had limited influence on architectural history, beyond bringing about a new functional area – the control room. Architecture’s first encounters with the digital revolution after Alberti’s pre-modern technologies can be traced back to the 1960s, when architects envisaged futuristic cybernetic-oriented environments. Unlike Alberti’s emphasis on the identicality of information – the information per se – this time, digitalisation and information in architecture conveyed a rather different message.
Gordon Pask defined cybernetics as “the field concerned with information flows in all media, including biological, mechanical, and even cosmological systems”. By emphasising the flow of data – rather than the information per se – cybernetics distinguishes itself in two respects. Firstly, it is characterised by attempts at reterritorialisation – breaking down the boundaries between biological organisms and machines, between observers and systems, and between observers, systems and their environments, across its different phases of development – categorised respectively as first-order cybernetics (1943-1960), second-order cybernetics (1960-1985) and third-order cybernetics (1985-1996).
Secondly, while data and information become secondary to their flow, catalysed by technologies and mixed realities, cybernetics is also typified by the construction of frameworks. The so-called framework was initially perceived as a system for classifying all machines; later, after computers became more widely available and powerful, it began to be recognised as the computational process. This thinking also leads to Stephen Wolfram’s assertion that the physical reality of the whole universe is generated by computational processes and is itself a computational process. Herein lies the fundamental difference between the Albertian paradigm and cybernetics: the former is based on mathematical equations, while the latter attempts to understand the world as a framework/computation. Briefly, in cybernetic theory, information per se is subordinate to the flow of information, and this flow can in turn be subsumed into the framework, later known as the computational process (Fig. 4).
In Cedric Price’s Fun Palace, this hierarchical order resulted in what Isozaki described as “erasing architecture into system” after its partial completion (Fig. 5). Such an erasure of architecture was rooted in the conceptual process, since the cybernetics expert in charge of the Fun Palace was Gordon Pask, who founded his theory and practice on second-order cybernetics. This is especially so considering that one major feature of second-order cybernetics is what Maturana and Varela termed “allopoiesis” – a process of producing something other than the system’s original components – making it understandable that, if the system is architecture, it would generate something other than architecture. In the case of the Fun Palace, it was presupposed that architecture is capable of generating social activities, and that architects can become social controllers. More importantly, Cedric Price rejected all that is “designed”, making only sketches of indistinct elements, diagrams of forces, and functional programs, rather than architectural details. All these ideas, highlighting the potential of regarding architecture as the framework of computing – in contrast to seeing architecture as information – rendered the system more pronounced and set architecture aside.
By rejecting architecture as pre-designed, Price and Littlewood strove to problematise the conventional paradigm of architectural authorship. They highlighted that the first and foremost quality of the space should be its informality, and that “with informality goes flexibility”. This envisages user participation by rejecting fixed interventions by architects, such as permanent structures or anchored teak benches. In this regard, flexibility is no longer positioned as a trait of the building but of its use, encouraging users to appropriate the space. As a result, it delineates a scenario of “the death of the author” in which buildings are no longer viewed as objects by architects, but as bodily experiences by users – architectural authorship is shared between architects and users.
However, it would be questionable to claim the anonymity of architectural authorship – anonymous in the sense of “the death of the author” – on the basis of the insignificant traditional architectural presence in this project, as Isozaki did. To begin with, Isozaki himself remarked that, in its initial design, the Fun Palace would have been “bulky”, “heavy”, and “lacking in freedom”, indicating the deficiencies of transportation and construction technologies at the time. Apart from the last mile to construction, as Reyner Banham explained, if the Fun Palace’s vision of mass-participation were to be accomplished, three conditions had to be met – skilful technicians, computer technologies ensuring interactive experiences and programmable operations, and a secure source of electricity connected to the state grid. While the last two concerns relate to technological and infrastructural constraints, the need for technicians suggests that, despite its claims, this project was not a fully automated one. The necessary involvement of human actors to assist this supposedly automated machine is further confirmed by Price and Littlewood’s account that “the movement of staff, piped services and escape routes” would be contained within “stanchions of the superstructure”. Consequently, if architects can extend their authorship by translating elements of indeterminacy into architectural flexibility, and users can be involved by experiencing and appropriating the space, it would be problematic to leave the authorship of these technicians unacknowledged and confine them within service pipes.
The authorship of the Fun Palace is further complicated when the content of its program is scrutinised. Price and Littlewood envisaged that people’s activities would feed into the system, and that decisions would be made according to this information. During this feed-in and feedback process, human activities would be quantified and registered in a flow chart (Fig. 6). However, the hand-written list of proposed activities in Figure 6 shows that human engagement is inseparable from the ostensibly automated flow chart. The arrows and lines mask the human labour essential for observing, recognising, and classifying human activities. These tasks are the last mile of machine learning, which still requires heavy human participation even in the early 21st century.
For instance, when the artificial intelligence project ImageNet was developed in 2007 to recognise and identify the main object in pictures, its developers found it impossible to increase the system’s accuracy by developing the AI alone (and only assisting it when it failed). In the end, they improved the accuracy of ImageNet’s algorithms by finding a “gold standard” for labelling the object – not through the development of the AI itself, but by employing 49,000 on-demand workers from the online outsourcing platform MTurk to perform the labelling. This example suggests that, if the automation promised by the Fun Palace were to be achieved, it would likely require more than the involvement of architects, users, and technicians. At the time of the Fun Palace’s original conception, the attempt went unfulfilled owing to the limitations of computing technologies. Were such an attempt made in the 2020s, architectural authorship would likely be shared among architects, users, technicians, and ghost workers from platforms such as MTurk.
Returning to the topic of cybernetics: whilst cybernetic theories tend to redefine the territories of the architectural system by including what was previously outside it – machines, observers, adaptive environments – the example of the Fun Palace has shown that this process of blurring boundaries would not be possible without human assistance, at least initially. The flow of information between these spheres requires human intervention to make the process feasible and comprehensible because, in essence, “the information source of machine learning (whatever its name: input data, training data or just data) is always a representation of human skills, activities and behaviours, social production at large”.
Houses of Mass-Customisation: In the Post-information Age
Although cybernetic theories have influenced architectural discourse, metaphorically or practically, in multiple ways – from Metabolism and Archigram to Negroponte and Cedric Price – such impact diminished after the 1970s, in parallel with the near-total banishment of cybernetics as an independent discipline in academia. After a long hibernation during “the winter of artificial intelligence”, architecture’s next encounter with digital revolutions happened in the 1990s. It was triggered by the increasing popularity and affordability of personal computers – contrary to the expectations of cybernetics engineers, who back in the 1960s had dreamt that computers would increase in both power and size. These distinctive material conditions led to the underlying difference between second-order cybernetics in the 1960s and architecture’s first digital turn in the 1990s. I contend that this distinction can be explained by comparing Turing’s universal machine with Deleuze’s notion of the “objectile”.
As Stanley Mathews argued, the Fun Palace works in the same way as the universal machine. The latter, a precursor of modern electronic computers, can function as different devices – as a typewriter, a drawing board, or another machine – according to the different codes it receives (Fig. 7). By comparison, “objectile” connotes a situation in which a series of variant objects is produced from shared algorithms (Fig. 8). These products are so-called “non-standard series”, defined by their variance rather than their form.
While the universal machine seems to require ever more power to support its every change – an infinite one-dimensional tape on which programmers can mark the symbols of any instruction, in order to claim its universality – non-standard production can operate on a smaller scale and in less demanding environments. The emphasis on variance in non-standard production also indicates a shift of attention from the “process” underscored by second-order cybernetics towards the products of particular parametric models. When the latter is applied to architecture, the physical building regains its significance as the variable product.
This does not, however, mean a total cut-off between cybernetics and non-standard production. Since human-machine interaction is crucial for customising according to users’ input, I maintain that mass-customisation reconnects architecture with first-order cybernetics whilst resisting the notions of chaos and complexity intrinsic to second-order cybernetics.
Such a correlation can be justified by comparing two examples. First, the visionary project Flatwriter (1967) by the Hungarian architect Yona Friedman proposed a scenario in which users could choose their preferred apartment plan from several patterns of spatial configuration, location, and orientation. Based on their preferences, they would receive optimised feedback from the system (Fig. 9). This optimisation process would consider issues of access to the building, environmental comfort, lighting, communication, and so on. Given that it rejects chaos and uncertainty by adjusting users’ selections towards certain patterns of order and layout, this user-computer interaction system is essentially an application of first-order cybernetics, as Yiannoudes argued. Contemporary open-source architectural platforms are based on the same logic. As the founder of WikiHouse argued, since the target group of mass-customisation is the 99 per cent who are constantly overlooked by the normative production of buildings after the retreat of state intervention, designing “normal” environments for them is the primary concern – transgression and disorder should be set aside. As Figure 10 illustrates, similarly to Flatwriter, WikiHouse would in theory pre-set design rules and offer design proposals according to the calculations of a parametric model. These rules would follow a “LEGO-like system”, producing designs by arranging and composing standard types or systems. Both Flatwriter’s optimisation and WikiHouse’s “LEGO-like system” pursue design in accordance with patterns, discouraging chaotic results.
Nevertheless, neither Flatwriter nor WikiHouse has achieved what was supposed to be an automatic process of using parametric models to generate a variety of designs. For Flatwriter, the last mile of automation can be ascribed to the unavailability of computers capable of performing the calculations or processing the images. For WikiHouse, the project has not yet fulfilled its promise of developing algorithms for design rules that resemble how the “LEGO blocks” are organised. Specifically, at the current stage, the plans, components and structures of WikiHouse are designed in SketchUp by hand. The flexibility granted to users is achieved by grouping plywood lumber into components and allowing users to duplicate them (Fig. 11). Admittedly, if users are proficient in SketchUp, they could conceivably customise their WikiHouse on demand – but that would go against the promise of democratising buildings through open-source platforms.
Consequently, the last mile of automation again causes a conundrum of architectural authorship. Firstly, in both cases, never mind “the death of the author” – it appears that there is no author to be identified at all. One could argue that this signals a democratic spirit, anonymising the erstwhile Howard Roark-style architect and substituting a “creative common”. Nonetheless, it must be cautioned that such substitution takes time, and during this time architects are obliged to step in when automation fails. To democratise buildings is not to end architects’ authorship over architecture but, conceivably for a long time, to become what Ratti and Claudel called “choral architects”, standing at the intersection of top-down and bottom-up and orchestrating the transition from the information age of scale to the post-information age of collaboration and interactivity. Although projects with similar intentions of generating design and customising housing through parametric models – such as Intelligent City and Nabr – may prove more mature in their algorithmic processes, architects are still required to coordinate across extensive sectors – clients’ inputs, design automation, prefabrication, logistics, and construction. Architectural authorship in this sense is not definitive but relational, carrying multitudes of meanings and involving multiplicities of agents.
In addition, it would be inaccurate to claim architectural authorship for the user, even though these projects all prioritise users’ opinions in the design process. By hailing first-order cybernetics while rejecting the second order, advocating order while disapproving of disorder, they risk the erasure of architectural authorship – just as those who play with LEGO have no authorship over the brand, to extend the metaphor of WikiHouse’s “LEGO-like system”. This is especially so because the digital turn in technology does not guarantee a cognitive turn in thinking. Assuming that the capitalist character of production will not change, technological advancements are likely to be appropriated by corporate and state power, whether by means of monopoly or censorship.
This erasure of human agency should be further elucidated in relation to the suppression of chaos in these systems. As Robin Evans explained, there are two ways to address chaos: (1) preventing humans from creating chaos by organising the humans themselves; and (2) limiting the effects of a chaotic environment by organising the system.  While Flatwriter and WikiHouse opt for the former at the expense of diminishing human agency, it is necessary to reinvite observers and chaos as integral parts of a system moving towards mass-customisation and mass-collaboration (Fig. 12).
For Walter Benjamin, “the angel of history” moves into the future with its face turned towards the past, where wreckage is piled upon wreckage.  For me, addressing the paradox of “the last mile” in the history of architectural digitalisation is such a backward gaze, one that may provide a different angle from which to look into the future.
This article has discussed three moments in architectural history when technology failed to live up to the expectation of full automation or digitalisation. Such failure is where “the last mile” lies. I employ “the last mile” as a perspective from which to scrutinise architectural authorship at these moments of digital revolution. Before the information age, the Albertian notational system can be regarded as one of the earliest attempts to digitalise architecture. Alberti’s insistence on identical copying between designers’ drawings and buildings produced the divide between architects as intellectuals and artisans as labourers. Yet this allographic mode of architectural authorship was not widely accepted even into the late twentieth century.
At the turn of the information and post-information ages, Cedric Price’s Fun Palace was another attempt by architects to respond to the digital revolution in the post-war era. It was influenced by second-order cybernetics theories that focused on the flow of information and the computational process. The building was deemed merely a catalyst, and architectural authorship was shared between architects and users. Yet by examining how the Fun Palace failed in the last mile, I have argued that this authorship should also be attributed to the technicians and ghost workers assisting the computational processes behind the scenes.
Finally, I analysed two case studies of open-source architectural platforms established for mass-customisation. By comparing Flatwriter of the cybernetics era with WikiHouse of the post-information age, I cautioned that both systems hollow out architectural authorship by excluding users and discouraging acts of chaos. By studying how these systems fail in the last mile, I also position architects as “choral architects” who mediate between the information and post-information ages. Architectural authorship in the age of mass-customisation and mass-collaboration should consequently be regarded as relational, involving actors from multiple positions.
- Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (New York: Houghton Mifflin Harcourt Publishing Company, 2019).
- Gray and Suri.
- Gray and Suri.
- Mario Carpo, The Alphabet and the Algorithm (London: The MIT Press, 2011), p. 22.
- Carpo, The Alphabet and the Algorithm, p. 22.
- Carpo, The Alphabet and the Algorithm, pp. 22–23.
- Mario Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, MA: The MIT Press, 2017), pp. 131, 140.
- Antoine Picon, ‘From Authorship to Ownership’, Architectural Design, 86.5 (2016), pp. 39–40.
- Picon, ‘From Authorship to Ownership’, pp. 39 & 41.
- Philip F. Yuan and Xiang Wang, ‘From Theory to Praxis: Digital Tools and the New Architectural Authorship’, Architectural Design, 88.6 (2018), 94–101 (p. 101) <https://doi.org/10.1002/ad.2371>.
- ‘“The Last Mile” An Exciting Play’, New Leader with Which Is Combined the American Appeal, 10.18 (1930), 6; Benjamin B. Ferencz, ‘Defining Aggression–The Last Mile’, Columbia Journal of Transnational Law, 12.3 (1973), 430–63; John Osborne, ‘The Last Mile’, The New Republic (Pre-1988) (Washington, 1980), 8–9.
- Donald F. Burnside, ‘Last-Mile Communications Alternatives’, Networking Management, 1 April 1988, 57-.
- Mikko Punakivi, Hannu Yrjölä, and Jan Holmström, ‘Solving the Last Mile Issue: Reception Box or Delivery Box?’, International Journal of Physical Distribution and Logistics Management, 31.6 (2001), 427–39 <https://doi.org/10.1108/09600030110399423>.
- Gray and Suri, p. 12.
- Gray and Suri, p. 12.
- Matteo Pasquinelli and Vladan Joler, ‘The Nooscope Manifested: AI as Instrument of Knowledge Extractivism’, 2020, pp. 1–23 (p. 19) <https://doi.org/10.1007/s00146-020-01097-6>.
- Gray and Suri, pp. 12 & 71.
- Carpo, The Second Digital Turn, pp. 9, 18 & 68.
- Carpo, The Second Digital Turn, pp. 5, 18 & 68.
- James Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (London: Harvard University Press, 1986), p. 295.
- Hamid R. Ekbia and Bonnie Nardi, Heteromation, and Other Stories of Computing and Capitalism (Cambridge, Massachusetts: The MIT Press, 2017), p. 25.
- Ekbia and Nardi, pp. 25–26.
- Michael L. Dertouzos, ‘Individualized Automation’, in The Computer Age: A Twenty-Year View, ed. by Michael L. Dertouzos and Joel Moses, 4th edn (Cambridge, Massachusetts: The MIT Press, 1983), p. 52.
- Ekbia and Nardi, p. 26.
- Antoine Picon, Digital Culture in Architecture: An Introduction for the Design Professions (Basel: Birkhäuser, 2010), p. 16.
- Beniger, p. 433.
- Picon, Digital Culture in Architecture, pp. 24–26.
- Nicholas Negroponte, Being Digital (New York: Vintage Books, 1995), pp. 11 & 16.
- Negroponte, pp. 163–64.
- Carpo, The Alphabet and the Algorithm, p. 12.
- Carpo, The Alphabet and the Algorithm, pp. 54–55.
- Carpo, The Alphabet and the Algorithm, p. 26.
- Leon Battista Alberti, On Painting, trans. by Rocco Sinisgalli (Cambridge: Cambridge University Press, 2011), p. 45.
- Alberti, On Painting, p. 23.
- Leon Battista Alberti, The Ten Books of Architecture (Toronto: Dover Publications, Inc, 1986), p. 1.
- Carpo, The Alphabet and the Algorithm, p. 27.
- ‘Architectural Intentions from Vitruvius to the Renaissance’ [online] <https://f12arch531project.files.wordpress.com/2012/10/xproulx-4.jpg>; ‘Alberti’s Diffinitore’ <http://www.thesculptorsfuneral.com/episode-04-alberti-and-de-statua/7zf3hfxtgyps12r9igveuqa788ptgj> [accessed 23 April 2021].
- Giorgio Vasari, The Lives of the Artists, trans. by Julia Conaway Bondanella and Peter Bondanella (Oxford: Oxford University Press, 1998), p. 182.
- Vasari, p. 181.
- Alberti, The Ten Books of Architecture, p. 22.
- Alberti, The Ten Books of Architecture, p. 22.
- Alberti, The Ten Books of Architecture, p. 22.
- Vasari, p. 183.
- Mary Hollingsworth, ‘The Architect in Fifteenth-Century Florence’, Art History, 7.4 (1984), 385–410 (p. 396).
- Adrian Forty, Words and Buildings: A Vocabulary of Modern Architecture (New York: Thames & Hudson, 2000), p. 138.
- Forty, p. 138.
- Forty, p. 137; Carpo, The Alphabet and the Algorithm, p. 78.
- Picon, Digital Culture in Architecture, p. 20.
- Mario Carpo, ‘Myth of the Digital’, Gta Papers, 2019, 1–16 (p. 3).
- N. Katherine Hayles, ‘Cybernetics’, in Critical Terms for Media Studies, ed. by W.J.T. Mitchell and Mark B.N. Hansen (Chicago and London: The University of Chicago Press, 2010), p. 145.
- Hayles, p. 149.
- Hayles, pp. 149–50.
- Socrates Yiannoudes, Architecture and Adaptation: From Cybernetics to Tangible Computing (New York and London: Taylor & Francis, 2016), p. 11; Hayles, p. 150.
- Hayles, p. 150.
- Stephen Wolfram, A New Kind of Science (Champaign: Wolfram Media, Inc., 2002), pp. 1, 5 & 14.
- Arata Isozaki, ‘Erasing Architecture into the System’, in Re: CP, ed. by Cedric Price and Hans-Ulrich Obrist (Basel: Birkhäuser, 2003), pp. 25–47 (p. 35).
- Yiannoudes, p. 29.
- Yiannoudes, p. 14.
- Stanley Mathews, ‘The Fun Palace as Virtual Architecture: Cedric Price and the Practices of Indeterminacy’, Journal of Architectural Education, 59.3 (2006), 39–48 (p. 43); Yiannoudes, p. 26.
- Isozaki, p. 34; Yiannoudes, p. 50.
- Mathews, p. 47.
- Cedric Price and Joan Littlewood, ‘The Fun Palace’, The Drama Review, 12.3 (1968), 127–34 (p. 130).
- Price and Littlewood, p. 130.
- Forty, p. 148.
- Jonathan Hill, Actions of Architecture (London: Routledge, 2003), pp. 68–69.
- Isozaki, p. 34.
- Isozaki, p. 35.
- Reyner Banham, Megastructure: Urban Futures of the Recent Past (London: Thames and Hudson, 1976).
- Price and Littlewood, p. 133.
- Forty, pp. 142–48.
- Yiannoudes, p. 29.
- Yiannoudes, p. 31.
- Gray and Suri, pp. 33–34.
- Gray and Suri, p. 34.
- Cedric Price, Fun Palace Project (1961-1985), <https://www.cca.qc.ca/en/archives/380477/cedric-price-fonds/396839/projects/399301/fun-palace-project#fa-obj-309847> [accessed 25 April 2021].
- Pasquinelli and Joler, p. 19.
- Yiannoudes, p. 18; Carpo, ‘Myth of the Digital’, p. 11; Hayles, p. 145.
- Carpo, ‘Myth of the Digital’, pp. 11–13.
- Carpo, ‘Myth of the Digital’, p. 13.
- Mathews, p. 42.
- Yiannoudes, p. 33.
- Carpo, The Alphabet and the Algorithm, p. 99.
- Carpo, The Alphabet and the Algorithm, p. 99.
- Yiannoudes, p. 50.
- Yiannoudes, p. 30.
- Yiannoudes, p. 30.
- Yiannoudes, p. 30.
- Yiannoudes, p. 31.
- Yiannoudes, p. 31.
- Alastair Parvin, ‘Architecture (and the Other 99%): Open-Source Architecture and the Design Commons’, Architectural Design: The Architecture of Transgression, 226, 2013, 90–95 (p. 95).
- Open Systems Lab, ‘The DfMA Housing Manual’, 2019 <https://docs.google.com/document/d/1OiLXP7QJ2h4wMbdmypQByAi_fso7zWjLSdg8Lf4KvaY/edit#> [accessed 25 April 2021].
- Open Systems Lab.
- Open Systems Lab.
- Carlo Ratti and Matthew Claudel, ‘Open Source Gets Physical: How Digital Collaboration Technologies Became Tangible’, in Open Source Architecture (London: Thames and Hudson, 2015).
- ‘An Introduction to WikiHouse Modelling’, dir. by James Hardiman, online film recording, YouTube, 5 June 2014, <https://www.youtube.com/watch?v=qB4rfM6krLc> [accessed 25 April 2021].
- Carlo Ratti and Matthew Claudel, ‘Building Harmonies: Toward a Choral Architect’, in Open Source Architecture (London: Thames and Hudson, 2015).
- Oliver David Krieg and Oliver Lang, ‘The Future of Wood: Parametric Building Platforms’, Wood Design & Building, 88 (2021), 41–44 (p. 44).
- Ratti and Claudel, ‘Building Harmonies: Toward a Choral Architect’.
- Carpo, The Second Digital Turn, p. 162.
- Robin Evans, ‘Towards “Anarchitecture”’, in Translations From Drawings to Building and Other Essays (从绘图到建筑物的翻译及其他文章), trans. by Liu Dongyang (Beijing: China Architecture & Building Press, 2018), p. 20.
- Walter Benjamin, Illuminations: Essays and Reflections (New York: Schocken Books, 2007), p. 12.