
The design research presented here aims to develop a design methodology that can compute an architecture participating in the new digital economy. As technology advances, the world must quickly adapt to each new development. Since the turn of the century, technology has integrated itself into our everyday lives and deeply impacted the way in which we live. T. M. Tsai et al. have defined this relationship as "Online to Offline", or "O2O" for short.[1] O2O describes services that are defined virtually but executed physically, as platform-based companies such as Uber, Airbnb and Groupon do. It allows disruption of the physical world to be enacted from within the digital one, and has significantly affected economies around the world.
Paul Mason argued in PostCapitalism: A Guide to Our Future (2015) that developments in technology and the rise of the internet have driven a decline in capitalism, which is being replaced by a new socio-economic system he calls "Postcapitalism". As Mason describes, "technologies we've created are not compatible with capitalism […] once capitalism can no longer adapt to technological change".[2] Traditional capitalism is being displaced by the digital economy, changing the way products are produced, sold and purchased. There is a new type of good which can be bought or sold: the digital product, which can be copied, downloaded and moved an infinite number of times. Mason states that it is almost impossible to produce a digital product within a capitalist economy, precisely because of this nature. The example he uses is software, which can be modified over time and copied at little to no cost.[3] The original producer cannot recoup their costs as they could with a physical good, so traditional manufacturers lose income from digital products. As digital products proliferate, the economy must adapt.
In The Second Digital Turn (2017), Mario Carpo describes this phenomenon, stating that digital technologies are creating a new economy in which production and transactions are handled entirely algorithmically and, as a result, are no longer time-consuming, labour-intensive or costly. This produces an economy that is constantly changing, adapting to the current state of the context in which it operates. Carpo describes the benefits of the digital economy as follows: "[…] it would appear that digital tools may help us to recreate some degree of the organic, spontaneous adaptivity that allowed traditional societies to function, albeit messily by our standards, before the rise of modern task specialisation."[4]
Computational Machines
It is useful to look at the work of Kurt Gödel and his theorems of mathematical logic, which are the basis of computational logic. His first theorem rests on the notion of "axioms": statements taken as true within a system, from which other truths are proven. The theorem states that "If axioms do not contradict each other and are 'listable' some statements are true but cannot be proved."[5] This means that any system built on mathematical statements, axioms, cannot prove every true statement within it, even if additional axioms are added to the list. From this Gödel derived his second theorem: a system of axioms cannot demonstrate its own consistency.[6] Relating this to programming, axioms can be seen as analogous to code, yet not everything can be proven from within a single system of code.
Alan Turing's work on computable numbers builds on these two theorems of Gödel's. Turing was designing a rigorous notion of effective computability based on the "Turing Machine", which would process any given information according to a set of rules, a programme provided by the user for a specified purpose. The machine is fed an infinitely long tape, divided into squares, containing a sequence of information. The machine "scans" a symbol, "reads" the given rules, "writes" an output symbol, and then moves to the next square. As Turing described, the "read" step refers back to the provided rule set: the machine looks through the rules, finds the entry for the scanned symbol, then follows its instructions. The machine writes a new symbol and moves to a new location, repeating the process until the rule set tells it to halt and deliver an output.[7] Turing's theories laid the foundation for the idea of a programmable machine able to interpret given information based on a given programme.
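The scan-read-write-move cycle can be made concrete in a few lines of code. The following is a minimal, illustrative sketch, not Turing's original notation: the rule table maps a (state, scanned symbol) pair to the symbol to write, the direction to move, and the next state.

```python
# A minimal Turing machine sketch: rules map (state, scanned symbol) to
# (symbol to write, direction to move, next state).
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))                     # sparse tape: square -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, " ")                 # "scan" the current square
        write, move, state = rules[(state, symbol)]  # "read" the rule set
        tape[head] = write                           # "write" the output symbol
        head += 1 if move == "R" else -1             # move to the next square
    return "".join(tape[i] for i in sorted(tape))

# Example programme: overwrite a run of 0s with 1s, halting at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_turing_machine(rules, "000"))              # -> "111 "
```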
When applying computational thinking to architecture, it becomes evident that a problem grounded in the physical requires a kind of physical computation. Comparing the work of John von Neumann with that of Lionel Sharples Penrose allows the difference between a physical computational machine and traditional automata computation to be explored. In his essay 'Von Neumann's Self-Reproducing Automata' (1969), Arthur W. Burks describes von Neumann's idea of automata: the way in which computers think and the logic by which they process data. Von Neumann developed simple computer automata that functioned on the elementary switches "and", "or" and "not", in order to explore how automata could be created that resemble natural automata, such as cells and a cellular nervous system. This makes the process highly organic, and with it comes the ability to compute using physical elements and physical data. Von Neumann theorised a kinetic computational machine that would contain more elements than the standard automata, functioning in a simulated environment. As Burks describes, the elements are "floating on the surface, […] moving back and forth in random motion, after the manner of molecules of a gas."[8] Von Neumann, Burks states, used this model for "the control, organisational, programming, and logical aspects of both man-made automata […] and natural systems."[9]
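In simulation, this gas-like model reduces to elements taking random steps and bonding whenever they drift into contact. A loose sketch of the idea follows; the motion and bonding parameters are assumptions of this text, not von Neumann's construction.

```python
# Kinetic elements drift randomly, like gas molecules, and bond on contact.
import math
import random

random.seed(1)
elements = [{"x": random.uniform(0, 10), "y": random.uniform(0, 10), "group": i}
            for i in range(20)]

def step(elements, bond_radius=0.8):
    for e in elements:                                 # random thermal motion
        e["x"] += random.uniform(-0.3, 0.3)
        e["y"] += random.uniform(-0.3, 0.3)
    for a in elements:                                 # bond pairs that touch
        for b in elements:
            if (a["group"] != b["group"] and
                    math.hypot(a["x"] - b["x"], a["y"] - b["y"]) < bond_radius):
                old = b["group"]
                for e in elements:                     # merge the two groups
                    if e["group"] == old:
                        e["group"] = a["group"]

for _ in range(100):
    step(elements)
print(len({e["group"] for e in elements}), "cluster(s) remain")
```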
However, this poses issues of control, as the set of rules is simple but incomplete. To address this, von Neumann experimented with the idea of cellular automata. Here he constructed a grid of cells, a framework in which events take place, with each cell occupying one of a finite list of states. Each cell's state is related to those of its neighbours: as a cell changes state, it affects the states of its neighbours in turn.[10] This form of automaton is built entirely on a gridded and strictly logical system.
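A cellular automaton needs only a grid, a finite set of states and a neighbourhood rule. The sketch below uses two states and the four-cell von Neumann neighbourhood; the update rule is an arbitrary example chosen for illustration.

```python
# A minimal cellular automaton on a bounded grid of 0/1 states.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the four von Neumann neighbours (edges treated as empty).
            active = sum(grid[r + dr][c + dc]
                         for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= r + dr < rows and 0 <= c + dc < cols)
            # Example rule: a cell is active next step iff exactly one
            # neighbour is active now.
            new[r][c] = 1 if active == 1 else 0
    return new

grid = [[0] * 9 for _ in range(9)]
grid[4][4] = 1                       # a single seed cell
for _ in range(3):
    grid = step(grid)                # the pattern propagates outwards
```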
Von Neumann's concept of kinetic computation was modelled on experiments carried out by Lionel Sharples Penrose in 1957. Penrose's aim was to understand how DNA and cells self-replicate. He built physical machines that connected using hooks, slots and notches. Once connected, the machines would act as a single entity, moving together, forming more connections and creating a larger whole. Penrose experimented with multiple designs for these machines. He began by creating a single shape from wood, with notches at both ends and an angled base that allowed the object to rock from side to side. He placed these objects along a rail, and by moving the rail back and forth the objects interacted and, at certain moments, connected. He designed another object with two identical hooks facing in opposite directions on a hinge. As one object moved into another, the hook would lift and interlock with a notch in the other element; the objects could also be separated. If three of these objects were joined and a fourth interlocked at the end, the chain would split into two equal parts. This enabled Penrose to create a machine that would self-assemble and then, once too large, divide, replicating the behaviour of cellular mitosis.[11] These early physical computing machines operated entirely on kinetic behaviour, encoding behaviours within the design of the machine itself and transmitting data physically.
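The hook-and-split behaviour can be paraphrased as a simple rule: units hook onto a chain one at a time, and a chain of four divides into two chains of two. A schematic sketch of that rule, with chain lengths and names chosen for illustration:

```python
# Penrose-style self-replication reduced to a 1D chain rule: chains grow by
# hooking free units and split into two equal halves on reaching four units.
def simulate(free_units, seed_length=2, split_length=4):
    chains = [seed_length]
    while free_units > 0:
        chains[0] += 1                    # a passing unit hooks onto a chain
        free_units -= 1
        if chains[0] == split_length:     # the fourth unit triggers division
            chains[0] = split_length // 2
            chains.append(split_length // 2)
    return chains

# Ten free units grow one two-unit "machine" into six: every two hooked
# units spawn a new self-contained chain.
print(simulate(free_units=10))            # -> [2, 2, 2, 2, 2, 2]
```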
Experimenting with Penrose: Physical Computation
The images included here document design research that takes Penrose's objects into a physics engine and tests them at a larger scale. By modifying the elements to work in multiple dimensions, patterns and groupings can be achieved that were not accessible to Penrose. Small changes to an element, and to the other elements in the field, affect how elements connect and what types of clusters they form.

In Figure X there is a spiralling hook. Within the simulations the element can grow in size, occupying more area, and it is given either a positive or a negative rotation. Growth in size represents larger architectural elements, which take up more of the given space within the field, leading to a higher density of clustering. The rotation provides control over which elements will hook together: positive rotations hook with positive, and negative with negative, but opposite spins repel each other as they turn.
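This pairing behaviour amounts to a sign-matching rule on each element's spin. A minimal sketch, with attribute names assumed for illustration:

```python
# Spiralling-hook elements carry a size and a spin; matching spins hook,
# opposite spins repel.
from dataclasses import dataclass

@dataclass
class Hook:
    size: float   # larger sizes stand for larger architectural elements
    spin: int     # +1 or -1 rotation

def interact(a: Hook, b: Hook) -> str:
    return "hook" if a.spin == b.spin else "repel"

print(interact(Hook(1.0, +1), Hook(2.5, +1)))   # hook: same direction of spin
print(interact(Hook(1.0, +1), Hook(1.0, -1)))   # repel: opposite spins
```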

Through testing different scenarios, formations begin to emerge, continuously adapting as each object moves. At a larger scale, the ways in which elements interact can be planned for spatially; in larger simulations, groupings can be combined to create larger formations connected through strings of hooked elements. This experimentation leads towards a new form of architecture referred to here as "codividual architecture": a computable architectural space created through the interaction and continuous adaptation of spatial elements. The computation of space occurs when individual spaces fuse together, becoming one new space indistinguishable from its original parts. The process then continues, producing a codividual architecture of constant change and adaptability.
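One simple way to model this fusing, offered as a sketch rather than the project's actual implementation, is a union-find structure: once two spaces fuse they share a single identity, and the boundary between them is no longer recorded anywhere.

```python
# Codividual fusion as union-find: fused spaces become one part, with no
# remaining record of the original boundary between them.
class Codividual:
    def __init__(self, n_spaces):
        self.parent = list(range(n_spaces))

    def find(self, s):
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]   # path compression
            s = self.parent[s]
        return s

    def fuse(self, a, b):
        # After fusing, a and b are indistinguishable parts of one new space.
        self.parent[self.find(a)] = self.find(b)

spaces = Codividual(6)
spaces.fuse(0, 1)
spaces.fuse(1, 2)                        # fused parts keep fusing as one element
print(spaces.find(0) == spaces.find(2))  # True: one space, parts indistinguishable
```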
Codividual Automata
Codividual spaces can be further supported by machine learning, which computes parts at the moment they fuse with other parts: the connection of spaces, the spaces that change, and the way parts act as a single element once fused. This leads to almost scaleless spatial types of infinite variation. Architectural elements move within a given field and, through encoded functions, connect, move, change and fuse. In contrast to von Neumann's proposal, in which the elements move randomly like gaseous molecules, these elements move and join based on an encoded set of rules.
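The contrast with the gas-like sketch above lies in the motion rule: instead of drifting at random, each element seeks out the nearest foreign part before connecting and fusing. A one-dimensional sketch under assumed rules:

```python
# Rule-driven elements: move towards the nearest element of another part,
# then connect and fuse on contact, acting as one part thereafter.
import random

random.seed(2)
elements = [{"x": random.randint(0, 40), "part": i} for i in range(8)]

def step(elements):
    for e in elements:
        others = [o for o in elements if o["part"] != e["part"]]
        if not others:
            continue
        target = min(others, key=lambda o: abs(o["x"] - e["x"]))
        if target["x"] != e["x"]:                     # encoded rule: move
            e["x"] += 1 if target["x"] > e["x"] else -1
    for a in elements:
        for b in elements:
            # encoded rule: connect and fuse once two parts meet
            if a["part"] != b["part"] and abs(a["x"] - b["x"]) <= 1:
                old = b["part"]
                for e in elements:
                    if e["part"] == old:
                        e["part"] = a["part"]         # fused elements act as one

for _ in range(60):
    step(elements)
print(len({e["part"] for e in elements}), "part(s) remain after fusing")
```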

Within this type of system, which merges principles of von Neumann's automata with codividuality, traditional automata and state machines can be radically rethought by giving architectural elements the capacity for decision-making through machine learning. The elements follow a set of given instructions, but also have additional knowledge allowing them to assess the environment in which they are placed. Early experiments, shown here in images of the thesis project COMATA, consisted of orthogonal elements of varying scale, creating larger programmatic spaces designed to overlap and interlock as the elements move. The design allowed the elements to cluster more densely when interlocking than a linear, end-to-end connection would.
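How such decision-making might be learned can be suggested with a toy reinforcement-learning loop; the states, actions and rewards below are assumptions of this text, not COMATA's actual model.

```python
# A toy Q-learning loop: elements learn when to interlock and when to keep
# moving, based on the density of neighbours they observe.
import random

random.seed(0)
ACTIONS = ("move", "interlock")
q_table = {}                    # (observed density, action) -> learned value

def choose(density, epsilon=0.1):
    if random.random() < epsilon:                     # occasionally explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((density, a), 0.0))

def reward(density, action):
    # Interlocking among many neighbours yields the denser clustering the
    # design favours; interlocking in empty space yields nothing.
    return density if action == "interlock" else 0.1

for _ in range(2000):
    density = random.randint(0, 4)                    # neighbours observed
    action = choose(density)
    key = (density, action)
    q_table[key] = q_table.get(key, 0.0) + 0.5 * (reward(density, action)
                                                  - q_table.get(key, 0.0))

# The learned policy: keep moving in empty space, interlock in dense space.
print(choose(0, epsilon=0.0), choose(4, epsilon=0.0))  # -> move interlock
```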

This approach offers a design methodology that takes into consideration not only the internal programme, structure and navigation of elements, but also the environmental factors of the places where they sit. Scale is undefined and unbounded: parts combine into new parts, and new parts are created as the scale grows. Systems adapt to the contexts in which they are placed, creating a continuous changing of space and allowing the digital economics of space to be understood in real time.
[1] T. M. Tsai, P. C. Yang, W. N. Wang, “Pilot Study toward Realizing Social Effect in O2O Commerce Services,” in A. Jatowt et al. (eds), Social Informatics, 8238 (2013).
[2] P. Mason, Postcapitalism: A Guide to Our Future, (Penguin Books, 2016), xiii.
[3] Ibid., 163.
[4] M. Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, Massachusetts: MIT Press, 2017), 154.
[5] P. Millican, Hilbert, Gödel, and Turing [Online] (2019), http://www.philocomp.net/computing/hilbert.htm, last accessed May 2 2019.
[6] Ibid.
[7] A. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 2nd ser., 42 (1937), 231–232.
[8] A. W. Burks, Von Neumann's Self-Reproducing Automata; Technical Report (Ann Arbor: The University of Michigan, 1969), 1.
[9] A. W. Burks, Essay on Cellular Automata, Technical Report (Urbana: The University of Illinois Press, 1970), 5.
[10] A. W. Burks, Essay on Cellular Automata, Technical Report (Urbana: The University of Illinois Press, 1970), 7-8.
[11] L. S. Penrose, “Self-Reproducing Machines,” Scientific American, 200 (1959), 105-114.