We live in a period of unprecedented proliferation of constructed, internally coherent virtual worlds, which emerge everywhere, from politics to video games. Our mediascape is brimming with rich, immersive worlds ready to be enjoyed and experienced, or decoded and exploited. One effect of this phenomenon is that we are now asking fundamental questions, such as what “consensus reality” is and how to engage with it. Another effect is that there is a need for a special kind of expertise that can deal with designing and organising these worlds – and that is where architects possibly have a unique advantage. Architectural thinking, as a special case of visual, analogy-based synthetic reasoning, is well positioned to become a crucial expertise, able to operate on multiple scales and in multiple contexts in order to map, analyse and organise a virtual world, while at the same time being able to introduce new systems, rules and forms to it.
A special case of this approach is something we can name architectural worldmaking, which refers broadly to practices of architectural design that wilfully and consciously produce virtual worlds, and which understand worlds as the main project of architecture. Architects have a unique perspective and could have a say in how virtual worlds are constructed and inhabited, but there is a caveat which revolves around questions of agency, engagement and control. Worldmaking is an approach that learns both from technically advanced visual and cultural formats such as video games and from scientific ways of imaging and sensing, in order to construct new, legitimate, and serious ways of seeing and modelling.
These notions are central to the research seminar “Games and Worldmaking”, first conducted by the author at SCI-Arc in the summer of 2021, which focused on the intersection of games and architectural design and foregrounded systems thinking as an approach to design. The seminar is part of the ongoing Views of Planet City project, in development at SCI-Arc for the Pacific Standard Time exhibition to be organised by the Getty in 2024. In the seminar, we developed the first version of Planet Garden, a planetary simulation game envisioned as both an interactive model of complex environmental conditions and a new narrative structure for architectural worldmaking.
Planet Garden is loosely based on Edward O. Wilson’s “Half-Earth” idea, a scenario in which the entire human population of the world occupies a single massive city and the rest of the planet is left to plants and animals. Half-Earth is an important and very interesting thought experiment, almost a proto-design, a prompt: an idea for a massive, planetary agglomeration of urban matter which could liberate the rest of the planet to heal and rewild.
The question of the game was: how could we actually model something like that? How do we capture all that complexity and nuance, how do we figure out stakes and variables and come up with consequences and conclusions? The game we are designing is a means to model and host hugely complex urban systems which unravel over time, while legibly presenting an enormous amount of information, both visually and through the narrative. As a format, a simulation presents different ways of imaging the world and making sense of reality through models.
The work on game design started as a wide exploration of games and precedents within architectural design and imaging operations, as well as abstract systems that could comprise a possible planetary model. The question of models and modelling of systems comes at the forefront and becomes contrasted to existing architectural strategies of representation.
Mythologizing, Representing and Modelling
Among the main influences of this project were the drawings made by Alexander von Humboldt, whose work is still crucial for anyone with an interest in representing and modelling phenomena at the intersection of art and science. If, in the classical sense, art makes the world sensible while science makes it intelligible, these images are a great example of combining these forms of knowledge. Scientific illustrations, Humboldt once wrote, should “speak to the senses without fatiguing the mind”. His famous illustration of Chimborazo volcano in Ecuador shows plant species living at different elevations, and this approach is one of the very early examples of data visualisation, with an intent of making the world sensible and intelligible at the same time. These illustrations also had a strong pedagogical intent, a quality we wanted to preserve, and which can serve almost as a test of legibility.
The project started with a question of imaging a world of nature in the Anthropocene epoch. One of the reasons it is difficult to really comprehend a complex system such as the climate crisis is that it is difficult to model it, which also means to visually represent it in a legible way which humans can understand. This crisis of representation is a well-known problem in literature on the Anthropocene, most clearly articulated in the book Against the Anthropocene, by T.J. Demos.
We do not yet have the tools and formats of visualising that can fully and legibly describe such a complex thing, and this is, in a way, also a failure of architectural imagination. The standard architectural toolkit is limited and also very dated – it is designed to describe and model objects, not “hyperobjects”. One of the project’s main interests was inventing new modalities of description and modelling of complex systems through the interactive software format, and this is one of the ideas behind the Planet Garden project.
Contemporary representational strategies for the Anthropocene broadly fall into two categories, those of mythologising or objectivising. The first approach can be observed in the work of photographers such as Edward Burtynsky and Louis Helbig, where the subject matter of environmental disaster becomes almost a new form of the aesthetic sublime. The second strategy comes out of the deployment and artistic use of contemporary geospatial imaging tools. As is well understood by critics, contemporary geospatial data visualisation tools like Google Earth are embedded in a specific political and economic framework, comprising a visual system delivered and constituted by the post–Cold War and largely Western-based military-state-corporate apparatus. These tools offer an innocent-seeming picture that is in fact a “techno-scientific, militarised, ‘objective’ image”. Such an image displaces its subject and frames it within a problematic context of neutrality and distancing. Within both frameworks, the expanded spatial and temporal scales of geology and the environment exceed human and machine comprehension and thus present major challenges to representational systems.
Within this condition, the question of imaging – understood here as making sensible and intelligible the world of the Anthropocene through visual models – remains, and it is not a simple one. Within current architectural production, broadly speaking, this topic is mostly treated through the “design fiction” approach. For example, in the work of Design Earth, the immensity of the problem is reframed through a story-driven, narrative approach which centres on metaphor, and where images function as story illustrations, as in a children’s book. Another approach is pursued by Liam Young in the Planet City project, which focuses on video and animation as its main format. In this work, the imaging strategies of commercial science-fiction films take centre stage and serve as anchors for the speculation, which serves a double function: designing a new world and educating a new audience. In both cases, the focus goes beyond design, as these constructed fictions stem from a wilful, speculative exaggeration of existing planetary conditions, producing a heightened state which could trigger a new awareness. In this sense, these projects serve a very important educational purpose, as they frame the problem through the established and accepted visual languages of storybooks and films.
The key to understanding how design fictions operate is precisely in their medium of production: all of these projects are made through formats (collage, storybook, graphic novel, film, animation) which depend on the logic of compositing. Within this logic, the work is made through a story-dependent arrangement of visual components. The arrangement is arbitrary as it depends only on the demands of the story and does not correspond to any other underlying condition – there is no model underneath. In comparison, a game such as, for example, SimCity is not a fiction precisely because it depends on the logic of a simulation: a testable, empirical mathematical model which governs its visual and narrative space. A simulation is fundamentally different from a fiction, and a story is not a model.
This is one of the reasons why it seems important to rethink the concept of design fiction through the new core idea of simulation. In the book Virtual Worlds as Philosophical Tools, Stefano Gualeni traces a lineage of thinking about simulations to Espen Aarseth’s 1994 text called Hyper/Text/Theory, and specifically to the idea of cybertextuality. According to this line of reasoning, simulations contain an element not found in fiction and thus need an ontological category of their own: “Simulations are somewhere between reality and fiction: they are not obliged to represent reality, but they have an empirical logic of their own, and therefore should not be called fictions.” This presents us with a fundamental insight into the use of simulations as the future of architectural design: they model internally coherent, testable worlds and go beyond mere fiction-making into worldmaking proper.
Simulations, games and systems
In the world of video games, there exists a genre of “serious” simulation games, which comprises Maxis’s SimCity and The Sims, as well as other important titles such as Sid Meier’s Civilization and Paradox Interactive’s Stellaris. These games are conceptually very ambitious and extremely complex: they model the evolution of whole societies and civilisations, operate on very long timescales, and consist of multiple nested models that simulate histories, economies and the evolution of different species at multiple scales. One important feature and obligation of the genre is to present a coherent, legible image of the world – to give a face to the immense complexity of the model. The “user interface” elements of these games work together to tell a coherent story, while the game world, rendered in full 3D in real time, provides an immersive visual and aesthetic experience for the player. Unlike almost any other type of software, these interfaces are more indebted to the history of scientific illustration and data visualisation than to the history of graphic design. Games of this type are open-ended and not bound to a single goal, and there is rarely a clear win state.
Another feature of the genre is a wealth of underlying mathematical models, each providing for the emergence of complexity and each carrying its own assumptions and biases. For example, SimCity is well known (and some would say notorious) for its rootedness in Jay Forrester’s Urban Dynamics approach to modelling urban phenomena, which means that its mathematical model delivers very specific urban conditions – and ultimately, a very specific vision of what a city is and could be. One of the main questions in the seminar became how we might update this approach on two fronts: by rethinking the mathematical model, and by rethinking urban assumptions of the conceptual model.
The work of the game designer Will Wright, the main designer behind the original SimCity as well as The Sims and Spore, is considered to be at the origin of simulation games as a genre. Wright has developed a vast body of knowledge on modelling simulations, some of which he presented in his influential 2003 talk at the Game Developers Conference (GDC), titled “Dynamics for Designers”. In this talk, Wright outlines a fully fledged theory of modelling complex phenomena for interactivity, focusing on topics such as “How we can use emergence to model larger possibility spaces with simpler components”. His main points are these: science is a modelling activity, and until now it has used traditional mathematics as its primary modelling method, which runs into limits when dealing with complex, dynamic and emergent systems. Since the advent of the computer, simulation has emerged as an alternative way of modelling. The two are very different: in Wright’s view, maths is a more linear process built on complex equations, while simulation is a more parallel process in which simpler components interact with one another. Wright also discusses stochastic (random probability distribution) and Monte Carlo (“brute force”) methods as examples of the simulation approach.
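The contrast Wright draws can be made concrete with the simplest of the methods he names. The sketch below (a generic illustration, not taken from Wright’s talk) uses Monte Carlo sampling to approximate pi: no equation is solved in closed form; many simple random trials are aggregated instead.

```python
# Monte Carlo ("brute force") estimation: approximate pi by sampling
# random points in the unit square and counting how many fall inside
# the quarter-circle of radius 1.
import random

def estimate_pi(samples=100_000, seed=42):
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi())  # converges towards pi as the sample count grows
```

The estimate sharpens as samples accumulate – complexity emerges from the repetition of a trivially simple component, which is precisely Wright’s point about simulation as a parallel process.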
Wright’s work was the result of a deep interest in how non-linear models are constructed and represented within the context of interactive video games, and his design approach was to invent novel game-design techniques based directly on System Dynamics, a discipline that deals with the modelling of complex, unpredictable and non-linear phenomena. The field has its roots in the cybernetic theories of Norbert Wiener, but it was created and formalised in the mid-1950s by Professor Jay Forrester at MIT, and later developed by Donella H. Meadows in her seminal book Thinking in Systems.
System dynamics is an approach to understanding the non-linear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays. Forrester (1918–2016) was an American computer engineer and systems scientist, credited as the “founding father” of system dynamics. He started by modelling corporate supply chains and went on to model cities by describing “the major internal forces controlling the balance of population, housing and industry within an urban area”, which he claimed could “simulate the life cycle of a city and predict the impact of proposed remedies on the system”. In the book Urban Dynamics, Forrester turned the city into a formula with just 150 equations and 200 parameters. The book was very controversial, as it implied extreme anti-welfare politics and, through its “objective” mathematical model, promoted neoliberal ideas of urban planning.
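The stock-and-flow vocabulary is compact enough to sketch in code. In the toy model below, two stocks (population and housing) are integrated over time, with flow rates computed from the current state so that each stock feeds back on the other. All rates and constants are invented for illustration; they are not Forrester’s actual Urban Dynamics equations.

```python
# A minimal stock-and-flow model: two stocks, four flows, and feedback
# loops linking them. Constants are illustrative assumptions only.

def simulate(years=50, dt=1.0):
    population, housing = 50_000.0, 20_000.0   # stocks
    history = []
    for _ in range(int(years / dt)):
        crowding = population / (housing * 4.0)              # feedback signal
        in_migration = 0.03 * population * max(0.0, 1.5 - crowding)
        out_migration = 0.02 * population * crowding
        construction = 0.04 * housing * max(0.0, crowding - 0.5)
        demolition = 0.01 * housing
        population += dt * (in_migration - out_migration)    # integrate stocks
        housing += dt * (construction - demolition)
        history.append((population, housing))
    return history

history = simulate()
```

Even this toy exhibits the characteristic system-dynamics behaviour: crowding suppresses in-migration and stimulates construction, so the two stocks regulate each other rather than growing independently.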
In another publication, World Dynamics, Forrester presented “World2”, a system dynamics model of our world which was the basis of all subsequent models predicting a collapse of our socio-technological-natural system by the mid-21st century. Nine months after World Dynamics, a report called Limits to Growth was published, which used the “World3” computer model to simulate the consequences of interactions between the Earth and human systems. Commissioned by the Club of Rome, the findings of the study were first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971, and predicted societal collapse by the year 2040. Most importantly, the report put the idea of a finite planet into focus.
The main case study in the seminar was Wright’s 1990 game SimEarth, a life simulation video game in which the player controls the development of a planet. In developing SimEarth, Wright worked with the English scientist James Lovelock, who served as an advisor and whose Gaia hypothesis of planetary evolution was incorporated into the game. Continuing the system dynamics approach developed for SimCity, SimEarth was an attempt to model a scientifically accurate approximation of the entire Earth system through the application of customised system dynamics principles. The game modelled multiple interconnected systems and included realistic feedback between land, ocean, atmosphere and life itself. The game’s user interface even featured a “Gaia Window”, in direct reference to the Gaia theory, which states that life plays an intimate role in planetary evolution and the regulation of planetary systems.
One of the tutorial levels for SimEarth featured a playable model of Lovelock’s “Daisyworld” hypothesis, which postulates that life itself evolves to regulate its environment, forming a feedback loop and making it more likely for life to thrive. During the development of a life-detecting device for NASA’s Viking lander mission to Mars, Lovelock made a profound observation: life tends to increase the order of its surroundings, so studying the atmospheric composition of a planet would provide evidence enough of life’s existence. Daisyworld is a simple planetary model designed to show the long-term effects of coupling and interdependence between life and its environment. In its original form, it was introduced as a defence against the criticism that Lovelock’s Gaia theory of the Earth as a self-regulating homeostatic system would require teleological control rather than being an emergent property. The central premise – that living organisms can have major effects on the climate system – is no longer controversial.
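The feedback loop at the heart of Daisyworld is small enough to sketch directly. The following is a compressed, teaching-style version (after Watson and Lovelock’s 1983 model): white and black daisies grow as a function of their local temperature, and the albedo of the surface they cover feeds back on the planetary temperature. The local-temperature coupling is linearised for brevity, so the numbers are indicative rather than those of the published model.

```python
# A compressed Daisyworld: daisy coverage and planetary temperature
# regulate each other through albedo. Constants are textbook-style
# simplifications, not the published model's exact values.

def daisyworld(luminosity, steps=2000, dt=0.01):
    white, black = 0.01, 0.01                    # area fractions of each daisy
    bare_a, white_a, black_a = 0.5, 0.75, 0.25   # albedos
    S, sigma = 917.0, 5.67e-8                    # insolation, Stefan-Boltzmann
    death = 0.3                                  # daisy death rate
    Te = 0.0
    for _ in range(steps):
        bare = max(0.0, 1.0 - white - black)
        albedo = bare * bare_a + white * white_a + black * black_a
        Te = (S * luminosity * (1.0 - albedo) / sigma) ** 0.25  # planetary T (K)
        Tw = Te + 20.0 * (albedo - white_a)      # white daisies run cooler
        Tb = Te + 20.0 * (albedo - black_a)      # black daisies run warmer
        def growth(T):                           # parabolic growth, optimum ~22.5 C
            return max(0.0, 1.0 - 0.003265 * (295.5 - T) ** 2)
        white += dt * white * (bare * growth(Tw) - death)
        black += dt * black * (bare * growth(Tb) - death)
    return Te, white, black
```

In the classic model, sweeping the luminosity parameter shows the planetary temperature being held near the daisies’ optimum over a wide range of solar outputs – the emergent, non-teleological regulation Lovelock was defending.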
In SimEarth, the planet itself is alive, and the player is in charge of setting the initial conditions as well as maintaining and guiding the outcomes through the aeons. Once a civilisation emerges, the player can observe the various effects, such as the impacts of changes in atmospheric composition due to fossil fuel burning, or the temporary expansion of ice caps in the aftermath of a major nuclear war. SimEarth’s game box came with a 212-page game manual that was at once a comprehensive tutorial on how to play and an engrossing lesson in Earth sciences: ecology, geology, meteorology and environmental ethics, written in accessible language that anyone could understand.
SimEarth, and serious simulation games in general, represent a way in which games can serve a public-education function while remaining a form of popular entertainment. The genre also represents a remarkable validation of the claim that video games can be valuable cultural artifacts. Ian Bogost writes: “This was a radical way of thinking about video games: as non-fictions about complex systems bigger than ourselves. It changed games forever – or it could have, had players and developers not later abandoned modelling systems at all scales in favor of representing embodied, human identities.”
Lessons that architectural design can learn from these games are many and varied, the most important one being that it is possible to think about big topics by employing models and systems while maintaining an ethos of exploration, play and public engagement. In this sense, one could say that a simulation game format might be a contemporary version of Humboldt’s illustration, with the added benefit of interactivity; but as we have seen, there is a more profound, crucial difference – this format goes beyond just a representation, beyond just a fiction, into worldmaking.
As a result of this research, the students in the seminar utilised Unreal Engine to create version one (v.1) of Planet Garden, a multi-scalar, interactive, playable model of a self-sustaining, wind and solar-powered robotic garden, set in a desert landscape. The simulation was envisioned as a kind of reverse city builder, where a goal of the game is to terraform a desert landscape by deploying different kinds of energy-producing technologies until the right conditions are met for planting and the production of oxygen. The basic game loop is based on the interaction between the player and four main resources: energy, water, carbon, and oxygen. In the seminar, we also created a comprehensive game manual. The aims of the project were to learn how to model dynamic systems and to explore how game workflows can be used as ways to address urban issues.
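The basic loop between the player and the four resources can be sketched as follows. The costs, production rates and win threshold here are invented placeholders (the actual balancing lives in the Unreal Engine build); what the sketch shows is the reverse-city-builder structure itself: energy harvests water, water and available carbon allow planting, and plants convert carbon into oxygen until the terraforming condition is met.

```python
# A sketch of a Planet Garden-style resource loop. All numbers are
# illustrative assumptions, not the game's actual balance.

class Garden:
    def __init__(self):
        self.energy = self.water = self.oxygen = 0.0
        self.carbon = 100.0                       # excess atmospheric carbon
        self.turbines = self.condensers = self.plants = 0

    def build(self, what):
        # player actions; turbines are free, everything else has a cost
        if what == "turbine":
            self.turbines += 1
        elif what == "condenser" and self.energy >= 5:
            self.energy -= 5
            self.condensers += 1
        elif what == "plant" and self.water >= 10 and self.carbon >= 10:
            self.water -= 10
            self.plants += 1

    def tick(self):                               # one step of the simulation
        self.energy += 2.0 * self.turbines        # wind/solar production
        self.water += 1.0 * self.condensers       # condensers harvest water
        self.carbon -= 0.5 * self.plants          # plants fix carbon...
        self.oxygen += 0.5 * self.plants          # ...and release oxygen

    def terraformed(self):                        # the win condition
        return self.oxygen >= 20.0 and self.carbon <= 50.0
```

A playthrough alternates `build` and `tick` calls: the player bootstraps energy with turbines, spends it on condensers, spends water on plants, and then watches oxygen rise – a testable consequence of each decision, which is the point of the simulation-as-model approach.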
Planet Garden is projected to grow into a large game for the Getty exhibition: a simulation of a planetary ecosystem as well as a city for 10 billion people. We aim to model various aspects of the planetary city, and the player will be able to operate on multiple spatial sectors and urban scales. The player can explore different ways to influence the development and growth of the city and test many scenarios, but the game will also run on its own, so that the city can exist without direct player input. The game utilises core design principles that relate to system dynamics, evolution, environmental conditions and change. A major point is the player’s input and decision-making process, which influence the outcome of the game. The game will also be able to present the conditions and consequences of this urban thought experiment, as something is always at stake for the player.
The core of the simulation-as-a-model idea is that design should have testable consequences. The premise of the project is not to construct a single truthful, total model of an environment but to explore ways of imaging the world through simulation and to open new avenues for holistic thinking about the interdependence of actors, scales and world systems. If the internet ushered in a new age of billions of partial, identitarian viewpoints, all aggregating into an inchoate world gestalt, is it time to rediscover a new image of the interconnected world?
 For a longer discussion on this, see O. M. Ungers, City Metaphors (Cologne: Buchhandlung Walther König, 2011). For the central place of analogies in scientific modelling, see M. Hesse, Models and Analogies in Science, and Douglas Hofstadter, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking (Basic Books, 2013).
 The term “worldmaking” comes from Nelson Goodman’s book Ways of Worldmaking, and is used here to be distinguished from worldbuilding, a more narrow, commercially oriented term.
 For a great introduction to the life and times of Alexander von Humboldt, see A. Wulf, The Invention of Nature: Alexander von Humboldt’s New World (New York: Alfred A. Knopf, 2015).
 Quoted in H. G. Funkhouser, “Historical development of the graphical representation of statistical data”, Osiris 3 (1937), 269–404.
 T. J. Demos, Against The Anthropocene (Berlin: Sternberg Press, 2016).
 Design Earth, Geostories (Barcelona: Actar, 2019) and The Planet After Geoengineering (Barcelona: Actar, 2021).
 L. Young, Planet City (Melbourne: Uro Publications, 2020).
 For an extended discussion of the simulation as a format, see D. Jovanovic, “Screen Space, Real Time”, Monumental Wastelands 01, eds. D. Lopez and H. Charbel (2022).
 S. Gualeni, Virtual Worlds as Philosophical Tools (Palgrave Macmillan, 2015).
 For an extended discussion on this, see Clayton Ashley, “The Ideology Hiding in SimCity’s Black Box”, https://www.polygon.com/videos/2021/4/1/22352583/simcity-hidden-politics-ideology-urban-dynamics.
 W. Wright, “Dynamics for Designers”, GDC 2003 talk, https://www.youtube.com/watch?v=JBcfiiulw-8.
 D. H. Meadows, Thinking in Systems (White River Junction: Chelsea Green Publishing, 2008).
 Arnaud M., “World2 model, from DYNAMO to R”, Towards Data Science, 2020, https://towardsdatascience.com/world2-model-from-dynamo-to-r-2e44fdbd0975.
 Wikipedia, “System Dynamics”, https://en.wikipedia.org/wiki/System_dynamics.
 J. W. Forrester, Urban Dynamics (Pegasus Communications, 1969).
 K. T. Baker, “Model Metropolis”, Logic 6, 2019, https://logicmag.io/play/model-metropolis.
 I. Bogost, “Video Games Are Better Without Characters”, The Atlantic (2015), https://www.theatlantic.com/technology/archive/2015/03/video-games-are-better-without-characters/387556.
At present, we find ourselves at a critical juncture: the current rate of food production cannot be maintained in the face of the climate threat, and new forms of social organisation have not yet been implemented to solve the problem. This project constitutes a possible response to the conditions we will inevitably soon face if we do not develop sustainable ways of life that promote coexistence between species.
The construction of a new paradigm requires the elimination of current divisions between the concepts of “natural” and “artificial”, and consequently the differentiation of the human from the rest of the planet’s inhabitants. This post-anthropocentric vision will build a new substratum to occupy which will promote the generation of an autarchic ecology based on the coexistence between living and non-living entities.
The thesis operates across three scales. The morphology adopted at each scale is determined by three parameters simultaneously: first, climate control through water performance; second, the material search for spaces that allow coexistence; and lastly, the historical symbolism to which the basilica typology refers.
On a territorial scale, the project consists of the generation of an artificial floodable territory occupied by vermiform palaces which are organised in an a-hierarchical manner as a closed system and take the form of an archipelago.
On the palatial scale, water is manipulated to generate a humidity control system that enables the recreation of different biomes inside the palaces through the permeability of their envelope.
Finally, on a smaller scale, the architecture becomes more organic and flexible, folding in on itself to constitute the functional units of the palaces, which aim for agricultural production, housing needs and leisure; the function of each unit depends on its relationship with water and its need to allow passage and retain it.
The entire project takes form from, on the one hand, the climatic situations that each palace requires to house its specific biome, and, on the other hand, the spatial characteristics required by the protocols that are executed in it. To allow the development of a new kind of ecology, the architecture that houses the new protocols of coexistence will be: agropalatial, a-hierarchical, sequential, stereotomic, and overflowing.
In the following chapters, we will develop in depth the architectural qualities mentioned above.
Post-Anthropocentric Ecologies: Theoretical Framework
We are currently living in the era of the Anthropocene, in which humans are considered a global geophysical force. Human action has transformed the geological composition of the Earth, producing a higher concentration of carbon dioxide and, therefore, global warming. This process began with the first Industrial Revolution, although it was only after 1945 that the Great Acceleration occurred, ensuring our planet’s course towards a less biologically diverse, much warmer and more volatile state. The large-scale physical transformations produced in the environment through extractive practices have blurred the boundaries between the “natural” and the “artificial”.
In Ecology Without Nature, Timothy Morton argues for the need to create ecologies that dismiss the romantic idea of nature as something not yet sullied by human intervention – out of reach today – and that go beyond a simple concern for the state of the planet, strengthening the existing relationships between humans and non-humans.
In this line of thought, we reject the concept of “nature” and consider its ecological characteristics to be reproducible through the climatic intelligence of greenhouses. These ecologies should be based on a principle of coexistence that not only allows but celebrates diversity and the full range of feelings and sensibilities that it evokes.
According to Bernard Tschumi, the relationship between the activities and the shape of the building can be one of reciprocity, indifference, or conflict. The type of relationship is what determines the architecture. In this thesis, morphology is at the service of water performance, hence why the activities that take place inside the agropalaces must redefine their protocols accordingly.
Palaces are large institutional buildings in which power resides. Their formal particularities have varied over time. However, some elements remain constant and can be defined as intrinsic to the concept of a palace, such as its large scale, the number of rooms, the variety of activities which it houses and the ostentation of luxury and wealth.
In the historical study of palaces, we recognised the impossibility of defining them through a single typology. This is because their architecture was inherited from temples, whose differing shapes are linked to how worship and ceremonies are performed. It is therefore possible to deduce that changes in the behaviour of believers will generate new architectural needs.
In the same way that architecture as a discipline has the potential to control how we carry out activities based on the qualities of the space in which they take place, our behaviours also have the power to transform space, since cultural protocols configure the abstract medium on which organisations are designed and standards of normality are set up. The more generic and flexible these spaces are, the longer they will last and the more resilient they will be.
The agropalace carries out a transmutation of power through which it frees itself from the human being as the centre and takes all the entities of the ecosystem as sovereign, understanding cohabitation as the central condition for the survival of the planet and human beings as a species.
The greenhouse typology appears as an architectural solution capable of regulating climatic conditions in places where there was a need to cultivate but the climate was not entirely suitable. Agropalaces can not only incorporate productive spaces but also generate entire ecosystems, becoming an architecture for the non-human.
We take the Crystal Palace as a reference. Designed by Joseph Paxton for the Great Exhibition of 1851 in London, its internal differentiation of the structural module, its height and the shape of its roof generate architectural conditions that shape it as a humidity-controlling container, which allows us to use it as the basis of our agropalatial prototype.
Our prototype based on the Crystal Palace is designed at first as a sequence of cross-sections. Their variables are the width and height of the section, the height and width of the central nave, the slope of the roof, the number of vaults, an infrastructural channel that transports water and, finally, the encounter with the floor. Each of these variables contributes to regulating the amount of water that each biome requires.
The territorial organisation of the agropalaces must be a-hierarchical for coexistence to take place. Cooperation between agropalaces is required for the system to function. This cooperation is based on water exchange from one palace to the other. For this to occur, vermiform palaces must be in a topography prone to flooding, organised in the form of an archipelago.
The prototype project is located in the Baix Llobregat Agrarian Park in Barcelona, which is crossed by the Llobregat river ending up in a delta in the Mediterranean Sea. The Agrarian Park currently grows food to supply to all the neighbouring cities. Our main interest in the site lies in its hydrographic network which is fundamental in the construction of the archipelago since the position of each agropalace depends on its distance to its closest water source.
To create a humidity map that determines the location of the palaces on the territory, we use a generative adversarial network (GAN). A GAN is a machine-learning model in which two neural networks are trained against each other: a generator that produces candidate images and a discriminator that learns to distinguish them from real training examples. Its performance improves as it is supplied with more data.
The GAN is trained on a dataset of 6,000 images, each containing four channels of information in the form of coloured zones. Each channel represents the humidity of a specific biome. The position of the coloured zones is related to the distance to the water sources that each biome requires. The GAN analyses every pixel of the images to learn the patterns of the channels’ positions and to create new possible location maps with emergent hybridisations between biomes.
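The encoding of one such training image can be sketched as follows. Grid size, water-source positions and the distance band assigned to each biome are illustrative assumptions, not the project’s actual dataset specification; the point is only that each channel is a deterministic function of distance to the nearest water source.

```python
# Sketch of one four-channel training image: each channel marks the
# zone of a biome by distance to the nearest water source. All numbers
# are illustrative placeholders.
import math

def humidity_channels(size, water_sources, bands):
    """bands maps biome name -> (near, far) distance range it occupies."""
    def nearest_water(x, y):
        return min(math.hypot(x - wx, y - wy) for wx, wy in water_sources)
    channels = {}
    for biome, (near, far) in bands.items():
        channels[biome] = [
            [1.0 if near <= nearest_water(x, y) < far else 0.0
             for x in range(size)]
            for y in range(size)
        ]
    return channels

channels = humidity_channels(
    size=32,
    water_sources=[(4, 4), (28, 20)],
    bands={"ocean": (0, 4), "rainforest": (4, 10),
           "tundra": (10, 18), "desert": (18, float("inf"))},
)
```

Because the bands partition all distances, every pixel belongs to exactly one biome in the training data; the hybrid zones described above appear only in the GAN’s generated outputs, not in the rule-based inputs.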
The first four biomes are ocean, rainforest, tundra, and desert. Our choice for these extreme ecologies is related to the impact that global warming will have on them and the hypothesis that their hybridisation will produce less hostile and more habitable areas.
We conclude that the hybridisation produced by the AI cannot be replicated by human methods. We therefore consider the AI part of the group of authors, even though its production is later curated, which makes the thesis post-anthropocentric from its conception.
Due to the hybridisation, a gradient of nine biomes and their zones within the territory are recognised in the GAN outputs. These are, from wettest to driest: ocean, wetland, yunga, rainforest, forest, tundra, grassland, steppe, and desert. The wetter palaces will always be located at a shorter distance from the water supply points, while the drier ones will be located closer to the transit networks. The GAN not only expands the range of biomes but also gives us unexpected organisations without violating the previously established rules.
The chosen image is used as a floor plan and allows us to define the palatial limits, which are denoted by changes in colour.
The territory, initially flat, must become a differentiated topography so that the difference in the heights of the palaces eases access to water for those that require greater humidity.
The palaces are linear, but they contort to occupy their place without interrupting the adjoining palaces, following the central axis of the zone granted by the GAN.
This territorial organisation, a-hierarchical, longitudinal and twisted, forms two types of circulation: one aquatic and one dry. The aquatic palaces tend to form closed circuits without specific arrival points: an idle, unstructured circulation designed for admiring the resulting landscape of canyons. The other, arid, runs through the desertic palaces along its axis and joins the existing motorways in the Llobregat, crossing the Oasis.
The protocols of the post-Anthropocene must exist in a stereotomic architecture, a vast and massive territory, almost undifferentiated from the ground.
As mentioned above, our agropalatial prototype is designed as a sequence of cross-sections. Each section constitutes an envelope whose formal characteristics are based on those of the Crystal Palace, modified according to its need to hold water.
The determination of the interior spaces in each section depends on the fluxes of humidity necessary for generating the biome. The functional spaces are the result of the remaining space between the steam columns, the number of points where condensed water overflows towards the vaults, and the size of the central circulation channel.
The variation in organisation according to the needs of each biome creates different amounts of functional spaces, of different sizes and shapes, allowing the protocols to take place inside of them.
The interstices where the fluxes of humidity move are organised in such a way that the forces that travel through the surfaces of the functional spaces between them reach the ground on the sides of the palace, forming a system of structural frames.
The functional spaces in each cross-section are classified into three categories corresponding to the main protocols that take place inside of the agropalaces: production, housing and leisure.
The classification depends on the size, shape, and distance to light and water of each functional space, determining which would be most convenient to house each protocol. Every cross-section contains at least one functional space of each kind.
These two-dimensional spaces are extruded, generating the “permanent” spaces, in which the activities are carried out. These form connections with the “permanent” spaces of the same category of the subsequent cross-section, forming “passage” spaces.
Thus, three unique, long, complex spaces – one for each protocol – run longitudinally through the palaces, in which activities are carried out in an interconnected and dynamic way. The conservation protocol – the biome itself – is the only non-sequential activity, since it is carried out in the interstice between the exterior envelope of the agropalace and the interior spaces.
The need for production has made cities and agricultural areas hyper-specialised devices, making their differences practically irreconcilable. However, we understand that this system is obsolete, which is why it is necessary to emphasise their deep connection and how indispensable they are to each other.
For this reason, agropalaces work through the articulation of different scales and programs, considering the three key pillars on which we must rely to build a new post-anthropocentric way of life – ecological conservation, agricultural production and human occupation – the latter prioritising leisure.
Protocol of Production
From currently available methods, we take hydroponic agriculture as the main means of production, together with aeroponic agriculture since both replace the terrestrial substrate with water rich in minerals.
The architectural organisation that shapes the agricultural protocol in the project is based on a central atrium that allows the water of the biome to condense and be redirected to the floodable platforms that surround it. In each biome, the density of the stalls, their depth, and the size of the central atrium vary in a linear gradient, ranging from algae and rice plantations to soybeans and fruit. The agricultural protocol in the agropalaces manages water passively, by surface condensation and gravity, generating a spiral distribution added to a central circulation that generates landscape while seeking to cultivate efficiently.
Protocol of Housing
In defining the needs for a House, Banham reduces it to an atmospheric situation, with no regard for its form. This dispossession of formal conditions, however, allows us to modify the current housing protocol: to design a house whose shape results from passive climatic manipulation and from the need to generate a variety of spatial organisations that do not restrict the type of social nuclei.
The spatial organisation of the house in the project is built through circulatory axes and rooms. The position of the circulatory axes and the number and size of the rooms vary depending on the biome, this time not based on humidity, but on the type of life that each ecological environment encourages. The height and width of the spaces also vary, generating the collision of rooms and thus allowing the formation of larger spaces or meta-rooms. The protocol of habitation in the agropalaces then allows a wide range of variation in which people are free to choose the form in which they wish to live, temporarily or permanently, individually or in groups.
Protocol of Leisure
Leisure is one of the essential activities of the post-Anthropocene because it frees human beings from their proletarian condition, characteristic of current capitalism, and connects them with the enjoyment of themselves and their surroundings. The leisure protocol in the thesis consists of a series of slabs with variable depths that constitute pools at different levels, interconnected by slides, which are to varying degrees twisted or straight, steep or shallow, and covered or uncovered.
The leisure protocol is based on the behaviour of water, which varies in each biome. The quantity, depth and position of the pools diminish the more desertic the biome that houses them is. In this way, water parks and dry staggered spaces are generated in which all kinds of games and sports take place. In the agropalaces, rather than being relegated to specific times and places, leisure becomes a form of existence itself.
Finally, to achieve coexistence, the architecture developed must be permeable. All the layers that contribute to the complexity of the project exchange fluids – mainly water – with the environment.
Water penetrates each of them, they use it to generate the desired ambient humidity for their biome and the excess then overflows on the roof. The system works sequentially, from the wettest to the driest biomes. Once the former palace overflows its residual water, the succeeding one can use it to its advantage until it eventually overflows again.
Inside every palace, a sequence of overflows on an intra-palatial scale is generated. Humidity enters the agropalace through its internal channel, where it evaporates and rises until it condenses on the surfaces of the functional organs and thus penetrates them to be used in different activities. The residual water evaporates again until it overflows. The process consists of a cyclical system with constant recirculation.
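The inter-palatial cascade, in which each palace retains the water its biome demands and overflows the excess to the next, drier palace, can be sketched as a simple sequential model. The demand figures below are hypothetical placeholders, not values from the project.

```python
# Minimal sketch of the inter-palatial water cascade described above:
# palaces are ordered from wettest to driest biome; each retains what
# its biome demands and overflows the remainder downstream.
# Demand figures are hypothetical, for illustration only.

BIOME_ORDER = ["ocean", "wetland", "yunga", "rainforest", "forest",
               "tundra", "grassland", "steppe", "desert"]
DEMAND = dict(zip(BIOME_ORDER, [30, 20, 15, 12, 8, 6, 4, 3, 2]))

def cascade(supply):
    """Return the water retained by each palace, wettest to driest,
    plus whatever is left over at the end of the sequence."""
    retained = {}
    for biome in BIOME_ORDER:
        used = min(supply, DEMAND[biome])
        retained[biome] = used
        supply -= used          # the excess overflows to the next palace
    return retained, supply

retained, leftover = cascade(supply=100)
```

With a reduced supply, the drier palaces downstream are the first to go without, which mirrors the dependency between palaces that the text describes.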
The functional spaces’ envelopes have perforations in different sizes and positions to allow moisture to dissipate or condense as convenient. The overflowing quality of the system creates communication between the different scales of the architectural system, thus generating inter- and intra-palatial dependency.
Post-Anthropocentric Architecture: Conclusion
The agropalace understands coexistence as a necessary condition for the survival of the planet and human beings as a species. This new typology presents agriculture as the principal tool of empowerment and suggests a paradigm shift in which each society can define its policies for food production, distribution and consumption; meanwhile, it produces ecosystemic habitats with specific microclimatic qualities that allow the free development of all kinds of entities.
Biomatic Artefacts proposes an architecture whose forms do not interrupt the geological substrate but compose it, taking part in the planetary ecology while simultaneously forming smaller-scale, autonomous ecosystems within each palace.
The protocols of today disappear to make room for a single para-protocol which, rather than being carried out in a single, invariable way, exists precisely because it is always different, vast in its spatial, temporal and atmospheric variations. And in its wake, it generates a landscape of canyons and palaces that, in the interplay of reflections and translucency of water and glass, allows us to glimpse the ecological chaos of coexistence within.
We consider that the project lays the foundations for a continuation of ideas on agropalatial architecture and post-anthropocentric architecture, from which all kinds of new formal and material realities will come about.
This paper was developed within the institutional framework of the School of Architecture and Urban Studies of Torcuato Di Tella University as a project thesis, with Lluis Ortega as full-time professor and Ciro Najle as thesis director.
 T. Morton, Hyperobjects: Philosophy and Ecology after the End of the World (Minnesota, USA: University of Minnesota Press, 2013).
 W. Steffen, P. Crutzen, J. McNeill, “The Anthropocene: Are Humans Now Overwhelming the Great Forces of Nature?”, AMBIO: A Journal of the Human Environment (2007), pp 614-621.
 T. Morton, Ecology Without Nature: Rethinking Environmental Aesthetics (Cambridge, USA: Harvard University Press, 2007).
 A. Reeser Lawrence, A. Schafer, “2 Architects, 10 Questions On Program, Rem Koolhaas + Bernard Tschumi” Praxis 8 (2010).
 C. Najle, The Generic Sublime (Barcelona, Spain: Actar, 2016).
 Set of base images with which the GAN trains by identifying patterns and thus learning their behaviours. In our case, the dataset is based on a set of possible biome location maps based on proximity to water sources and highways.
 R. Banham, F. Dallegret, “A Home Is Not a House”, Art in America, volume 2 (1965), pp 70-79.
Data and its visualisation have been an important part of architectural design practice for many years, from data-driven mapping to building information modelling to computational design techniques, and now through the datasets that drive machine-learning tools. In architectural design research, data-driven practices can imbue projects with a sense of scientific rigour and objectivity, grounding design thinking in real-world environmental phenomena.
More recently, “critical data studies” has emerged as an influential interdisciplinary discourse across social sciences and digital humanities that seeks to counter assumptions made about data by invoking important ethical and socio-political questions. These questions are also pertinent for designers who work with data. Data can no longer be used as a raw and agnostic input to a system of analysis or visualisation without considering the socio-technical system through which it came into being. Critical data studies can expand and deepen the practice of working with data, enabling designers to draw on pertinent ideas in the emerging landscape around data ethics. Data visualisation and data-driven design can be situated in more complex creative and critical assemblages. This article draws on several ideas from critical data studies and explores how they could be incorporated into future design and visualisation projects.
Critical Data Studies
The field of critical data studies addresses data’s ethical, social, legal, economic, cultural, epistemological, political and philosophical conditions, and questions the singularly scientific empiricism of data and its infrastructures. By applying methodologies and insights from critical theory, we can move beyond a status quo narrative of data as advancing a technical, objective and positivist approach to knowledge.
Historical data practices have promoted false notions of neutrality and universality in data collection, embedding unintentional bias into data sets. This recognition that data is a political space was explored by Lisa Gitelman in “Raw Data” Is an Oxymoron, in which she argues that data does not exist in a raw state, such as a natural resource, but is always undergoing a process of interpretation. The rise of big data is a relatively new phenomenon, and data harvested from extensive and nuanced facets of people’s lives signifies a shift in the stakes of power asymmetries and ethics. Critical data studies ties together this relationship between data and society.
The field emerged from the work of Kate Crawford and danah boyd, who in 2012 formulated a series of critical provocations given the rise of big data as an imperious phenomenon, highlighting its false mythologies. Rob Kitchin’s work has appraised data and data science infrastructures as a new social and cultural territory. Andrew Iliadis and Federica Russo use the theory of assemblages to capture the multitude of ways that already-composed data structures inflect and interact with society. These authors all seek to situate data in a socio-technical framework from which it cannot be abstracted. For them, data is an assemblage, a cultural text, and a power structure that must be available for interdisciplinary interpretation.
Data Settings and Decolonisation
Today, with the increasing access to large data sets and the notion that data can be extracted from almost any phenomena, data has come to embody a sense of agnosticism. Data is easily abstracted from its original context, ported to somewhere else, and used in a different context. Yanni Loukissas is a researcher of digital media and critical data studies who explores concepts of place and locality as a means of critically working with data. He argues that “data have complex attachments to place, which invisibly structure their form and interpretation”. Data’s meaning is tied to the context from which it came. However, the way many people work with data today, especially in an experimental context, assumes that the origin of a data set does not hold meaning and that data’s meaning does not change when it is removed from its original context.
In fact, Loukissas claims, “all data are local”, and the reconsideration of locality is an important critical data tactic. Asking where data came from, who produced it, when, and why, what instruments were used to collect it, what kind of conditioned audience was it intended for, and how might these invisible attributes inform its composition and interpretation are all questions that reckon with a data set’s origin story. Loukissas proposes “learning to analyse data settings rather than data sets”. The term “data set” evokes a sense of the discrete, fixed, neutral, and complete, whereas the term “data setting” counters these qualities and awakens us to a sense of place, time, and the nuances of context.
From a critical data perspective, we can ask why we strive for the digital and its data to be so place-agnostic, a totalising system of norms that erases a myriad of cultures. The myth of placelessness in data implies that everything can be treated equally by immutable algorithms. Loukissas concludes, “[o]ne reason universalist aspirations for digital media have thrived is that they manifest the assumptions of an encompassing and rarely questioned free market ideology”. We should insist upon data’s locality and its multiple and specific origins to resist such an ideology.
“If left unchallenged, digital universalism could become a new kind of colonialism in which practitioners at the ‘periphery’ are made to conform to the expectations of a dominant technological culture.
If digital universalism continues to gain traction, it may yet become a self-fulfilling prophecy by enforcing its own totalising system of norms.”
Loukissas’ incorporation of place and locality into data practices comes from the legacy of postcolonial thinking. Where Western scientific knowledge systems have shunned those of other cultures, postcolonial studies have sought to illustrate how all knowledge systems are rooted in local- and time-based practices and ideologies. For educators and design practitioners grappling with how to engage in the emerging discourse of decolonisation in pedagogy, data practices and design, Loukissas’ insistence on reclaiming provenance and locality in the way we work with abstraction is one way into this work.
Situated Knowledge and Data Feminism
Feminist critiques of science have also invoked notions of place and locality to question the epistemological objectivity of science. The concept of situated knowledge comes from Donna Haraway’s work to envision a feminist science. Haraway is a scholar of Science and Technology Studies and has written about how feminist critiques of masculinity, objectivity and power can be applied to the production of scientific knowledge to show how knowledge is mediated by and historically grounded in social and material conditions. Situated knowledge can reconcile issues of positionality, subjectivity, and their inherently contestable natures to produce a greater claim to objective knowledge, or what Sandra Harding has defined as “strong objectivity”. Concepts of situatedness and strong objectivity are part of feminist standpoint theory. Patricia Hill Collins further proposes that the intersectional marginalised experiences of women and minorities – black women, for example – offer a distinctive point of view and experience of the world that should serve as a source for new knowledge that is more broadly applicable.
How can we take this quality of situatedness from feminist epistemology and apply it to data practices, specifically the visualisation of data? In their book Data Feminism, Catherine D’Ignazio and Lauren Klein define seven principles to apply feminist thinking to data science. For example, principle six asks us to “consider context” when making sense of correlations when working with data.
“Rather than seeing knowledge artifacts, like datasets, as raw input that can be simply fed into a statistical analysis or data visualisation, a feminist approach insists on connecting data back to the context in which they were produced. This context allows us, as data scientists, to better understand any functional limitations of the data and any associated ethical obligations, as well as how the power and privilege that contributed to their making may be obscuring the truth.”
D’Ignazio and Klein argue that “[r]efusing to acknowledge context is a power play to avoid power. It is a way to assert authoritativeness and mastery without being required to address the complexity of what the data actually represent”. Data feminism is an intersectional approach to data science that counters the drive toward optimisation and convergence in favour of addressing the stakes of intersectional power in data.
Design Practice and Critical Data Visualisation
The visualisation of data is another means of interpreting it. Data visualisation is part of the infrastructure of working with data and should also be open to critical methods. Design and visualisation are processes through which data can be treated with false notions of agnosticism and objectivity, or can be approached critically, questioning positionality and context. Data practices that explore creative, speculative and aesthetic-forward techniques can extend and enrich the data artefacts produced. We should therefore critically reflect on the processes and infrastructures through which we design and aestheticise data.
How can we take the concept of situatedness that comes out of critical data studies and deploy it in creative design practice? What representational strategies support thinking through situatedness as a critical data practice? Could we develop a situated data visualisation practice?
The following projects approach these questions using design research, digital humanities and critical computational approaches. They are experiments that demonstrate techniques in thinking critically about data and how that critique can be incorporated into data visualisation. The work also expands upon the visualisation of data toward the visualisation of computational processes and software infrastructure that engineer visualisations. There is also a shift between exploring situatedness as a notion of physical territory toward a notion of socio-political situatedness. The following works all take the form of short films, animations and simulations.
Alluvium
Cinematic data visualisation is a practice of visually representing data. It incorporates cinematic aesthetics, including an awareness of photography’s traditional concerns of framing, motion and focus, alongside contemporary virtual cinematography’s techniques of camera-matching and computer-generated graphics. This process intertwines and situates data in a geographic and climatic environment, retaining the data’s relationship with its source of origin and the relevance that relationship holds for its meaning.
As a cinematic data visualisation, Alluvium presents the results of a geological study on the impact of diverted flood waters on a sediment channel in Death Valley, California. The scenes take as their starting point Dr Noah Snyder and Lisa Kammer’s 2008 study of Gower Gulch, a 1941 diversion of a desert wash that offers an expedited view of geological changes which would normally take thousands of years to unfold, but which have evolved at this site within decades due to the strength of the flash floods and the conditions of the terrain.
Gower Gulch provides a unique opportunity to see how a river responds to an extreme change in water and sediment flow rates, presenting effects that could mimic the impact of climate change on river flooding and discharge. The wash was originally diverted to prevent further flooding and damage to a village downstream; today, it presents us with a microcosm of geological activity. The research paper presents data as historical water flow that can only be measured and perceived retrospectively through the evidence of erosion and sediment deposition at the site.
Alluvium’s scenes are a hybrid composition of film and digitally produced simulations that use the technique of camera-matching. The work visualises the geomorphological consequences of water beyond human-scale perception. A particle animation was developed using accurate topographic models to simulate water discharge over a significant period. Alluvium compresses this timeframe, providing a sense of a geological scale of time, and places the representation and simulation of data in-situ, in its original environment.
In Alluvium, data is rendered more accessible and palpable through the relationship between the computationally-produced simulation of data and its original provenance. The data’s situatedness takes place through the way it is embedded into the physical landscape, its place of origin, and how it navigates its source’s nuanced textures and spatial composition.
The hybridised cinematic style that is produced can be deconstructed into elements of narrative editing, place, motion, framing, depth of field and other lens-based effects. The juxtaposition of the virtual and the real through a cinematic medium supports a recontextualisation of how data can be visualised and how an audience can interpret that visualisation. In this case, it is about geographic situatedness, retaining the sense of physical and material qualities of place, and the particular nuances of the historical and climatic environment.
Death Valley National Park, situated in the Mojave Desert in the United States, is a place of extreme conditions. It has the highest temperature (57° Celsius) and the lowest altitude (86 metres below sea level) to be recorded in North America. It also receives only 3.8 centimetres of rainfall annually, registering it as North America’s driest place. Despite these extremes, the landscape has an intrinsic relationship with water. The territorial context is expressed through the cinematic whilst also connecting the abstraction of data to its place of origin.
For cinematic data visualisation, these elements are applied to the presentation of data, augmenting it into a more sensual narrative that loops back to its provenance. As a situated practice, cinematic data visualisation foregrounds a relationship with space and place. The connection between data and the context from which it was derived is retained, rather than the data being extracted, abstracted, and agnostically transferred to a different context in which site-specific meaning can be lost. As a situated practice, cinematic data visualisation grapples with ways to foreground relationships between the analysis and representation of data and its environmental and local situation.
LA River Nutrient Visualization
Another project in the same series, the LA River Nutrient Visualization, considers how incorporating cinematic qualities into data visualisation can support a sense of positionality and perspective amongst heterogeneous data sets. This can be used to undermine data’s supposed neutrality and promote an awareness of data containing the various concerns and stakes of different groups of people. Visualising data’s sense of positionality and perspective is another tactic to produce a sense of situatedness as a critical data visualisation practice. Whilst the water quality data used in this project appeared the same scientifically, it was collected by different groups: locally organised communities versus state institutions. The differences in why the data was collected, and by whom, carry significance, and the project incorporates that significance into the representational strategy of the data visualisation.
This visualisation analyses nutrient levels, specifically nitrogen and phosphorus, in the water of the Los Angeles River, which testify to pollution levels and portray the river’s overall health. Analysed spatially and animated over time, the data visualisation aims to provide an overview of the available public data, its geographic, seasonal and annual scope, and its limitations. Three different types of data were used: surface water quality data from state and national environmental organisations, such as the Environmental Protection Agency and the California Water Science Center; local community-organised groups, such as the River Watch programme by Friends of the Los Angeles River and citizen science group Science Land’s E-CLAW project; and national portals for remotely-sensed data of the Earth’s surface, such as the United States Geological Survey.
The water quality data covers a nearly-50-year period from 1966 to 2014, collected from 39 monitoring stations distributed from the river’s source to its mouth, including several tributaries. Analysis showed changes in the river’s health based on health department standards, with areas of significantly higher concentrations of nutrients that consistently exceeded Water Quality Objectives.
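The kind of exceedance analysis described above can be sketched minimally as follows. The station names, readings and the Water Quality Objective threshold are invented for illustration; the project itself drew on 39 real monitoring stations and published objectives.

```python
# Hypothetical sketch of an exceedance analysis: the fraction of samples
# per monitoring station that exceed a Water Quality Objective (WQO).
# Stations, readings (mg/L) and the threshold are invented placeholders.

WQO_NITROGEN = 10.0  # hypothetical objective, mg/L

samples = {
    "station_01": [4.2, 11.5, 9.8, 7.0],
    "station_02": [2.1, 3.3, 2.8],
    "station_03": [12.7, 15.2, 11.1],
}

def exceedance_rate(readings, threshold):
    """Fraction of readings above the objective."""
    return sum(r > threshold for r in readings) / len(readings)

rates = {s: exceedance_rate(r, WQO_NITROGEN) for s, r in samples.items()}
# Stations whose readings exceed the objective at least half the time.
hotspots = [s for s, rate in rates.items() if rate >= 0.5]
```

Grouping the same computation by data provenance (community-collected versus state-collected sets) is what allows the comparison of coverage and consistency discussed below.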
The water quality data is organised spatially using a digital elevation model (DEM) of the river’s watershed to create a geo-referenced 3D terrain model that can be cross-referenced with any GPS-associated database. A DEM is a way of representing remotely-captured elevation, geophysical, biochemical, and environmental data about the Earth’s surface. The data itself is obtained by various types of cameras and sensors attached to satellites, aeroplanes and drones as they pass over the Earth.
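Cross-referencing a GPS-associated database with a DEM amounts to an affine mapping from geographic coordinates to raster indices. The sketch below illustrates the idea; the origin, cell size and flat elevation grid are invented assumptions, not the project's actual georeferencing.

```python
import numpy as np

# Sketch of geo-referencing: mapping a GPS coordinate onto a DEM raster
# so that any GPS-tagged water sample can be placed on the terrain model.
# Origin, cell size and the elevation grid are invented for illustration.

ORIGIN_LON, ORIGIN_LAT = -118.60, 34.35   # hypothetical upper-left corner
CELL_DEG = 0.01                            # hypothetical cell size, degrees

dem = np.zeros((100, 100))                 # stand-in elevation grid

def gps_to_cell(lon, lat):
    """Convert a GPS coordinate to the nearest (row, col) DEM cell."""
    col = round((lon - ORIGIN_LON) / CELL_DEG)
    row = round((ORIGIN_LAT - lat) / CELL_DEG)  # latitude decreases downward
    return row, col

row, col = gps_to_cell(-118.25, 34.05)
elevation = dem[row, col]   # terrain height at the sampled location
```

The same transform, run in reverse, is what lets values computed on the raster be reported back at real-world monitoring sites.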
Analysis of the water data showed that the state- and national-organised data sets provided a narrow and inconsistent picture of nutrient levels in the river. Comparatively, the two community-organised data sets offered a broader and more consistent approach to data collection. The meaning that emerged in this comparison of three different data sets, how they were collected, and who collected them ultimately informed the meaning of the project, which was necessary for a critical data visualisation.
Visually, the data was arranged and animated within the 3D terrain model of the river’s watershed and presented as a voxel urban landscape. Narrative scenes were created by animating slow virtual camera pans within the landscape to visualise the data from a more human, low, third-person point of view. These datascapes were post-processed with cinematic effects: simulating a shallow depth of field, ambient “dusk-like” lighting, and shadows. Additionally, the computer-generated scenes were juxtaposed with physical camera shots of the actual water monitoring sites, scenes that were captured by a commercial drone. Unlike Alluvium, the two types of cameras are not digitally matched. The digital scenes locate and frame the viewer within the data landscape, whereas physical photography provides a local geographic reference point to the abstracted data. This also gives the data a sense of scale and invites the audience to consider each data collection site in relation to its local neighbourhood. The representational style of the work overall creates a cinematic tempo and mood, informing a more narrative presentation of abstract numerical data.
In this cinematic data visualisation, situatedness is engaged through the particular framing and points of view established in the scenes and through the juxtaposition of cinematography of the actual data sites. Here, place is social; it is about local context and community rather than a solely geographical sense of place. Cinematic aesthetics convey the “data setting” through a local and social epistemic lens, in contrast to the implied frameless and positionless view with which state-organised data is collected, including remotely-sensed data.
All the water data consisted of scientific measurements of nitrogen and phosphorus levels in the river. Numerically, the data is uniform, but the fact that different stakeholders collected it with different motivations and needs affects its interpretation. Furthermore, the fact of whether data has been collected by local communities or state institutions informs its epistemological status concerning agency, motivation, and environmental care practices.
Context is important to the meaning that the data holds, and the visualisation strategy seeks to convey a way to think about social and political equity and asymmetry in data work. The idea of inserting perspective and positionality into data is an important one. It is unusual to think of remotely-sensed data or water quality data as having positionality or a perspective. Many instruments of visualisation present their artefacts as disembodied. Remotely-sensed data is usually presented as a continuous view from everywhere and nowhere simultaneously. However, feminist thinking’s conception of situated knowledge asks us to remember positionality and perspective to counter the sense of framelessness in the traditional tools of data collection and analysis.
Cinema for Robots
Cinema for Robots was the beginning of an exploration into the system that visualises data, rather than the data visualisation itself being the outcome. Cinema for Robots presents a technique for considering how to visualise a computational process, instead of presenting data only as a fixed and retrospective artefact. The project critically investigates the technique of photogrammetry, using design to reflexively consider positionality in the production of a point cloud. In this case, the quality of situatedness is created by countering the otherwise frameless point cloud visualisation with animated recordings of the body’s position behind the camera that produced the data.
Photogrammetry is a technique in which a 3D model is computationally generated from a series of digital photographs of a space (or object). The photographs are taken systematically from many different perspectives, overlapping at the edges, as though mapping all surfaces and angles of the space. From this set of images, an algorithm can compute an accurate model of the space represented in the images, producing a point cloud. In a point cloud, every point has a 3D coordinate that relates to the spatial organisation of the original space. Each point also carries colour data from the photographs, much like a pixel, so the point cloud also has a photographic resemblance. In this project, the point cloud is a model of a site underneath the Colorado Street Bridge in Pasadena, California. It shows a mixture of overgrown bushes and the large engineered arches underneath the bridge.
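The data structure described above can be sketched in a few lines: each point couples a recovered 3D coordinate with a colour sampled from the source photographs. The names and values here are purely illustrative, not drawn from the project’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Point:
    # 3D coordinate recovered by the photogrammetry solver
    x: float
    y: float
    z: float
    # Colour sampled from the source photographs (0-255 per channel)
    r: int
    g: int
    b: int

# A toy three-point cloud: two greyish points of an arch, one green bush
cloud = [
    Point(0.0, 0.0, 4.2, 180, 175, 170),
    Point(0.5, 0.1, 4.1, 182, 177, 172),
    Point(2.0, 1.5, 0.3, 60, 110, 45),
]

def centroid(points):
    """Mean position of the cloud -- a simple spatial query that
    ignores the colour channels entirely."""
    n = len(points)
    return (sum(p.x for p in points) / n,
            sum(p.y for p in points) / n,
            sum(p.z for p in points) / n)
```

A real point cloud holds millions of such points, but the principle is the same: spatial coordinates and photographic colour fused into a single record.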
The image set was created from a video recording of the site from which still images were extracted. This image set was used as the input for the photogrammetry algorithm that produced the point cloud of the site. The original video recordings were then inserted back into the point cloud model, and their camera paths were animated to create a reflexive loop between the process of data collection and the data artefact it produced.
With photogrammetry, data, computation, and representation are all entangled. Similarly to remotely-sensed data sets, the point cloud model expresses a framelessness, a perspective of space that appears to achieve, as Haraway puts it, “the god trick of seeing everything from nowhere”. By reverse-engineering the camera positions and reinserting them into the point cloud of spatial data points, a reflexive computational connection is made between data that appears perspectiveless and the human body that produced it. In the series of animations comprising the project, the focus is on the gap between the capturing of data and the computational process that visualises it. The project also juxtaposes cinematic and computational aesthetics to explore the emerging gaze of new technologies.
The project is presented as a series of animations that embody and mediate a critical reflection on computational process. In one animation, the motion of a hand-held camera creates a particular aesthetic that further accentuates the body behind the camera that created the image data set. It is not a smooth or seamless movement but unsteady and unrefined. This bodily camera movement is then passed on to the point cloud model, rupturing its seamlessness. The technique is a way to reinsert the human body and a notion of positionality into the closed-loop of the computational process. In attempting to visualise the process that produces the outcome, reflexivity allows one to consider other possible outcomes, framings, and positions. The animations experiment with a form of situated computational visualisation.
Automata I + II
This work took the form of a series of simulations that explored a “computer vision code library” in an open-ended way. The simulations continued an investigation into computational visualisation rather than data visualisation. The process sought to reverse-engineer machine vision software – an increasingly politically contentious technology – and reflect on its internal functionality. Here, source code is situated within a social and political culture rather than a neutral and technical one. Instead of using a code library instrumentally to perform a task, the approach involves reading source code as a cultural text and developing reflexive visualisations that critically explore its functions.
Many of the tools we use in design and visualisation were developed in the field of computer vision, which engineers how computers see and make sense of the world, including through camera-tracking and the photogrammetry discussed previously. In Automata I, the OpenCV library (an open-source computer vision code library) was used. Computer vision comprises many functions layered on top of each other, acting as matrices that filter and analyse images in different ways to make them interpretable by algorithms. Well-known filters are “blob detection” and “background subtraction”. Simply changing a colour image to greyscale is also an important function within computer vision.
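Even the “simple” greyscale conversion mentioned above is a per-pixel rule: each output value is a weighted sum of the red, green and blue channels. A minimal sketch in pure Python, using the standard ITU-R BT.601 luma weights (the function and test image are illustrative, not the project’s code):

```python
def to_greyscale(image):
    """Convert an RGB image -- nested lists of (r, g, b) tuples --
    to greyscale using the ITU-R BT.601 luma weights, the same
    weighting conventionally used in computer vision libraries."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row]
            for row in image]

# A 2x2 test image: pure red, green, blue, and white pixels
rgb = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
grey = to_greyscale(rgb)  # -> [[76, 150], [29, 255]]
```

The unequal weights encode a fact about human vision (greater sensitivity to green), a reminder that even this most basic filter embeds perceptual and historical assumptions.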
Layering these filters onto input images helps to reveal the difference between how humans see and interpret the world and how an algorithm is programmed to see and interpret it. Reading the code makes it possible to understand the pixel logic at play in the production of a filter, in which each pixel in an image computes its value from the pixel values around it, producing various matrices that filter information in the image. The well-known “cellular automaton” algorithm applies a similar logic, as does “Langton’s ant”.
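The shared logic can be made concrete with one step of Conway’s Game of Life, the best-known cellular automaton: every cell computes its next value from the eight cells around it, the same neighbourhood operation an image filter performs on pixels. A sketch, not the project’s code:

```python
def life_step(grid):
    """One step of Conway's Game of Life on a 2D grid of 0/1 cells,
    with wraparound edges. Each cell's next value depends only on its
    eight neighbours -- the same neighbourhood logic as an image filter."""
    h, w = len(grid), len(grid[0])

    def live_neighbours(y, x):
        return sum(grid[(y + dy) % h][(x + dx) % w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))

    nxt = []
    for y in range(h):
        row = []
        for x in range(w):
            n = live_neighbours(y, x)
            # Survival with 2 or 3 live neighbours, birth with exactly 3
            row.append(1 if n == 3 or (grid[y][x] == 1 and n == 2) else 0)
        nxt.append(row)
    return nxt

# A horizontal "blinker" flips to vertical after one step
blinker = [[0] * 5 for _ in range(5)]
blinker[2][1] = blinker[2][2] = blinker[2][3] = 1
stepped = life_step(blinker)
```

Swap the birth/survival rule for a weighted sum and a threshold and the same loop becomes a convolution filter: the distance between a cellular automaton and machine vision is smaller than it first appears.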
A series of simulations were created using a satellite image of a site in the Amazon called the Meeting of Waters, the confluence of two rivers: the dark-coloured Rio Negro and the sandy-coloured Amazon River. The two rivers differ in speed, temperature and sediment load, so they do not merge but flow alongside each other in the same channel, visibly demarcated by their different colours.
The simulations were created by writing a new set of rules, or pixel logics, to compute the image, which had the effect of “repatterning” it. Analogously, this also appeared to “terraform” the river landscape into a new composition. The simulations switch between the image that the algorithm “sees”, including the information it uses to compute and filter the image, and the image that we see as humans, including the cultural, social and environmental information we use to make sense of it. The visualisation tries to explore the notion of machine vision as a “hyperimage”, an image that is made up of different layers of images that each analyse patterns and relationships between pixels.
Automata II is a series of simulations that continue the research into machine vision techniques established in Automata I. This iteration looks further into how matrices and image analysis combine to support surveillance systems operating on video images. By applying pixel rule sets similar to those used in Automata I, the visualisation shows how the algorithm detects motion in a video by separating foreground figures from the background – the basic operation of surveillance.
In another visualisation, a video of a chameleon is used to explore how the socio-political function of surveillance emerges from the mathematical abstraction of pixel operations. Chameleons are well known for their ability to camouflage themselves by blending into their environment (and in many cultures are associated with wisdom). Here, the algorithm is programmed to print the pixels when it detects movement in the video and to remain black when there is none. In the visualisation, the chameleon appears to reveal itself to the surveillance of the algorithm through its motion and to camouflage itself from the algorithm through its stillness. An aesthetic contrast is created between an ancient animal and the innovative technology that captures it; yet the chameleon resists the algorithm’s logic of separating background from foreground through its simple embodiment of stillness.
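The rule described here reduces to per-pixel frame differencing: where a pixel changes beyond a threshold between consecutive frames it is shown, otherwise it is output as black. A minimal sketch with an illustrative threshold value:

```python
def motion_reveal(prev_frame, curr_frame, threshold=10):
    """Frame differencing: output the current pixel only where it
    changed beyond the threshold between frames; otherwise black (0)."""
    return [[curr if abs(curr - prev) > threshold else 0
             for prev, curr in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_frame, curr_frame)]

# Two tiny greyscale frames: one pixel changes strongly (motion),
# one changes slightly (noise), the rest are still
prev = [[50, 50], [50, 50]]
curr = [[200, 50], [50, 55]]
out = motion_reveal(prev, curr)  # -> [[200, 0], [0, 0]]
```

A still subject is mathematically indistinguishable from background under this rule, which is exactly the gap the motionless chameleon exploits.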
The work explores the coded gaze of a surveillance camera and how machine vision is situated in society, politically and apolitically, in relation to the peculiarly abstract pixel logics that drive it. Here, visualisation is a reverse-engineering of that coded gaze in order to politically situate source code and code libraries for social and cultural interpretation.
Applying critical theory to data practices, including data-driven design and data visualisation, provides a way to interrupt the adherence to the neutral-objective narrative. It offers a way to circulate data practices more justly back into the social, political, ethical, economic, legal and philosophical domains from which they have always derived. The visual techniques presented here, and the ideas about what form a critical data visualisation practice could take, were neither developed in tandem nor sequentially, but by weaving in and out of project developments, exhibition presentations, and writing opportunities over time. Thus, they are not offered as seamless examples but as entry points and options for taking a critical approach to working with data in design. The proposition of situatedness as a territorial, social, and political quality that emerges from decolonial and feminist epistemologies is one pathway in this work. The field of critical data studies, whilst still incipient, is developing a rich discourse that is opportune and constructive for designers, although not immediately associated with visual practice. Situatedness as a critical data visualisation practice has the potential to further engage the forms of technological development interesting to designers with the ethical debates and mobilisations in society today.
 L. Gitelman, “Raw Data” is an Oxymoron (Cambridge, MA: MIT Press, 2013).
 d. boyd and K. Crawford, “Critical Questions for Big Data: provocations for a cultural, technological, and scholarly phenomenon”, Information, Communication & Society 15 5 (2012), 662–79.
 R. Kitchin, The Data Revolution: big data, open data, data infrastructures & their consequences (Los Angeles, CA: Sage, 2014).
 A. Iliadis and F. Russo, “Critical Data Studies: an introduction”, Big Data & Society 3 2 (2016).
 Y. A. Loukissas, All Data are Local: thinking critically in a data-driven world (Cambridge, MA: MIT Press, 2019), 3.
 Ibid., 23.
 Ibid., 2.
 Ibid., 10.
 Ibid., 10.
 D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.
 S. Harding, “‘Strong objectivity’: A response to the new objectivity question”, Synthese 104 (1995), 331–349.
 P. H. Collins, Black Feminist Thought: consciousness and the politics of empowerment (London, UK: HarperCollins, 1990).
 C. D’Ignazio and L. F. Klein, Data Feminism (Cambridge, MA: MIT Press, 2020), 152.
 Ibid., 162.
 N. P. Snyder and L. L. Kammer, “Dynamic adjustments in channel width in response to a forced diversion: Gower Gulch, Death Valley National Park, California”, Geology 36 2 (2008), 187–190.
 D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.