Issue 45
02/02/2023
ISSN 2634-8578
Curated By:
Jeannie Nguyen
Critical Practice, Data Feminism, Data Visualisation
Situatedness: A Critical Data Visualisation Practice
03/08/2022
Critical Practice, Data Feminism, Data Visualisation, Decolonisation, Situatedness
Catherine Griffiths

catgriff@umich.edu

Data and its visualisation have been an important part of architectural design practice for many years, from data-driven mapping to building information modelling to computational design techniques, and now through the datasets that drive machine-learning tools. In architectural design research, data-driven practices can imbue projects with a sense of scientific rigour and objectivity, grounding design thinking in real-world environmental phenomena.

More recently, “critical data studies” has emerged as an influential interdisciplinary discourse across social sciences and digital humanities that seeks to counter assumptions made about data by invoking important ethical and socio-political questions. These questions are also pertinent for designers who work with data. Data can no longer be used as a raw and agnostic input to a system of analysis or visualisation without considering the socio-technical system through which it came into being. Critical data studies can expand and deepen the practice of working with data, enabling designers to draw on pertinent ideas in the emerging landscape around data ethics. Data visualisation and data-driven design can be situated in more complex creative and critical assemblages. This article draws on several ideas from critical data studies and explores how they could be incorporated into future design and visualisation projects.

Critical Data Studies

The field of critical data studies addresses data’s ethical, social, legal, economic, cultural, epistemological, political and philosophical conditions, and questions the singularly scientific empiricism of data and its infrastructures. By applying methodologies and insights from critical theory, we can move beyond a status quo narrative of data as advancing a technical, objective and positivist approach to knowledge.

Historical data practices have promoted false notions of neutrality and universality in data collection, which has led to unintentional bias being embedded into data sets. This recognition that data is a political space was explored by Lisa Gitelman in “Raw Data” Is an Oxymoron, in which she argues that data does not exist in a raw state, like a natural resource, but is always undergoing a process of interpretation.[1] The rise of big data is a relatively new phenomenon: data is now harvested from extensive and nuanced facets of people’s lives, signifying a shift in the implications for power asymmetries and ethics. Critical data studies ties this relationship between data and society together.

The field emerged from the work of Kate Crawford and danah boyd, who in 2012 formulated a series of critical provocations in response to the rise of big data as an imperious phenomenon, highlighting its false mythologies.[2] Rob Kitchin’s work has appraised data and data science infrastructures as a new social and cultural territory.[3] Andrew Iliadis and Federica Russo use the theory of assemblages to capture the multitude of ways that already-composed data structures inflect and interact with society.[4] These authors all seek to situate data in a socio-technical framework from which it cannot be abstracted. For them, data is an assemblage, a cultural text, and a power structure that must be available for interdisciplinary interpretation.

Data Settings and Decolonisation

Today, with increasing access to large data sets and the notion that data can be extracted from almost any phenomenon, data has come to embody a sense of agnosticism. Data is easily abstracted from its original context, ported somewhere else, and used in a different context. Yanni Loukissas is a researcher of digital media and critical data studies who explores concepts of place and locality as a means of critically working with data. He argues that “data have complex attachments to place, which invisibly structure their form and interpretation”.[5] Data’s meaning is tied to the context from which it came. However, the way many people work with data today, especially in an experimental context, assumes that the origin of a data set does not hold meaning, and that data’s meaning does not change when it is removed from its original context.

In fact, Loukissas claims, “all data are local”, and the reconsideration of locality is an important critical data tactic.[6] Where did the data come from? Who produced it, when, and why? What instruments were used to collect it? What kind of conditioned audience was it intended for? How might these invisible attributes inform its composition and interpretation? These are all questions that reckon with a data set’s origin story. Loukissas proposes “learning to analyse data settings rather than data sets”.[7] The term “data set” evokes a sense of the discrete, fixed, neutral, and complete, whereas the term “data setting” counters these qualities and awakens us to a sense of place, time, and the nuances of context.

From a critical data perspective, we can ask why we strive for the digital and its data to be so place-agnostic: a totalising system of norms that erases the myriad of cultures. The myth of placelessness in data implies that everything can be treated equally by immutable algorithms. Loukissas concludes, “[o]ne reason universalist aspirations for digital media have thrived is that they manifest the assumptions of an encompassing and rarely questioned free market ideology”.[8] We should insist upon data’s locality and its multiple and specific origins to resist such an ideology.

“If left unchallenged, digital universalism could become a new kind of colonialism in which practitioners at the ‘periphery’ are made to conform to the expectations of a dominant technological culture.

If digital universalism continues to gain traction, it may yet become a self-fulfilling prophecy by enforcing its own totalising system of norms.”[9]

Loukissas’ incorporation of place and locality into data practices comes from the legacy of postcolonial thinking. Where Western scientific knowledge systems have shunned those of other cultures, postcolonial studies have sought to illustrate how all knowledge systems are rooted in local- and time-based practices and ideologies. For educators and design practitioners grappling with how to engage in the emerging discourse of decolonisation in pedagogy, data practices and design, Loukissas’ insistence on reclaiming provenance and locality in the way we work with abstraction is one way into this work.

Situated Knowledge and Data Feminism

Feminist critiques of science have also invoked notions of place and locality to question the epistemological objectivity of science. The concept of situated knowledge comes from Donna Haraway’s work to envision a feminist science.[10] Haraway is a scholar of Science and Technology Studies who has written about how feminist critiques of masculinity, objectivity and power can be applied to the production of scientific knowledge, showing how knowledge is mediated by and historically grounded in social and material conditions. Situated knowledge can reconcile issues of positionality, subjectivity, and their inherently contestable natures to produce a greater claim to objective knowledge, or what Sandra Harding has defined as “strong objectivity”.[11] Concepts of situatedness and strong objectivity are part of feminist standpoint theory. Patricia Hill Collins further proposes that the intersectional marginalised experiences of women and minorities – black women, for example – offer a distinctive point of view and experience of the world that should serve as a source for new knowledge that is more broadly applicable.[12]

How can we take this quality of situatedness from feminist epistemology and apply it to data practices, specifically the visualisation of data? In their book Data Feminism, Catherine D’Ignazio and Lauren Klein define seven principles for applying feminist thinking to data science. For example, principle six asks us to “consider context” when making sense of correlations in data.

“Rather than seeing knowledge artifacts, like datasets, as raw input that can be simply fed into a statistical analysis or data visualisation, a feminist approach insists on connecting data back to the context in which they were produced. This context allows us, as data scientists, to better understand any functional limitations of the data and any associated ethical obligations, as well as how the power and privilege that contributed to their making may be obscuring the truth.”[13]

D’Ignazio and Klein argue that “[r]efusing to acknowledge context is a power play to avoid power. It is a way to assert authoritativeness and mastery without being required to address the complexity of what the data actually represent”.[14] Data feminism is an intersectional approach to data science that counters the drive toward optimisation and convergence in favour of addressing the stakes of intersectional power in data.

Design Practice and Critical Data Visualisation

The visualisation of data is itself a means of interpreting data. It is part of the infrastructure of working with data and should also be open to critical methods. Design and visualisation are processes through which data can be treated with false notions of agnosticism and objectivity, or can be approached critically, questioning positionality and context. Creative, speculative and aesthetic-forward techniques can extend and enrich the data artefacts produced. Therefore, we should critically reflect on the processes and infrastructures through which we design and aestheticise data.

How can we take the concept of situatedness that comes out of critical data studies and deploy it in creative design practice? What representational strategies support thinking through situatedness as a critical data practice? Could we develop a situated data visualisation practice?

The following projects approach these questions using design research, digital humanities and critical computational approaches. They are experiments that demonstrate techniques for thinking critically about data and how that critique can be incorporated into data visualisation. The work also expands from the visualisation of data toward the visualisation of the computational processes and software infrastructures that engineer visualisations. There is also a shift from exploring situatedness as a notion of physical territory toward a notion of socio-political situatedness. The following works all take the form of short films, animations and simulations.

Alluvium

Figure 1 – A situating shot of the Gower Gulch site, to capture both scales of assessment: wide-angle photography shows the geomorphological consequences of flood water on the landscape, whilst macro photography details the granular role of sedimentation.

Cinematic data visualisation is a practice of visually representing data that incorporates cinematic aesthetics, combining an awareness of photography’s traditional concerns of framing, motion and focus with contemporary virtual cinematography’s techniques of camera-matching and computer-generated graphics. This process intertwines and situates data in a geographic and climatic environment, retaining the data’s relationship with its source of origin and the relevance that holds for its meaning.

As a cinematic data visualisation, Alluvium presents the results of a geological study on the impact of diverted flood waters on a sediment channel in Death Valley, California. The scenes take their starting point from Noah Snyder and Lisa Kammer’s 2008 study.[15] Gower Gulch is a 1941 diversion of a desert wash that offers an expedited view of geological changes that would normally take thousands of years to unfold, but which have evolved at this site in several decades due to the strength of the flash floods and the conditions of the terrain.

Gower Gulch provides a unique opportunity to see how a river responds to an extreme change in water and sediment flow rates, presenting effects that could mimic the impact of climate change on river flooding and discharge. The wash was originally diverted to prevent further flooding and damage to a village downstream; today, it presents us with a microcosm of geological activity. The research paper presents data as historical water flow that can only be measured and perceived retrospectively through the evidence of erosion and sediment deposition at the site.

Figure 2 – A situated visualisation combining physical cinematography and virtual cinematography to show a particle simulation of flood waters. 

Alluvium’s scenes are a hybrid composition of film and digitally produced simulations that use the technique of camera-matching. The work visualises the geomorphological consequences of water beyond human-scale perception. A particle animation was developed using accurate topographic models to simulate water discharge over a significant period. Alluvium compresses this timeframe, providing a sense of a geological scale of time, and places the representation and simulation of data in-situ, in its original environment.

In Alluvium, data is rendered more accessible and palpable through the relationship between the computationally-produced simulation of data and its original provenance. The data’s situatedness takes place through the way it is embedded into the physical landscape, its place of origin, and how it navigates its source’s nuanced textures and spatial composition.

The hybridised cinematic style that is produced can be deconstructed into elements of narrative editing, place, motion, framing, depth of field and other lens-based effects. The juxtaposition of the virtual and the real through a cinematic medium supports a recontextualisation of how data can be visualised and how an audience can interpret that visualisation. In this case, it is about geographic situatedness, retaining the sense of physical and material qualities of place, and the particular nuances of the historical and climatic environment.

Figure 3 – The velocity of the particles is mapped to their colouration, visualising water’s characteristic force, directionality and turbulence. The simulation is matched to a particular site of undercut erosion, so that the particles appear to carve the physical terrain.
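
The mapping described in Figure 3 is straightforward to sketch: normalise each particle’s speed and look it up in a colour ramp. The following is a minimal sketch in Python, assuming matplotlib’s colormaps; the velocity values are invented, not the project’s simulation data.

```python
import numpy as np
from matplotlib import cm

# Invented particle velocities (m/s) standing in for a flood simulation.
velocities = np.random.randn(1000, 3) * [2.0, 0.5, 0.3]

# Map speed (vector magnitude) onto a colour ramp: slow = dark, fast = bright.
speeds = np.linalg.norm(velocities, axis=1)
normalised = (speeds - speeds.min()) / (np.ptp(speeds) + 1e-9)
colours = cm.inferno(normalised)  # one RGBA colour per particle

print(colours.shape)  # (1000, 4)
```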

Death Valley National Park, situated in the Mojave Desert in the United States, is a place of extreme conditions. It has the highest temperature (57° Celsius) and the lowest elevation (86 metres below sea level) recorded in North America. It also receives only 3.8 centimetres of rainfall annually, making it North America’s driest place. Despite these extremes, the landscape has an intrinsic relationship with water. The territorial context is expressed through the cinematic whilst also connecting the abstraction of data to its place of origin.

For cinematic data visualisation, these elements are applied to the presentation of data, augmenting it into a more sensual narrative that loops back to its provenance. As a situated practice, cinematic data visualisation foregrounds relationships between the analysis and representation of data and its environmental and local situation. The connection between data and the context from which it was derived is retained, rather than the data being extracted, abstracted, and agnostically transferred to a different context in which site-specific meaning can be lost.

LA River Nutrient Visualization

Figure 4 – Reconstruction of the site of study, the Los Angeles River watershed from digital elevation data, combined with nutrient data from river monitoring sites.

Another project in the same series, the LA River Nutrient Visualization, considers how incorporating cinematic qualities into data visualisation can support a sense of positionality and perspective amongst heterogeneous data sets. This can be used to undermine data’s supposed neutrality and promote an awareness that data carries the concerns and stakes of different groups of people. Visualising data’s sense of positionality and perspective is another tactic for producing situatedness as a critical data visualisation practice. Whilst the water quality data used in this project appeared scientifically identical, it was collected by different groups: locally organised communities versus state institutions. The differences in why and by whom the data was collected are significant, and the project set out to incorporate them into its representational strategy.

This visualisation analyses nutrient levels, specifically nitrogen and phosphorus, in the water of the Los Angeles River, which testify to pollution levels and portray the river’s overall health. Analysed spatially and animated over time, the data visualisation aims to provide an overview of the available public data, its geographic, seasonal and annual scope, and its limitations. Three different types of data were used: surface water quality data from state and national environmental organisations, such as the Environmental Protection Agency and the California Water Science Center; local community-organised groups, such as the River Watch programme by Friends of the Los Angeles River and citizen science group Science Land’s E-CLAW project; and national portals for remotely-sensed data of the Earth’s surface, such as the United States Geological Survey.

The water quality data covers a nearly 50-year period from 1966 to 2014, collected from 39 monitoring stations distributed from the river’s source to its mouth, including several tributaries. Analysis showed changes in the river’s health based on health department standards, with areas of significantly higher concentrations of nutrients that consistently exceeded Water Quality Objectives.
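
As a sketch of how such an exceedance analysis might be structured in code, the following assumes a hypothetical CSV schema and threshold value; the column names and objective are placeholders, not the project’s actual data.

```python
import pandas as pd

# Hypothetical schema: one row per sample, with station ID, date and
# nitrogen concentration in mg/L. Names and threshold are illustrative.
samples = pd.read_csv("la_river_nutrients.csv", parse_dates=["date"])

NITROGEN_WQO = 8.0  # assumed Water Quality Objective, mg/L

# Flag samples that exceed the objective, then summarise per station.
samples["exceeds"] = samples["nitrogen_mg_l"] > NITROGEN_WQO
summary = (samples.groupby("station_id")
                  .agg(samples_total=("exceeds", "size"),
                       samples_exceeding=("exceeds", "sum"),
                       first_sample=("date", "min"),
                       last_sample=("date", "max")))
print(summary.sort_values("samples_exceeding", ascending=False))
```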

Figure 5 – Virtual cameras are post-processed to add lens-based effects such as shallow depth of field and atmospheric lighting and shadows. A low, third-person perspective is used to position the viewer with the data and its urban context.

The water quality data is organised spatially using a digital elevation model (DEM) of the river’s watershed to create a geo-referenced 3D terrain model that can be cross-referenced with any GPS-associated database. A DEM is a way of representing remotely-captured elevation, geophysical, biochemical, and environmental data about the Earth’s surface. The data itself is obtained by various types of cameras and sensors attached to satellites, aeroplanes and drones as they pass over the Earth.
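
The cross-referencing step can be sketched briefly: given a DEM stored as a GeoTIFF, a GPS coordinate is converted to a pixel index and the elevation read off. The sketch below uses the rasterio library; the file path and coordinates are placeholders, and the coordinate is assumed to share the raster’s reference system.

```python
import rasterio

# Open a DEM stored as a GeoTIFF (placeholder path).
with rasterio.open("la_watershed_dem.tif") as dem:
    elevation = dem.read(1)  # first band holds elevation values

    # A GPS-referenced monitoring station (placeholder lon/lat).
    lon, lat = -118.24, 34.05
    row, col = dem.index(lon, lat)  # map coordinates to pixel indices
    print(f"Elevation at station: {elevation[row, col]} m")
```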

Analysis of the water data showed that the state- and national-organised data sets provided a narrow and inconsistent picture of nutrient levels in the river. Comparatively, the two community-organised data sets offered a broader and more consistent approach to data collection. The meaning that emerged in this comparison of three different data sets, how they were collected, and who collected them ultimately informed the meaning of the project, which was necessary for a critical data visualisation.

Visually, the data was arranged and animated within the 3D terrain model of the river’s watershed and presented as a voxel urban landscape. Narrative scenes were created by animating slow virtual camera pans within the landscape to visualise the data from a more human, low, third-person point of view. These datascapes were post-processed with cinematic effects: simulating a shallow depth of field, ambient “dusk-like” lighting, and shadows. Additionally, the computer-generated scenes were juxtaposed with physical camera shots of the actual water monitoring sites, scenes that were captured by a commercial drone. Unlike Alluvium, the two types of cameras are not digitally matched. The digital scenes locate and frame the viewer within the data landscape, whereas physical photography provides a local geographic reference point to the abstracted data. This also gives the data a sense of scale and invites the audience to consider each data collection site in relation to its local neighbourhood. The representational style of the work overall creates a cinematic tempo and mood, informing a more narrative presentation of abstract numerical data.

Figure 6 – Drone-captured aerial video of each data site creates an in-situ vignette of the site’s local context and puts the data back into communication with its local neighbourhood. This also speaks to the visualisation’s findings that community organisation and citizen science was a more effective means of data collection and should be recognised in the future redevelopment of the LA River.

In this cinematic data visualisation, situatedness is engaged through the particular framing and points of view established in the scenes and through the juxtaposition of cinematography of the actual data sites. Here, place is social; it is about local context and community rather than a solely geographical sense of place. Cinematic aesthetics convey the “data setting” through a local and social epistemic lens, in contrast to the implied frameless and positionless view with which state-organised data is collected, including remotely-sensed data.

All the water data consisted of scientific measurements of nitrogen and phosphorus levels in the river. Numerically, the data is uniform, but the fact that different stakeholders collected it with different motivations and needs affects its interpretation. Furthermore, the fact of whether data has been collected by local communities or state institutions informs its epistemological status concerning agency, motivation, and environmental care practices.

Context is important to the meaning that the data holds, and the visualisation strategy seeks to convey a way to think about social and political equity and asymmetry in data work. The idea of inserting perspective and positionality into data is an important one. It is unusual to think of remotely-sensed data or water quality data as having positionality or a perspective. Many instruments of visualisation present their artefacts as disembodied. Remotely-sensed data is usually presented as a continuous view from everywhere and nowhere simultaneously. However, feminist thinking’s conception of situated knowledge asks us to remember positionality and perspective to counter the sense of framelessness in the traditional tools of data collection and analysis.

Cinema for Robots

Figure 7 – A point cloud model of the site underneath the Colorado Street Bridge in Pasadena, CA, showing a single camera position from the original video capture.

Cinema for Robots was the beginning of an exploration into the system that visualises data, rather than the data visualisation itself being the outcome. The project presents a technique for visualising computational process, instead of presenting data as only a fixed and retrospective artefact. It critically investigates the technique of photogrammetry, using design to reflexively consider positionality in the production of a point cloud. In this case, the quality of situatedness is created by countering the otherwise frameless point cloud visualisation with animated recordings of the body’s position behind the camera that produced the data.

Photogrammetry is a technique in which a 3D model is computationally generated from a series of digital photographs of a space (or object). The photographs are taken systematically from many different perspectives and overlapping at the edges, as though mapping all surfaces and angles of the space. From this set of images, an algorithm can compute an accurate model of the space represented in the images, producing a point cloud. In a point cloud, every point has a 3D coordinate that relates to the spatial organisation of the original space. Each point also contains colour data from the photographs, similarly to pixels, so the point cloud also has a photographic resemblance. In this project, the point cloud is a model of a site underneath the Colorado Street Bridge in Pasadena, California. It shows a mixture of overgrown bushes and large engineered arches underneath the bridge.
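
The data structure itself is simple to sketch: each point couples a spatial coordinate with a colour sampled from the photographs. The toy values below are invented for illustration.

```python
import numpy as np

# A toy point cloud: each point is (x, y, z, r, g, b), with coordinates
# in metres and colours 0-255 sampled from the source photographs.
points = np.array([
    [12.31, 4.02, 1.85, 142, 130, 101],  # e.g. a concrete arch
    [11.90, 3.88, 0.42,  58,  84,  39],  # e.g. an overgrown bush
    [12.05, 4.11, 0.40,  61,  79,  35],
])

xyz = points[:, :3]  # the spatial organisation of the original site
rgb = points[:, 3:]  # the photographic resemblance
print("Centroid of the cloud:", xyz.mean(axis=0))
```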

Figure 8 – A perspective of the bridge looking upwards with two camera positions that animate upwards in sync with the video.

The image set was created from a video recording of the site from which still images were extracted. This image set was used as the input for the photogrammetry algorithm that produced the point cloud of the site. The original video recordings were then inserted back into the point cloud model, and their camera paths were animated to create a reflexive loop between the process of data collection and the data artefact it produced.

With photogrammetry, data, computation, and representation are all entangled. Similarly to remotely-sensed data sets, the point cloud model expresses a framelessness, a perspective of space that appears to achieve, as Haraway puts it, “the god trick of seeing everything from nowhere”.[16] By reverse-engineering the camera positions and reinserting them into the point cloud of spatial data points, there is a reflexive computational connection between data that appears perspectiveless and the human body that produced it. In the series of animations comprising the project, the focus is on the gap between the capturing of data and the computational process that visualises it. The project also juxtaposes cinematic and computational aesthetics to explore the emerging gaze of new technologies.

Figure 9 – Three camera positions are visible and animated simultaneously to show the different positions of the body capturing the video that was the input data for the point cloud.

The project is presented as a series of animations that embody and mediate a critical reflection on computational process. In one animation, the motion of a hand-held camera creates a particular aesthetic that further accentuates the body behind the camera that created the image data set. It is not a smooth or seamless movement but unsteady and unrefined. This bodily camera movement is then passed on to the point cloud model, rupturing its seamlessness. The technique is a way to reinsert the human body and a notion of positionality into the closed-loop of the computational process. In attempting to visualise the process that produces the outcome, reflexivity allows one to consider other possible outcomes, framings, and positions. The animations experiment with a form of situated computational visualisation.

Automata I + II

Figure 10 – A satellite image of the Meeting of Waters in the Amazon region in Brazil. The original image shows the confluence of two rivers that flow together but do not mix. Pixel operations driven by agents change the composition of the landscape.

This work took the form of a series of simulations that critically explored a “computer vision code library” in an open-ended way. The simulations continued an investigation into computational visualisation rather than data visualisation. The process sought to reverse-engineer machine vision software – an increasingly politically contentious technology – and critically reflect on its internal functionality. Here, source code is situated within a social and political culture rather than a neutral and technical culture. Instead of using a code library instrumentally to perform a task, the approach involves critically reading source code as a cultural text and developing reflexive visualisations that explore its functions critically.

Many tools we use in design and visualisation were developed in the field of computer vision, which engineers how computers see and make sense of the world, including through the camera-tracking and photogrammetry discussed previously. In Automata I, the OpenCV library (an open-source computer vision code library) was used. Computer vision comprises many functions, layered on top of each other, which act as matrices that filter and analyse images in different ways to make them interpretable by algorithms. Well-known filters are “blob detection” and “background subtraction”. Simply changing a colour image to greyscale is also an important function within computer vision.
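
A minimal sketch of these operations, using standard OpenCV calls (the input file name is a placeholder):

```python
import cv2

# Load an image (placeholder path).
image = cv2.imread("meeting_of_waters.png")

# Greyscale conversion: a basic but important computer vision function.
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Blob detection: finds connected regions of similar intensity.
detector = cv2.SimpleBlobDetector_create()
keypoints = detector.detect(grey)
print(f"{len(keypoints)} blobs detected")

# Background subtraction: separates moving foreground from a static
# background; it is meaningful over a frame sequence, applied once here.
subtractor = cv2.createBackgroundSubtractorMOG2()
foreground_mask = subtractor.apply(image)
```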

Figure 11 – A greyscale filter shows the algorithmic view of the same landscape and computational data.

Layering these filters onto input images helps us understand the difference between how humans see and interpret the world and how an algorithm is programmed to see and interpret it differently. Reading the code makes it possible to understand the pixel logic at play in the production of a filter, in which each pixel in an image computes its value based on the pixel values around it, producing various matrices that filter information in the image. The well-known cellular automata algorithm applies a similar neighbourhood logic, as does Langton’s ant.
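
That shared pixel logic can be sketched with a convolution that gathers each pixel’s neighbourhood and a rule that recomputes the pixel from it. Here, as a stand-in for the project’s own rule sets, the rule is Conway’s Game of Life applied to a random binary grid.

```python
import numpy as np
from scipy.signal import convolve2d

def step(grid):
    """One pass of a neighbourhood rule: each cell's next value is
    computed from the eight cell values around it."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve2d(grid, kernel, mode="same", boundary="wrap")
    # Conway's rules as one example rule set: a live cell survives on
    # 2 or 3 neighbours; a dead cell is born on exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A random binary grid standing in for a thresholded satellite image.
grid = (np.random.rand(64, 64) > 0.6).astype(int)
for _ in range(10):
    grid = step(grid)
```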

A series of simulations were created using a satellite image of a site in the Amazon called the Meeting of Waters, which is the confluence of two rivers, the dark-coloured Rio Negro and the sandy-coloured Amazon River. The rivers have different speeds, temperatures and sediment loads, so they do not merge but flow alongside each other in the same channel, visibly demarcated by their different colours.

The simulations were created by writing a new set of rules, or pixel logics, to compute the image, which had the effect of “repatterning” it. Analogously, this also appeared to “terraform” the river landscape into a new composition. The simulations switch between the image that the algorithm “sees”, including the information it uses to compute and filter the image, and the image that we see as humans, including the cultural, social and environmental information we use to make sense of it. The visualisation tries to explore the notion of machine vision as a “hyperimage”, an image that is made up of different layers of images that each analyse patterns and relationships between pixels.

Automata II is a series of simulations that continue the research into machine vision techniques established in Automata I. This iteration looks further into how matrices and image analysis combine to support surveillance systems operating on video images. By applying pixel rule sets similar to those used in Automata I, the visualisation shows how the algorithm can detect motion in a video, separating figures in the foreground from the background, the basic operation on which surveillance builds.

Figure 12 – Using the OpenCV code library to detect motion, a function in surveillance systems. Using a video of a chameleon, the analysis is based on similar pixel operations to Automata I.

In another visualisation, a video of a chameleon works analogously to explore how the socio-political function of surveillance emerges from the mathematical abstraction of pixel operations. Chameleons are well-known for their ability to camouflage themselves by blending into their environment (and in many cultures are associated with wisdom). Here the algorithm is programmed to print the pixels when it detects movement in the video and remain black when there is no movement. In the visualisation, the chameleon appears to reveal itself to the surveillance of the algorithm through its motion and camouflage itself from the algorithm through its stillness. An aesthetic contrast is created between an ancient animal captured by innovative technology; however, the chameleon resists the algorithm’s logic to separate background from foreground through its simple embodiment of stillness.
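
The underlying motion logic can be sketched with plain frame differencing, one standard approach among several: pixels are revealed where consecutive frames differ and left black where they do not. The video path is a placeholder.

```python
import cv2
import numpy as np

capture = cv2.VideoCapture("chameleon.mp4")  # placeholder path
ok, previous = capture.read()

while ok:
    ok, frame = capture.read()
    if not ok:
        break
    # The difference between consecutive frames marks moving pixels.
    difference = cv2.absdiff(frame, previous)
    moving = cv2.cvtColor(difference, cv2.COLOR_BGR2GRAY) > 25
    # Print the original pixels where motion occurred; black elsewhere.
    revealed = np.where(moving[..., None], frame, np.zeros_like(frame))
    cv2.imshow("revealed by motion", revealed)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break
    previous = frame

capture.release()
```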

Figure 13 – The algorithm was reconfigured to only reveal the pixel operations’ understanding of movement. The chameleon disguises or reveals itself to the surveillance algorithm through its motion.

The work explores the coded gaze of a surveillance camera and how machine vision is situated in society, politically and apolitically, in relation to the peculiarly abstract pixel logics that drive it. Here, visualisation is a reverse-engineering of that coded gaze in order to politically situate source code and code libraries for social and cultural interpretation.

Final Thoughts

Applying critical theory to data practices, including data-driven design and data visualisation, provides a way to interrupt the adherence to the neutral-objective narrative. It offers a way to circulate data practices more justly back into the social, political, ethical, economic, legal and philosophical domains from which they have always derived. The visual techniques presented here, and the ideas about what form a critical data visualisation practice could take, were neither developed in tandem nor sequentially, but by weaving in and out of project developments, exhibition presentations, and writing opportunities over time. Thus, they are not offered as seamless examples but as entry points and options for taking a critical approach to working with data in design. The proposition of situatedness as a territorial, social, and political quality that emerges from decolonial and feminist epistemologies is one pathway in this work. The field of critical data studies, whilst still incipient, is developing a rich discourse that is opportune and constructive for designers, although not immediately associated with visual practice. Situatedness as a critical data visualisation practice has the potential to further engage the forms of technological development interesting to designers with the ethical debates and mobilisations in society today.

References

[1] L. Gitelman, “Raw Data” Is an Oxymoron (Cambridge, MA: MIT Press, 2013).

[2] d. boyd and K. Crawford, “Critical Questions for Big Data: provocations for a cultural, technological, and scholarly phenomenon”, Information, Communication & Society 15 5 (2012), 662–79.

[3] R. Kitchin, The Data Revolution: big data, open data, data infrastructures & their consequences (Los Angeles, CA: Sage, 2014).

[4] A. Iliadis and F. Russo, “Critical Data Studies: an introduction”, Big Data & Society 3 2 (2016).

[5] Y. A. Loukissas, All Data are Local: thinking critically in a data-driven world (Cambridge, MA: MIT Press, 2019), 3.

[6] Ibid, 23.

[7] Ibid, 2.

[8] Ibid, 10.

[9] Ibid, 10.

[10] D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.

[11] S. Harding, “‘Strong objectivity’: A response to the new objectivity question”, Synthese 104 (1995), 331–349.

[12] P. H. Collins, Black Feminist Thought: consciousness and the politics of empowerment (London, UK: HarperCollins, 1990).

[13] C. D’Ignazio and L. F. Klein, Data Feminism (Cambridge, MA: MIT Press, 2020), 152.

[14] Ibid, 162.

[15] N. P. Snyder and L. L. Kammer, “Dynamic adjustments in channel width in response to a forced diversion: Gower Gulch, Death Valley National Park, California”, Geology 36 2 (2008), 187–190.

[16] D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.

Games and Worldmaking 
consensus reality, games, mediascape, videogames, Virtual, worldmaking
Damjan Jovanovic

damjan@dmjn.net
Figure 1 – Planet Garden v.1 screenshot, early game state.

Worldmaking  

We live in a period of unprecedented proliferation of constructed, internally coherent virtual worlds, which emerge everywhere, from politics to video games. Our mediascape is brimming with rich, immersive worlds ready to be enjoyed and experienced, or decoded and exploited. One effect of this phenomenon is that we are now asking fundamental questions, such as what “consensus reality” is and how to engage with it. Another effect is that there is a need for a special kind of expertise that can deal with designing and organising these worlds – and that is where architects possibly have a unique advantage. Architectural thinking, as a special case of visual, analogy-based synthetic reasoning, is well positioned to become a crucial expertise, able to operate on multiple scales and in multiple contexts in order to map, analyse and organise a virtual world, while at the same time being able to introduce new systems, rules and forms to it.[1] 

A special case of this approach is something we can name architectural worldmaking,[2] which refers broadly to practices of architectural design that wilfully and consciously produce virtual worlds, and understand worlds as the main project of architecture. Architects have a unique perspective and could have a say in how virtual worlds are constructed and inhabited, but there is a caveat which revolves around questions of agency, engagement and control. Worldmaking is an approach to learning from both technically advanced visual and cultural formats, such as video games, and scientific ways of imaging and sensing, in order to construct new, legitimate, and serious ways of seeing and modelling.

These notions are central to the research seminar called “Games and Worldmaking”, first conducted by the author at SCI-Arc in the summer of 2021, which focused on the intersection of games and architectural design, and foregrounded systems thinking as an approach to design. The seminar is part of the ongoing Views of Planet City project, in development at SCI-Arc for the Pacific Standard Time exhibition organised by the Getty in 2024. In the seminar, we developed the first version of Planet Garden, a planetary simulation game, envisioned as both an interactive model of complex environmental conditions and a new narrative structure for architectural worldmaking.

Planet Garden is loosely based on Edward O. Wilson’s “Half-Earth” idea, a scenario in which the entire human population of the world occupies a single massive city and the rest of the planet is left to plants and animals. Half-Earth is an important and very interesting thought experiment, almost a proto-design: a prompt, an idea for a massive, planetary agglomeration of urban matter which could liberate the rest of the planet to heal and rewild.

The question of the game was: how could we actually model something like that? How do we capture all that complexity and nuance, figure out stakes and variables, and come up with consequences and conclusions? The game we are designing is a means to model and host hugely complex urban systems which unravel over time, while legibly presenting an enormous amount of information, visually and through the narrative. As a format, a simulation presents different ways of imaging the world and making sense of reality through models.

The work on game design started as a wide exploration of games and precedents within architectural design and imaging operations, as well as of abstract systems that could comprise a possible planetary model. The question of models and the modelling of systems comes to the forefront and is contrasted with existing architectural strategies of representation.

Mythologizing, Representing and Modelling 

Among the main influences of this project were the drawings made by Alexander von Humboldt, whose work is still crucial for anyone with an interest in representing and modelling phenomena at the intersection of art and science.[3] If, in the classical sense, art makes the world sensible while science makes it intelligible, these images are a great example of combining these forms of knowledge. Scientific illustrations, Humboldt once wrote, should “speak to the senses without fatiguing the mind”.[4] His famous illustration of Chimborazo volcano in Ecuador shows plant species living at different elevations, and this approach is one of the very early examples of data visualisation, with an intent of making the world sensible and intelligible at the same time. These illustrations also had a strong pedagogical intent, a quality we wanted to preserve, and which can serve almost as a test of legibility.

Figure 2 – Alexander von Humboldt, Chimborazo volcano.

The project started with a question of imaging a world of nature in the Anthropocene epoch. One of the reasons it is difficult to really comprehend a complex system such as the climate crisis is that it is difficult to model it, which also means to visually represent it in a legible way which humans can understand. This crisis of representation is a well-known problem in literature on the Anthropocene, most clearly articulated in the book Against the Anthropocene, by T.J. Demos.[5] 

We do not yet have the tools and formats of visualising that can fully and legibly describe such a complex thing, and this is, in a way, also a failure of architectural imagination. The standard architectural toolkit is limited and also very dated – it is designed to describe and model objects, not “hyperobjects”. One of the project’s main interests was inventing new modalities of description and modelling of complex systems through the interactive software format, and this is one of the ideas behind the Planet Garden project.  

Contemporary representational strategies for the Anthropocene broadly fall into two categories, those of mythologising or objectivising. The first approach can be observed in the work of photographers such as Edward Burtynsky and Louis Helbig, where the subject matter of environmental disaster becomes almost a new form of the aesthetic sublime. The second strategy comes out of the deployment and artistic use of contemporary geospatial imaging tools. As is well understood by critics, contemporary geospatial data visualisation tools like Google Earth are embedded in a specific political and economic framework, comprising a visual system delivered and constituted by the post–Cold War and largely Western-based military-state-corporate apparatus. These tools offer an innocent-seeming picture that is in fact a “techno-scientific, militarised, ‘objective’ image”.[6] Such an image displaces its subject and frames it within a problematic context of neutrality and distancing. Within both frameworks, the expanded spatial and temporal scales of geology and the environment exceed human and machine comprehension and thus present major challenges to representational systems.  

Within this condition, the question of imaging – understood here as making sensible and intelligible the world of the Anthropocene through visual models – remains, and it is not a simple one. Within the current (broadly speaking) architectural production, this topic is mostly treated through the “design fiction” approach. For example, in the work of Design Earth, the immensity of the problem is reframed through a story-driven, narrative approach which centres on the metaphor, and where images function as story illustrations, like in a children’s book.[7] Another approach is pursued by Liam Young, in the Planet City project,[8] which focuses on video and animation as the main format. In this work, the imaging strategies of commercial science fiction films take the main stage and serve as anchors for the speculation, which serves a double function of designing a new world and educating a new audience. In both cases, it seems, the focus goes beyond design, as these constructed fictions stem from a wilful, speculative exaggeration of existing planetary conditions, to produce a heightened state which could trigger a new awareness. In this sense, these projects serve a very important educational purpose, as they frame the problem through the use of the established and accepted visual languages of storybooks and films.  

The key to understanding how design fictions operate is precisely in their medium of production: all of these projects are made through formats (collage, storybook, graphic novel, film, animation) which depend on the logic of compositing. Within this logic, the work is made through a story-dependent arrangement of visual components. The arrangement is arbitrary as it depends only on the demands of the story and does not correspond to any other underlying condition – there is no model underneath. In comparison, a game such as, for example, SimCity is not a fiction precisely because it depends on the logic of a simulation: a testable, empirical mathematical model which governs its visual and narrative space. A simulation is fundamentally different from a fiction, and a story is not a model. 

This is one of the reasons why it seems important to rethink the concept of design fiction through the new core idea of simulation.[9] In the book Virtual Worlds as Philosophical Tools, Stefano Gualeni traces a lineage of thinking about simulations to Espen Aarseth’s 1994 essay in the collection Hyper/Text/Theory, and specifically to the idea of cybertextuality. According to this line of reasoning, simulations contain an element not found in fiction and thus need an ontological category of their own: “Simulations are somewhere between reality and fiction: they are not obliged to represent reality, but they have an empirical logic of their own, and therefore should not be called fictions.”[10] This presents us with a fundamental insight into the use of simulations as the future of architectural design: they model internally coherent, testable worlds and go beyond mere fiction-making into worldmaking proper.

Simulations, games and systems 

In the world of video games, there exists a genre of “serious” simulation games, which comprises Maxis’s SimCity and The Sims as well as other important games like Sid Meier’s Civilization and Paradox Interactive’s Stellaris. These games are conceptually very ambitious and extremely complex, as they model the evolution of whole societies and civilisations, operate on very long timescales, and consist of multiple nested models that simulate histories, economies and the evolution of different species at multiple scales. One important feature and obligation of this genre is to present a coherent, legible image of the world, to give a face to the immense complexity of the model. The “user interface” elements of these kinds of games work together to tell a coherent story, while the game world, rendered in full 3D in real time, provides an immersive visual and aesthetic experience for the player. Unlike almost any other type of software, these interfaces are more indebted to the history of scientific illustration and data visualisation than to the history of graphic design. These types of games are open-ended and not bound to one goal, and there is rarely a clear win state.

Figure 3 – SimEarth main user interface with the Gaia window.

Another feature of the genre is a wealth of underlying mathematical models, each providing for the emergence of complexity and each carrying its own assumptions and biases. For example, SimCity is well known (and some would say notorious) for its rootedness in Jay Forrester’s Urban Dynamics approach to modelling urban phenomena, which means that its mathematical model delivers very specific urban conditions – and ultimately, a very specific vision of what a city is and could be.[11] One of the main questions in the seminar became how we might update this approach on two fronts: by rethinking the mathematical model, and by rethinking urban assumptions of the conceptual model. 

The work of the game designer Will Wright, the main designer behind the original SimCity as well as The Sims and Spore, is considered to be at the origin of simulation games as a genre. Wright has developed a vast body of knowledge on modelling simulations, some of which he presented in his influential 2003 talk at the Game Developers Conference (GDC), titled “Dynamics for Designers”.[12] In this talk, Wright outlines a fully-fledged theory of modelling complex phenomena for interactivity, focusing on topics such as “How we can use emergence to model larger possibility spaces with simpler components”. Some of the main points: science is a modelling activity, and until now it has used traditional mathematics as its primary modelling method, which has limits when dealing with complex dynamic and emergent systems. Since the advent of the computer, simulation has emerged as an alternative way of modelling. The two are very different: in Wright’s view, maths is a more linear process built from complex equations, whereas simulation is a more parallel process built from simpler components interacting together. Wright also discusses stochastic (random probability distribution) and Monte Carlo (“brute force”) methods as examples of the simulation approach.

Figure 4 – SimEarth civilisation model with sliders.

Wright’s work was the result of a deep interest in exploring how non-linear models are constructed and represented within the context of interactive video games, and his design approach was to invent novel game design techniques based directly on System Dynamics, a discipline that deals with the modelling of complex, unpredictable and non-linear phenomena. The field has its roots in the cybernetic theories of Norbert Wiener, but it was formalised in the mid-1950s by Professor Jay Forrester at MIT, and later developed by Donella H. Meadows in her seminal book Thinking in Systems.[13]

System dynamics is an approach to understanding the non-linear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.[14,15] Forrester (1918–2016) was an American computer engineer and systems scientist, credited as the “founding father” of system dynamics. He started by modelling corporate supply chains and went on to model cities by describing “the major internal forces controlling the balance of population, housing and industry within an urban area”, which he claimed could “simulate the life cycle of a city and predict the impact of proposed remedies on the system”.[16] In the book Urban Dynamics, Forrester turned the city into a formula with just 150 equations and 200 parameters.[17] The book was very controversial, as it implied extreme anti-welfare politics and, through its “objective” mathematical model, promoted neoliberal ideas of urban planning.
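
The stock-and-flow vocabulary is easy to illustrate in code. The toy model below has two stocks coupled by flows and a feedback loop, integrated with simple Euler steps; the parameters are invented, and this is not Forrester’s actual World2 or Urban Dynamics model.

```python
# Two stocks (population, resources) coupled by flows with feedback:
# scarcity of resources slows population growth. Invented parameters.
population, resources = 100.0, 1000.0
dt = 1.0  # timestep in years

for year in range(200):
    # Flows: births scale with resource availability (the feedback
    # loop); deaths and consumption scale with population.
    births = 0.03 * population * (resources / 1000.0)
    deaths = 0.02 * population
    consumption = 0.05 * population
    regeneration = 2.0

    population += (births - deaths) * dt
    resources = max(resources + (regeneration - consumption) * dt, 0.0)

print(f"population={population:.0f}, resources={resources:.0f}")
```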

In another publication, called World Dynamics, Forrester presented “World2”, a system dynamics model of our world which was the basis of all subsequent models predicting a collapse of our socio-technological-natural system by the mid-21st century. Nine months after World Dynamics, a report called Limits to Growth was published, which used the “World3” computer model to simulate the consequences of interactions between the Earth and human systems. The study, commissioned by the Club of Rome, was first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971, and predicted societal collapse by the year 2040. Most importantly, the report put the idea of a finite planet into focus.

Figure 5 – Jay W. Forrester, World2 model, base for all subsequent predictions of collapse such as Limits to Growth.

The main case study in the seminar was Wright’s 1990 game SimEarth, a life simulation video game in which the player controls the development of a planet. In developing SimEarth, Wright worked with the English scientist James Lovelock, who served as an advisor and whose Gaia hypothesis of planetary evolution was incorporated into the game. Continuing the systems dynamics approach developed for SimCity, SimEarth was an attempt to model a scientifically accurate approximation of the entire Earth system through the application of customised systems dynamics principles. The game modelled multiple interconnected systems and included realistic feedback between land, ocean, atmosphere, and life itself. The game’s user interface even featured a “Gaia Window”, in direct reference to the Gaia theory which states that life plays an intimate role in planetary evolution and the regulation of planetary systems. 

One of the tutorial levels for SimEarth featured a playable model of Lovelock’s “Daisyworld” hypothesis, which postulates that life itself evolves to regulate its environment, forming a feedback loop and making it more likely for life to thrive. During the development of a life-detecting device for NASA’s Viking lander mission to Mars, Lovelock made a profound observation: life tends to increase the order of its surroundings, such that studying the atmospheric composition of a planet will provide evidence enough of life’s existence. Daisyworld is a simple planetary model designed to show the long-term effects of coupling and interdependence between life and its environment. In its original form, it was introduced as a defence against criticism that his Gaia theory of the Earth as a self-regulating homeostatic system requires teleological control rather than being an emergent property. The central premise, that living organisms can have major effects on the climate system, is no longer controversial.
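
Daisyworld’s feedback loop is compact enough to sketch directly. The version below is a simplified reading of Watson and Lovelock’s published model (two daisy species whose albedos regulate planetary temperature under a brightening sun); the constants are abbreviated and the local-temperature term is linearised.

```python
import numpy as np

SIGMA = 5.67e-8                           # Stefan-Boltzmann constant
FLUX = 917.0                              # solar flux density, W/m^2
ALBEDO = {"bare": 0.5, "white": 0.75, "black": 0.25}
OPTIMUM, K, DEATH = 295.5, 0.003265, 0.3  # growth curve and death rate

white, black = 0.01, 0.01                 # initial daisy coverage
for lum in np.linspace(0.6, 1.4, 200):    # slowly brightening sun
    for _ in range(100):                  # settle towards equilibrium
        bare = 1.0 - white - black
        albedo = (bare * ALBEDO["bare"] + white * ALBEDO["white"]
                  + black * ALBEDO["black"])
        temp = (FLUX * lum * (1 - albedo) / SIGMA) ** 0.25
        # Local temperatures deviate from the planetary mean with
        # albedo: dark daisies run warmer, light daisies cooler.
        growth = {}
        for kind in ("white", "black"):
            local = temp + 20.0 * (albedo - ALBEDO[kind])
            growth[kind] = max(0.0, 1 - K * (OPTIMUM - local) ** 2)
        white += white * (bare * growth["white"] - DEATH) * 0.1
        black += black * (bare * growth["black"] - DEATH) * 0.1
        white, black = max(white, 0.001), max(black, 0.001)

print(f"final planetary temperature: {temp - 273.15:.1f} C")
```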

Figure 6 – SimEarth full planetary model.

In SimEarth, the planet itself is alive, and the player is in charge of setting the initial conditions as well as maintaining and guiding the outcomes through the aeons. Once a civilisation emerges, the player can observe the various effects, such as the impacts of changes in atmospheric composition due to fossil fuel burning, or the temporary expansion of ice caps in the aftermath of a major nuclear war. SimEarth’s game box came with a 212-page game manual that was at once a comprehensive tutorial on how to play and an engrossing lesson in Earth sciences: ecology, geology, meteorology and environmental ethics, written in accessible language that anyone could understand.  

Figures 7 & 8 – Planet Garden simplified model and main game loop.

SimEarth and other serious simulation games represent a way that games could serve a function of public education while remaining a form of popular entertainment. The genre also represents an incredible validation of claims that video games can be valuable cultural artefacts. Ian Bogost writes: “This was a radical way of thinking about video games: as non-fictions about complex systems bigger than ourselves. It changed games forever – or it could have, had players and developers not later abandoned modelling systems at all scales in favor of representing embodied, human identities.”[18]

Lessons that architectural design can learn from these games are many and varied, the most important one being that it is possible to think about big topics by employing models and systems while maintaining an ethos of exploration, play and public engagement. In this sense, one could say that a simulation game format might be a contemporary version of Humboldt’s illustration, with the added benefit of interactivity; but as we have seen, there is a more profound, crucial difference – this format goes beyond just a representation, beyond just a fiction, into worldmaking.  

As a result of this research, the students in the seminar utilised Unreal Engine to create version one (v.1) of Planet Garden, a multi-scalar, interactive, playable model of a self-sustaining, wind- and solar-powered robotic garden, set in a desert landscape. The simulation was envisioned as a kind of reverse city builder, in which the goal is to terraform a desert landscape by deploying different kinds of energy-producing technologies until the right conditions are met for planting and the production of oxygen. The basic game loop is based on the interaction between the player and four main resources: energy, water, carbon, and oxygen. In the seminar, we also created a comprehensive game manual. The aims of the project were to learn how to model dynamic systems and to explore how game workflows can be used to address urban issues.
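
A sketch of that basic loop, reduced to its resource arithmetic: machines convert between resources each tick until planting conditions are met. All rates, thresholds and machine types here are invented for illustration; the actual game runs as an Unreal Engine simulation.

```python
# Minimal sketch of a Planet Garden-style resource loop.
resources = {"energy": 0.0, "water": 0.0, "carbon": 100.0, "oxygen": 0.0}
machines = {"solar_panel": 2, "wind_turbine": 1, "condenser": 0}
PLANTING_NEEDS = {"water": 50.0, "energy": 20.0}  # invented thresholds

def tick(resources, machines):
    """Advance the garden by one step: generate energy, harvest water."""
    resources["energy"] += 1.5 * machines["solar_panel"]
    resources["energy"] += 2.0 * machines["wind_turbine"]
    # Each condenser spends 0.5 energy to pull 1.0 water from the air.
    running = min(machines["condenser"], int(resources["energy"] / 0.5))
    resources["water"] += 1.0 * running
    resources["energy"] -= 0.5 * running
    # Carbon and oxygen would enter the loop once planting begins.

step = 0
while any(resources[k] < need for k, need in PLANTING_NEEDS.items()):
    if resources["energy"] > 10.0 and machines["condenser"] < 5:
        machines["condenser"] += 1  # stands in for a player decision
    tick(resources, machines)
    step += 1

print(f"planting conditions met after {step} ticks")
```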

Planet Garden is projected to become a large-scale game for the Getty exhibition: a simulation of a planetary ecosystem as well as of a city for 10 billion people. We aim to model various aspects of the planetary city, and the player will be able to operate across multiple spatial sectors and urban scales. The player can explore different ways to influence the development and growth of the city and test many scenarios, but the game will also run on its own, so that the city can exist without direct player input. Our game utilises core design principles that relate to system dynamics, evolution, environmental conditions and change. A major point is the player's input and decision-making process, which influence the outcome of the game. The game will also be able to present the conditions and consequences of this urban thought experiment, as something is always at stake for the player.

The core of the simulation-as-a-model idea is that design should have testable consequences. The premise of the project is not to construct a single truthful, total model of an environment but to explore ways of imaging the world through simulation and to open new avenues for holistic thinking about the interdependence of actors, scales and world systems. If the internet ushered in a new age of billions of partial identarian viewpoints, all aggregating into an inchoate world gestalt, is it time to rediscover a new image of the interconnected world?

Figure 9 – Planet Garden screenshot, late game state.
Figures 10–16 – Planet Garden v.1.

References

[1] For a longer discussion on this, see O. M. Ungers, City Metaphors (Cologne: Buchhandlung Walther König, 2011). For the central place of analogies in scientific modelling, see M. Hesse, Models and Analogies in Science, and D. Hofstadter, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking (New York: Basic Books, 2013).

[2] The term "worldmaking" comes from Nelson Goodman's book Ways of Worldmaking, and is used here to distinguish it from worldbuilding, a narrower, commercially oriented term.

[3] For a great introduction to the life and times of Alexander Von Humboldt, see A. Wulf, The Invention of Nature: Alexander von Humboldt’s New World (New York: Alfred A. Knopf, 2015).

[4] Quoted in H. G. Funkhouser, “Historical development of the graphical representation of statistical data”, Osiris 3 (1937), 269–404.

[5] T. J. Demos, Against The Anthropocene (Berlin: Sternberg Press, 2016).

[6] T. J. Demos, Against The Anthropocene (Berlin: Sternberg Press, 2016).

[7] Design Earth, Geostories (Barcelona: Actar, 2019); Design Earth, The Planet After Geoengineering (Barcelona: Actar, 2021).

[8] L. Young, Planet City (Melbourne: Uro Publications, 2020).

[9] For an extended discussion of the simulation as a format, see D. Jovanovic, “Screen Space, Real Time”, Monumental Wastelands 01, eds. D. Lopez and H. Charbel (2022). 

[10] S. Gualeni, Virtual Worlds as Philosophical Tools (Palgrave Macmillan, 2015).

[11] For an extended discussion on this, see C. Ashley, "The Ideology Hiding in SimCity's Black Box", Polygon, 2021, https://www.polygon.com/videos/2021/4/1/22352583/simcity-hidden-politics-ideology-urban-dynamics.

[12] W. Wright, Dynamics for Designers, GDC 2003 talk, https://www.youtube.com/watch?v=JBcfiiulw-8.

[13] D. H. Meadows, Thinking in Systems (White River Junction: Chelsea Green Publishing, 2008).

[14] Arnaud M., “World2 model, from DYNAMO to R”, Towards Data Science, 2020, https://towardsdatascience.com/world2-model-from-dynamo-to-r-2e44fdbd0975.

[15] Wikipedia, “System Dynamics”, https://en.wikipedia.org/wiki/System_dynamics.

[16] J. W. Forrester, Urban Dynamics (Pegasus Communications, 1969).

[17] K. T. Baker, “Model Metropolis”, Logic 6, 2019, https://logicmag.io/play/model-metropolis.

[18] I. Bogost, "Video Games Are Better Without Characters", The Atlantic (2015), https://www.theatlantic.com/technology/archive/2015/03/video-games-are-better-without-characters/387556.

Figure 1 – Landscapes of Exploitation, Kibali gold mines, Democratic Republic of the Congo.
MIGRATING LANDSCAPES 
ALGORITHMIC VISION, MEDIA ECOLOGIES, MIGRATING LANDSCAPES, REPRESENTATION, TOKENISATION
Tanya Mangion, Michiel Helbig, Corneel Cannaerts

tanyamangion95@gmail.com

MEDIA ECOLOGIES 

Our collective consciousness of climate change is an achievement of the vast apparatus of computational technologies for capturing, processing and visualising increasing amounts of data produced by earth observation technologies, satellite imaging and remote sensing. These technologies establish novel ways of sensing and understanding our world, extending human visual cultures in scale, time and spectral capacity. The gathered data is synthesised into increasingly complex models and simulations that afford highly dynamic visualisations of climate events as they unfold and envision near-future scenarios. The images resulting from this technical vision and cognition render the artificial abstraction comprehensible, and are essential both to developing the notion of climate change and to attempts to mitigate its effects.[1]

The artificial abstraction introduced through this planetary apparatus is reflected in the naming of the Anthropocene, as the contemporary geological epoch, prompted by humanity’s lasting impact on our planet.[2] The naming has been criticised for its anthropocentrism, i.e. putting the human once again at the centre, and for depoliticising and de-territorialising climate change, casting the whole of humanity as equally responsible for environmental crises, disregarding substantial regional and societal differences. Several alternatives have been formulated in critique of the term: Capitalocene,[3] highlighting the devastating role of capitalism in climate change, or Plantationocene,[4] stressing the ongoing inequalities resulting from colonialism and slave labour. While acknowledging these terms, Donna Haraway proposes the term Chthulucene, introducing multispecies stories and practices, mythologies, and tentacular narratives to avoid anthropocentrism and reductionism, providing room for more than human agency.[5] 

The framing of climate crises within human-centred, depoliticised, technocratic discourse is also strongly critiqued from cultural practices in the arts, design and media.[6] The top-down, analytical point of view afforded through scientific observation, visualisation and prediction is increasingly being complemented by documentary, eyewitness and on-the-ground reports of the impact of climate change. Images captured through the plethora of cell phone and other cameras, data logging, image sharing and social media produce a constantly updating stream of images and data on climate change. Digital media ecologies, the assemblages of hardware, software and content of digital media within our environment, play an important role in addressing climate change.[7] Whether it is through the repurposing of the scientific apparatus and technologies for observation and visualisation or the ubiquitous use of personal devices and social media, computational images have become significant cultural media artefacts that can be used to develop more narrative and fictional imaginaries of environmental crises. 

Landscapes are defined as both natural and human-made environments, as well as their depiction in media such as painting, photography and film. Even as environments, landscapes are a physical and multi-sensory medium in which cultural meanings and values are encoded. Landscapes operate through the visual: a landscape is what can be seen from a certain vantage point, and implies an active spectator. As a verb, landscaping indicates acting on the environment by manipulating its material features, erasing or adding elements. Both as environment and as media, landscapes are inextricably entangled with capital and power, whether exploited through extracting resources, consumed as an experience through tourism and real estate, or mediated and commodified as an artefact. In Landscape and Power, Mitchell describes landscape as a medium: an area of land is only considered a landscape from the moment one perceives it as such, through attached meanings, as an artificial-cultural, political and social construct.[8] The recent climate crises and the emergence of digital media ecologies require us to rethink this implicit human-centred notion of landscape and extend it to include non-human, animal and machine agencies.[9] As such, landscapes are an interesting lens through which to look at the blurring between the natural and the cultural, human and non-human agency, and the mediated and bodily experience of environments.

Figure 1 – Landscapes of Exploitation, Kibali gold mines, Democratic Republic of the Congo. 

MIGRATING LANDSCAPES 

The dissertation project "Migrating Landscapes" by Tanya Mangion is framed within the ideas outlined above; it explores landscapes as both environment and media, inextricably entangled with capital and power.[10] The project speculates on landscapes gaining agency through a decentralised autonomous organisation (DAO)[11] that can interact on behalf of the landscape with human agencies – individuals, governments, legal entities, financial systems… Once established, the DAO runs on the blockchain and can operate without human interference, as regulated through smart contracts. Governance of the DAO is regulated through tokens, which fractionalise stewardship but cannot be used to act against the interests of the landscape as encoded in the DAO.

This speculative scenario questions what role architecture could play when engaged by a DAO that represents the interests of exploited landscapes. How do architects design for this non-human agency? What strategies could architects develop to engage landscapes beyond the habitual ways of looking at them as resources to be excavated or sites to be developed? What novel languages, tools and protocols would architects need to develop in order to take up this role? Rather than being answered definitively, these questions form the drivers for developing a speculative design project.

The architectural toolbox seems ill-equipped to deal with the large timeframes and scales on which migrating landscapes operate. In order to begin to address these questions, we might extend the architectural toolbox with technologies such as earth observation, satellite imagery, data mining, sensor arrays… The role of the architect could be to repurpose the high-tech apparatus and data from scientific observations of climate change and turn them into speculative design narratives and imaginaries of migrating landscapes. Using media ecologies and algorithmic vision, the project highlights issues and landscapes that deserve attention, and launches a call to architects who wish to engage with them. Data collection from available data sets, including time-based, satellite, terrain and eyewitness data, could be used to rebuild a cohesive image of exploited landscapes, using narrative media combined with conventional architectural processes. Injecting the image of the landscapes back into the media ecology would generate a feedback loop, bringing about changes in human behaviour towards the landscape both as media and as environment, the latter occurring over a longer time frame.

The speculative design project explores this potential through different aspects: starting with the use of algorithmic vision to analyse landscapes, then giving an overview of the phases of the DAO's development, exploring a tokenisation shift from a fungible to a non-fungible valuation of landscapes, examining the representation of landscapes in media ecologies, and demonstrating how architecture could be used to engage an audience.

ALGORITHMIC VISION 

Computational visual tools allow architects novel ways of understanding, mapping and visualising landscapes. The combination of multiple data sets provides a more densely mediated version of a landscape. Satellites can capture the image of a landscape and, when combined with terrain data, mapping platforms provide a data-rich and layered representation of it. While mapping services like Google Maps or GIS platforms are presented as neutral media, they are entangled with commercial, military and political interests,[12] not only in the technologies used for capturing data but also in its visualisation – as is demonstrated by the absence of data for certain territories, differences in resolution, or the deliberate blurring of specific sites.[13]

Satellite imagery is not limited to capturing the bands of the spectrum visible to human eyes; by combining several bands it can provide insights into vegetation, elevation, refraction, moisture, temperature… The resulting multi-band images can be considered synthetic, artificial artefacts, as they are assembled by algorithms. They remain largely invisible to humans and are reduced to mediating information and data flows, as they "do not represent an object, but rather are part of an operation".[14] Depending on the capturing sensor, information is sampled at discrete intervals, at resolutions ranging from a hundred metres to fifteen centimetres. Depending on the number of satellites and their operation, the images have a certain refresh rate, giving us the ability to revisit time progressions within the landscapes. These freeze-framed images of landscapes provide us with information, or proof, of interventions that have occurred within a territory over time.[15]

Figure 2 – Satellite bands from Sentinel Application Platform (SNAP), B8, infrared, natural colour. 
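A standard example of such an algorithmically assembled, multi-band product is the normalised difference vegetation index (NDVI), which combines the red and near-infrared bands to make vegetation legible. The sketch below is a minimal illustration assuming Sentinel-2 data read with the rasterio library; the file names are placeholders (for Sentinel-2, band 4 is red and band 8 is near-infrared).

```python
import numpy as np
import rasterio  # assumed here for reading single-band GeoTIFFs

def ndvi(red_path: str, nir_path: str) -> np.ndarray:
    """NDVI = (NIR - RED) / (NIR + RED), ranging from -1 to 1."""
    with rasterio.open(red_path) as r, rasterio.open(nir_path) as n:
        red = r.read(1).astype("float32")
        nir = n.read(1).astype("float32")
    return (nir - red) / np.maximum(nir + red, 1e-6)  # guard against division by zero

# Placeholder file names for two bands covering the same territory:
index = ndvi("kibali_B04.tif", "kibali_B08.tif")
print("vegetated fraction:", float((index > 0.3).mean()))  # 0.3 is a rough threshold
```

The output is exactly the kind of image that "does not represent an object": a field of values that becomes legible only through a further operation, such as thresholding for vegetated ground.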

The landscapes in the project are the result of human-centric actions such as resource extraction, as demonstrated at one of the largest gold mines in the Democratic Republic of the Congo. In addition to satellite images, a virtual field trip of sorts allowed a journey through the data-sphere of the landscapes concerned. Extraction was then performed on several levels: data extraction from photo-sharing platforms was used to investigate the image of the landscapes within the limits of their geolocation, and a further extraction explored the fungible asset within the landscape, yielding a plethora of data on the appropriation of that asset within our culture. Through a process of data scraping, deduction and fragmentation, a series of reconstructions of landscapes was produced during the project. These reconstructed landscapes link material flows from extraction to consumption – of, for instance, gold – and are published again through social media in an attempt to reveal the material sources of familiar consumer objects.[16] Gold was a remarkable mineral to start with, due to its use as a reserve asset that keeps economies stable by functioning as a hedge against inflation, as well as its significance in history and popular culture.[17]

Figure 3 – Zoomable map of the Kibali gold mines, Democratic Republic of the Congo.

TOKENISATION 

When landscapes are excavated for minerals, they are valued for their interchangeable, fungible material properties – for example, the amount of gold they contain. Once extracted, each gram of gold is valued the same, regardless of where on the planet it has been mined. If, by contrast, one goes for a hike, or looks at a landscape painting or photograph, specific features of the landscape – slopes or mountain peaks – provide unique experiences; they are not interchangeable, they are non-fungible. In both these scenarios, the fungible exploitation of landscapes for resource extraction and the non-fungible experience of landscapes, mediated or otherwise, the landscape is passive and has no agency.

Figure 4 – Tokenisation of the landscape though mesh triangulation. 

The project proposes tokenisation of the non-fungible aspects of the landscape, controlled by a DAO, allowing collective stewardship of the landscape. This is achieved by appropriating tools from earth observation to build a mesh representation of the landscape. Each triangle of the mesh represents a unique, non-fungible fractional token of the landscape – in contrast to a voxel representation, which could be seen as representing the fungible exploitation of the landscape. This data allows an understanding, on a large scale, of fluxes within the landscape, and detects changes unseen by the human eye. Additionally, this data offers the possibility of autonomising landscapes as DAO systems and thereby giving them agency. The DAO operates transparently and independently of human intervention, including that of its creators. Based on a collection of smart contracts running on blockchain technology, it has the ability to garner capital, with automation at its centre and humans at the edges to manage, protect and promote its agency.[18]
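The fractionalisation step can be pictured as deriving a stable identifier for every triangle of the terrain mesh, so that each token is bound to one unique, non-interchangeable patch of ground. The sketch below illustrates one possible scheme, hashing vertex coordinates into a token ID; the coordinates and the hashing approach are assumptions for illustration, and an actual DAO would mint such identifiers as NFTs on-chain.

```python
import hashlib

def triangle_token(vertices) -> str:
    """vertices: three (lon, lat, elevation) tuples defining one mesh face."""
    canonical = sorted(vertices)  # order-independent, so the ID is stable
    payload = ",".join(f"{x:.6f},{y:.6f},{z:.2f}" for x, y, z in canonical)
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

# A toy two-triangle patch; a real mesh would be built from satellite terrain data.
mesh = [
    [(29.5631, 3.1034, 612.0), (29.5632, 3.1034, 615.5), (29.5631, 3.1035, 610.2)],
    [(29.5632, 3.1034, 615.5), (29.5632, 3.1035, 613.0), (29.5631, 3.1035, 610.2)],
]
for triangle in mesh:
    print("non-fungible token:", triangle_token(triangle))
```

Because each triangle covers terrain that exists nowhere else, no two token IDs are interchangeable – unlike voxels of extracted ore, which are valued purely by volume.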

Figure 5 – Voxelisation and triangulation representing fungible and non-fungible discretisation of the landscape.

REPRESENTATION 

There is a role for architects here: to map and visualise the DAO's non-fungible entities. The architect has the tools to change the representation of landscapes, raising awareness of environmental evolution, generating behavioural change and, over a longer timescale, affecting the environment itself. However, representation alone is not enough to communicate the sheer scale of these landscapes; the project proposes to map the exploited landscapes at the scale of urban environments, and to build interventions in the form of pavilions to raise awareness of them. This communicates the scale of material displacement from exploited landscapes such as mines within urban environments, which are commonly the final destination of material flows, creating conversation and the possibility of engagement between the DAO and humans, who are generally distanced from the reality of that displacement. This act brings the idea of tokenised landscapes to large audiences and allows for human engagement and participation within the DAO as shareholders.

Figure 6 – 1:1 Visual representation of a physical intervention of part of the Kibali Gold mines within the urban environment of Ghent, Belgium. 

The role of the architect engaged by the DAO is to map and visualise the landscape’s assets, fractionalising it using algorithmic visual tools, and using architectural representations that can be minted as non-fungible tokens. The presence of these tokens on social media and through interventions within physical public spaces in cities aims, in the short term, to raise awareness of the vast scale of these landscapes of exploitation, and to change behaviours and allow for engagement and participation within the DAO as token holders. In the long term, this will start to affect the physical conditions of these landscapes themselves, as they no longer rely on selling their fungible, non-renewable material assets. This could lead to rewilding and restoring of vegetation – and potentially to their being traded as carbon sinks.[19] 

Although token holders should preserve the non-fungibility of the landscapes – returning to the argument that nature is ultimately defeated by its utility – the next step would be to remove the human from the system completely, merging the biosphere and the technosphere. There remains the chance of a "51% attack", in which a majority of shareholders could agree to overturn an agreement within the smart contract. To prevent this, the system could opt for full autonomy, which it could achieve over a longer timescale. Garnering capital through non-fungible tokens of its own image could also be a possibility, and would potentially accelerate the timescale of the process.
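In contract terms, the guard against such an attack is simply a stricter decision rule than a bare majority, combined with hard-coded protections that no vote can override. The sketch below illustrates that logic in plain Python rather than actual smart-contract code; the threshold value is an assumption.

```python
SUPERMAJORITY = 0.66  # assumed threshold, deliberately stricter than 51%

def proposal_passes(votes_for: int, total_tokens: int, violates_charter: bool) -> bool:
    """A proposal needs a supermajority, and charter violations fail outright."""
    if violates_charter:
        return False  # the landscape's encoded interests cannot be voted away
    return votes_for / total_tokens > SUPERMAJORITY

print(proposal_passes(520, 1000, violates_charter=False))  # False: 52% is not enough
print(proposal_passes(700, 1000, violates_charter=True))   # False: the charter guard holds
```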

Figure 7 – Leveraging social media to share images of the tokenised landscape.

DISCUSSION  

Migrating Landscapes can be viewed as a concept that traces material flows through the use of algorithmic technologies not typically used within architecture, to explore how landscapes, as non-human agents, can become autonomous. In this dissertation project, the framework of a DAO was used to transform landscapes as media into non-fungible tokens, allowing the landscapes to stop being exploited and to gain agency. What other technologies or tools could architects use to create compelling visual narratives, to engage with audiences and to grant autonomy to non-human agents? Within the context of media ecologies and algorithmic vision, this was one response; considering the plethora of devices and data-gathering techniques that already exist and are still being created, autonomy for non-humans becomes ever more plausible.

The project does not propose a techno-solutionist approach in which we can engineer ourselves out of the wicked problems caused by climate change. Rather, it proposes to use these technologies for their compelling visual, imaginary and narrative qualities, to make migrating landscapes and their non-human agency more relatable. The DAO as a system ultimately acts as a driving force for landscapes to "migrate", becoming new entities and modifying our relationships and attitudes towards them. The system allows these otherwise unseen landscapes both to establish a presence within our media ecologies and to become located within our contemporary consciousness. The changes it would instil are yet to be discovered.

Acknowledgement

This paper reflects on the dissertation project “Migrating Landscapes” by Tanya Mangion that was developed in response to the studio brief “Algorithmic Vision: Architecture and Media Ecologies” of Fieldstation Studio at KU Leuven Faculty of Architecture. The project speculates on landscapes gaining agency through a decentralised autonomous organisation that can interact on behalf of the landscape with human agencies. Through reappropriating technologies for algorithmic vision, landscapes could turn their unique features into non-fungible tokens, allowing them to stop being exploited and gain agency.

Fieldstationstudio.org | https://www.instagram.com/migrating.landscapes/ 

References 

[1] B. Bratton, The Terraforming (Moscow: Strelka, 2019), 19.

[2] P. Crutzen and E. Stoermer, “The ‘Anthropocene’”, Global Change Newsletter, International Geosphere-Biosphere Program Newsletter, no. 41 (May 2000), 17–18; P. Crutzen, “Geology of Mankind”, Nature 415 (2002), 23; J. Zalasiewicz et al., “Are We Now Living in the Anthropocene?”, GSA (Geological Society of America) Today, vol. 18, no. 2 (2008), 4–8.

[3] The origin of this term is not entirely clear, but is discussed at length here: https://www.e-flux.com/journal/75/67125/tentacular-thinking-anthropocene-capitalocene-chthulucene.

[4] J. Davis, A. Moulton, L. Van Sant and B. Williams, “Anthropocene, Capitalocene, … Plantationocene?: A Manifesto for Ecological Justice in an Age of Global Crises”, Geography Compass, vol. 13, issue 5 (2019).

[5] D. Haraway, “Tentacular Thinking: Anthropocene, Capitalocene, Chthulucene”, e-flux Journal, issue 75 (September 2016).

[6] T. J. Demos, Against the Anthropocene: Visual Culture and Environment Today (MIT Press, 2017). 

[7] S. Taffel, Digital Media Ecologies: Entanglements of Content, Code and Hardware (Bloomsbury Academic, 2019). 

[8] W. J. T. Mitchell, Landscape and Power (Chicago: University of Chicago Press, 1994), 15.

[9] L. Young, Machine Landscapes: Architectures of the Post-Anthropocene (London: Wiley, 2019).

[10] See http://www.fieldstationstudio.org/STUDIO/ALGORITHMIC_VISION.

[11] The notion and implementation of a DAO was published by Christoph Jentzsch in the DAO white paper in 2016, see https://blog.slock.it/the-history-of-the-dao-and-lessons-learned-d06740f8cfa5.

[12] These dimensions were discussed during Vertical Atlas – world.orbit at Het Nieuwe Instituut, Rotterdam, in 2020; see https://verticalatlas.hetnieuweinstituut.nl/en/activities/vertical-atlas-worldorbit.

[13] “Resolution Frontier” by Besler and Sons, 2018; see https://www.beslerandsons.com/projects/resolution-frontier.

[14] T. Elsaesser (ed.), Harun Farocki: Working on the Sightlines (Amsterdam: Amsterdam University Press, 2004).

[15] A toolkit for satellite imagery has been compiled by Andrei Bocin Dumitriu, for the Vertical Atlas – world.orbit project, see https://brainmill.wixsite.com/worldorbit.

[16] K. Davies and L. Young, Never Never Lands: Unknown Fields (London: AA Publishing, 2016).

[17] In Extraction Models, together with Weronika Gajda, the exploration of gold as a resource was taken further within the context of New York City’s Federal Reserve; see https://www.instagram.com/extraction.models.

[18] This idea was developed by terra0 in: P. Seidler, P. Kolling, M. Hampshire, “Can an augmented forest own and utilise itself?”, white paper, Berlin University of the Arts, Germany, May 2016, https://terra0.org.

[19] There are several projects that propose NFTs as carbon sinks, see https://carbonsink-nfts.com/ and https://nftree.org.
