Situatedness: A Critical Data Visualisation Practice
03/08/2022
Critical Practice, Data Feminism, Data Visualisation, Decolonisation, Situatedness
Catherine Griffiths

catgriff@umich.edu

Data and its visualisation have been an important part of architectural design practice for many years, from data-driven mapping to building information modelling to computational design techniques, and now through the datasets that drive machine-learning tools. In architectural design research, data-driven practices can imbue projects with a sense of scientific rigour and objectivity, grounding design thinking in real-world environmental phenomena.

More recently, “critical data studies” has emerged as an influential interdisciplinary discourse across social sciences and digital humanities that seeks to counter assumptions made about data by invoking important ethical and socio-political questions. These questions are also pertinent for designers who work with data. Data can no longer be used as a raw and agnostic input to a system of analysis or visualisation without considering the socio-technical system through which it came into being. Critical data studies can expand and deepen the practice of working with data, enabling designers to draw on pertinent ideas in the emerging landscape around data ethics. Data visualisation and data-driven design can be situated in more complex creative and critical assemblages. This article draws on several ideas from critical data studies and explores how they could be incorporated into future design and visualisation projects.

Critical Data Studies

The field of critical data studies addresses data’s ethical, social, legal, economic, cultural, epistemological, political and philosophical conditions, and questions the singularly scientific empiricism of data and its infrastructures. By applying methodologies and insights from critical theory, we can move beyond a status quo narrative of data as advancing a technical, objective and positivist approach to knowledge.

Historical data practices have promoted false notions of neutrality and universality in data collection, which has led to unintentional bias being embedded in data sets. This recognition that data is a political space was explored by Lisa Gitelman in “Raw Data” Is an Oxymoron, in which she argues that data never exists in a raw state, like a natural resource, but is always undergoing a process of interpretation.[1] The rise of big data, harvested from ever more extensive and nuanced facets of people’s lives, signifies a shift in the scale of the power asymmetries and ethical questions at stake. Critical data studies ties this relationship between data and society together.

The field emerged from the work of Kate Crawford and danah boyd, who in 2012 formulated a series of critical provocations in response to the rise of big data as an imperious phenomenon, highlighting its false mythologies.[2] Rob Kitchin’s work has appraised data and data science infrastructures as a new social and cultural territory.[3] Andrew Iliadis and Federica Russo use the theory of assemblages to capture the multitude of ways that already-composed data structures inflect and interact with society.[4] These authors all seek to situate data in a socio-technical framework from which it cannot be abstracted. For them, data is an assemblage, a cultural text, and a power structure that must be available for interdisciplinary interpretation.

Data Settings and Decolonisation

Today, with the increasing access to large data sets and the notion that data can be extracted from almost any phenomena, data has come to embody a sense of agnosticism. Data is easily abstracted from its original context, ported to somewhere else, and used in a different context. Yanni Loukissas is a researcher of digital media and critical data studies who explores concepts of place and locality as a means of critically working with data. He argues that “data have complex attachments to place, which invisibly structure their form and interpretation”.[5] Data’s meaning is tied to the context from which it came. However, the way many people work with data today, especially in an experimental context, assumes that the origin of a data set does not hold meaning and that data’s meaning does not change when it is removed from its original context.

In fact, Loukissas claims, “all data are local”, and the reconsideration of locality is an important critical data tactic.[6] Where did the data come from? Who produced it, when, and why? What instruments were used to collect it? What kind of audience was it intended for? How might these invisible attributes inform its composition and interpretation? Such questions reckon with a data set’s origin story. Loukissas proposes “learning to analyse data settings rather than data sets”.[7] The term “data set” evokes a sense of the discrete, fixed, neutral, and complete, whereas the term “data setting” counters these qualities and awakens us to a sense of place, time, and the nuances of context.
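By way of illustration, these provenance questions can be made concrete in practice by keeping a record of the data setting alongside the data itself. The following minimal Python sketch does this with a small provenance structure; the schema, field names and example values are illustrative assumptions, not a standard or a structure used in the projects discussed below.

```python
from dataclasses import dataclass

@dataclass
class DataSetting:
    """Provenance metadata kept alongside a data set, in the spirit of
    Loukissas' proposal to analyse data settings rather than data sets.
    The field names here are illustrative, not a standard schema."""
    source: str           # who produced the data
    collected_when: str   # time period of collection
    collected_where: str  # place of collection
    instrument: str       # what instrument or method was used
    purpose: str          # why the data was collected
    audience: str         # who the data was originally intended for

# A hypothetical example, loosely modelled on material discussed later
# in this article:
river_watch = DataSetting(
    source="Friends of the Los Angeles River, River Watch programme",
    collected_when="1966-2014",
    collected_where="Los Angeles River watershed",
    instrument="in-situ nutrient sampling at monitoring stations",
    purpose="community monitoring of river health",
    audience="local community and regulators",
)
```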

From a critical data perspective, we can ask why we strive for the digital and its data to be so place-agnostic, a totalising system of norms that erases myriad cultures. The myth of placelessness in data implies that everything can be treated equally by immutable algorithms. Loukissas concludes, “[o]ne reason universalist aspirations for digital media have thrived is that they manifest the assumptions of an encompassing and rarely questioned free market ideology”.[8] We should insist upon data’s locality and its multiple and specific origins to resist such an ideology.

“If left unchallenged, digital universalism could become a new kind of colonialism in which practitioners at the ‘periphery’ are made to conform to the expectations of a dominant technological culture.

If digital universalism continues to gain traction, it may yet become a self-fulfilling prophecy by enforcing its own totalising system of norms.”[9]

Loukissas’ incorporation of place and locality into data practices comes from the legacy of postcolonial thinking. Where Western scientific knowledge systems have shunned those of other cultures, postcolonial studies have sought to illustrate how all knowledge systems are rooted in local- and time-based practices and ideologies. For educators and design practitioners grappling with how to engage in the emerging discourse of decolonisation in pedagogy, data practices and design, Loukissas’ insistence on reclaiming provenance and locality in the way we work with abstraction is one way into this work.

Situated Knowledge and Data Feminism

Feminist critiques of science have also invoked notions of place and locality to question the epistemological objectivity of science. The concept of situated knowledge comes from Donna Haraway’s work to envision a feminist science.[10] Haraway is a scholar of Science and Technology Studies and has written about how feminist critiques of masculinity, objectivity and power can be applied to the production of scientific knowledge to show how knowledge is mediated by and historically grounded in social and material conditions. Situated knowledge can reconcile issues of positionality, subjectivity, and their inherently contestable natures to produce a greater claim to objective knowledge, or what Sandra Harding has defined as “strong objectivity”.[11] Concepts of situatedness and strong objectivity are part of feminist standpoint theory. Patricia Hill Collins further proposes that the intersectional marginalised experiences of women and minorities – black women, for example – offer a distinctive point of view and experience of the world that should serve as a source for new knowledge that is more broadly applicable.[12]

How can we take this quality of situatedness from feminist epistemology and apply it to data practices, specifically the visualisation of data? In their book Data Feminism, Catherine D’Ignazio and Lauren Klein define seven principles for applying feminist thinking to data science. Principle six, for example, asks us to “consider context” when making sense of correlations in data.

“Rather than seeing knowledge artifacts, like datasets, as raw input that can be simply fed into a statistical analysis or data visualisation, a feminist approach insists on connecting data back to the context in which they were produced. This context allows us, as data scientists, to better understand any functional limitations of the data and any associated ethical obligations, as well as how the power and privilege that contributed to their making may be obscuring the truth.”[13]

D’Ignazio and Klein argue that “[r]efusing to acknowledge context is a power play to avoid power. It is a way to assert authoritativeness and mastery without being required to address the complexity of what the data actually represent”.[14] Data feminism is an intersectional approach to data science that counters the drive toward optimisation and convergence in favour of addressing the stakes of intersectional power in data.

Design Practice and Critical Data Visualisation

The visualisation of data is another means of interpreting it. Data visualisation is part of the infrastructure of working with data and should also be open to critical methods. Design and visualisation are processes through which data can be treated with false notions of agnosticism and objectivity, or approached critically, with attention to positionality and context. A critical approach can extend and enrich the data artefacts produced, even when data practices explore creative, speculative, and aesthetics-forward techniques. We should therefore critically reflect on the processes and infrastructures through which we design and aestheticise data.

How can we take the concept of situatedness that comes out of critical data studies and deploy it in creative design practice? What representational strategies support thinking through situatedness as a critical data practice? Could we develop a situated data visualisation practice?

The following projects approach these questions through design research, digital humanities and critical computational approaches. They are experiments that demonstrate techniques for thinking critically about data and how that critique can be incorporated into data visualisation. The work also expands from the visualisation of data toward the visualisation of the computational processes and software infrastructures that engineer visualisations. There is also a shift from exploring situatedness as a notion of physical territory toward a notion of socio-political situatedness. The works all take the form of short films, animations and simulations.

Alluvium

Figure 1 – A situating shot of the Gower Gulch site, to capture both scales of assessment: wide-angle photography shows the geomorphological consequences of flood water on the landscape, whilst macro photography details the granular role of sedimentation.

Cinematic data visualisation is a practice of visually representing data that combines cinematic aesthetics, including an awareness of photography’s traditional concerns of framing, motion and focus, with contemporary virtual cinematography’s techniques of camera-matching and computer-generated graphics. This process situates data in a geographic and climatic environment, retaining the data’s relationship with its source of origin and the relevance that relationship holds for its meaning.

As a cinematic data visualisation, Alluvium presents the results of a geological study on the impact of diverted flood waters on a sediment channel in Death Valley, California. The scenes take their starting point from Noah Snyder and Lisa Kammer’s 2008 study.[15] Gower Gulch, a 1941 diversion of a desert wash, offers an expedited view of geological changes that would normally take thousands of years to unfold but which have evolved at this site within several decades, owing to the strength of the flash floods and the conditions of the terrain.

Gower Gulch provides a unique opportunity to see how a river responds to an extreme change in water and sediment flow rates, presenting effects that could mimic the impact of climate change on river flooding and discharge. The wash was originally diverted to prevent further flooding and damage to a village downstream; today, it presents us with a microcosm of geological activity. The research paper presents data as historical water flow that can only be measured and perceived retrospectively through the evidence of erosion and sediment deposition at the site.

Figure 2 – A situated visualisation combining physical cinematography and virtual cinematography to show a particle simulation of flood waters. 

Alluvium’s scenes are a hybrid composition of film and digitally produced simulations that use the technique of camera-matching. The work visualises the geomorphological consequences of water beyond human-scale perception. A particle animation was developed using accurate topographic models to simulate water discharge over a significant period. Alluvium compresses this timeframe, providing a sense of a geological scale of time, and places the representation and simulation of data in-situ, in its original environment.
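The project’s simulation itself is not reproduced here, but the underlying technique can be sketched: particles seeded on a heightfield are advected downhill along the terrain’s negative gradient, and their velocities can later be mapped to colour, as in Figure 3. The following Python sketch uses a synthetic terrain and invented parameters, purely to illustrate the idea, not the project’s actual data or solver.

```python
import numpy as np

# Toy sketch of particle advection over a heightfield, in the spirit of
# Alluvium's flood-water simulation. Terrain and parameters are invented.
rng = np.random.default_rng(0)
H, W = 200, 200
y, x = np.mgrid[0:H, 0:W]
terrain = 0.01 * y + 2.0 * np.sin(x / 15.0) * np.cos(y / 20.0)  # synthetic slope

# The slope (negative gradient) drives the particles downhill.
gy, gx = np.gradient(terrain)

pos = rng.uniform(0, [H - 1, W - 1], size=(5000, 2))  # particle (row, col)
vel = np.zeros_like(pos)

for step in range(500):
    r = pos[:, 0].astype(int).clip(0, H - 1)
    c = pos[:, 1].astype(int).clip(0, W - 1)
    accel = np.stack([-gy[r, c], -gx[r, c]], axis=1)  # downhill pull
    vel = 0.9 * vel + 0.5 * accel                     # damped velocity update
    pos = np.clip(pos + vel, 0, [H - 1, W - 1])

speed = np.linalg.norm(vel, axis=1)  # could be mapped to colouration
```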

In Alluvium, data is rendered more accessible and palpable through the relationship between the computationally-produced simulation of data and its original provenance. The data’s situatedness takes place through the way it is embedded into the physical landscape, its place of origin, and how it navigates its source’s nuanced textures and spatial composition.

The hybridised cinematic style that is produced can be deconstructed into elements of narrative editing, place, motion, framing, depth of field and other lens-based effects. The juxtaposition of the virtual and the real through a cinematic medium supports a recontextualisation of how data can be visualised and how an audience can interpret that visualisation. In this case, it is about geographic situatedness, retaining the sense of physical and material qualities of place, and the particular nuances of the historical and climatic environment.

Figure 3 – The velocity of the particles is mapped to their colouration, visualising water’s characteristic force, directionality and turbulence. The simulation is matched to a particular site of undercut erosion, so that the particles appear to carve the physical terrain.

Death Valley National Park, situated in the Mojave Desert in the United States, is a place of extreme conditions. It has the highest temperature (57° Celsius) ever recorded in North America and the continent’s lowest altitude (86 metres below sea level). It also receives only 3.8 centimetres of rainfall annually, making it North America’s driest place. Despite these extremes, the landscape has an intrinsic relationship with water. The territorial context is expressed through the cinematic whilst also connecting the abstraction of data to its place of origin.

For cinematic data visualisation, these elements are applied to the presentation of data, augmenting it into a more sensual narrative that loops back to its provenance. As a situated practice, cinematic data visualisation foregrounds a relationship with space and place: the connection between data and the context from which it was derived is retained, rather than the data being extracted, abstracted, and agnostically transferred to a different context in which site-specific meaning can be lost.

LA River Nutrient Visualization

Figure 4 – Reconstruction of the site of study, the Los Angeles River watershed from digital elevation data, combined with nutrient data from river monitoring sites.

Another project in the same series, the LA River Nutrient Visualization, considers how incorporating cinematic qualities into data visualisation can support a sense of positionality and perspective amongst heterogeneous data sets. This can be used to undermine data’s supposed neutrality and promote an awareness that data carries the concerns and stakes of different groups of people. Visualising data’s sense of positionality and perspective is another tactic for producing situatedness as a critical data visualisation practice. Whilst the water quality data used in this project appeared scientifically uniform, it was collected by different groups: locally organised communities versus state institutions. The differences in why the data was collected, and by whom, are significant, and the project set out to incorporate them into its representational strategy.

This visualisation analyses nutrient levels, specifically nitrogen and phosphorus, in the water of the Los Angeles River, which testify to pollution levels and portray the river’s overall health. Analysed spatially and animated over time, the data visualisation aims to provide an overview of the available public data, its geographic, seasonal and annual scope, and its limitations. Three different types of data were used: surface water quality data from state and national environmental organisations, such as the Environmental Protection Agency and the California Water Science Center; local community-organised groups, such as the River Watch programme by Friends of the Los Angeles River and citizen science group Science Land’s E-CLAW project; and national portals for remotely-sensed data of the Earth’s surface, such as the United States Geological Survey.

The water quality data covers a nearly 50-year period from 1966 to 2014, collected from 39 monitoring stations distributed from the river’s source to its mouth, including several tributaries. Analysis showed changes in the river’s health relative to health department standards, with areas of significantly higher concentrations of nutrients that consistently exceeded Water Quality Objectives.
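As an illustration of this kind of exceedance analysis, the following Python sketch computes, per monitoring station, the share of samples above a nutrient objective. The file name, column names and threshold are hypothetical placeholders, not values from the project.

```python
import pandas as pd

# Hypothetical sketch of the exceedance analysis described above.
# "la_river_nutrients.csv", its columns and the 1.0 mg/L objective are
# illustrative assumptions, not values from the study.
df = pd.read_csv("la_river_nutrients.csv")  # station, date, nitrogen_mg_l

NITROGEN_OBJECTIVE = 1.0  # placeholder Water Quality Objective

exceedance = (
    df.assign(exceeds=df["nitrogen_mg_l"] > NITROGEN_OBJECTIVE)
      .groupby("station")["exceeds"]
      .mean()                      # share of samples above the objective
      .sort_values(ascending=False)
)
print(exceedance.head(10))  # stations that most consistently exceed it
```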

Figure 5 – Virtual cameras are post-processed to add lens-based effects such as shallow depth of field and atmospheric lighting and shadows. A low, third-person perspective is used to position the viewer with the data and its urban context.

The water quality data is organised spatially using a digital elevation model (DEM) of the river’s watershed to create a geo-referenced 3D terrain model that can be cross-referenced with any GPS-associated database. A DEM is a way of representing remotely-captured elevation, geophysical, biochemical, and environmental data about the Earth’s surface. The data itself is obtained by various types of cameras and sensors attached to satellites, aeroplanes and drones as they pass over the Earth.
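Cross-referencing in this sense can be sketched in a few lines. Assuming a GeoTIFF DEM in the same coordinate system as the GPS points (the file name and coordinates below are placeholders), the rasterio library can sample the terrain value at each monitoring station:

```python
import rasterio

# Minimal sketch of cross-referencing a GPS-associated database with a
# DEM. Assumes the GeoTIFF and the points share a coordinate system;
# the file name and station coordinates are placeholders.
stations = [(-118.2437, 34.0522), (-118.2000, 34.1000)]  # (lon, lat)

with rasterio.open("la_watershed_dem.tif") as dem:
    for (lon, lat), value in zip(stations, dem.sample(stations)):
        print(f"station at ({lon:.4f}, {lat:.4f}) sits at {value[0]:.1f} m")
```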

Analysis of the water data showed that the state- and national-organised data sets provided a narrow and inconsistent picture of nutrient levels in the river. Comparatively, the two community-organised data sets offered a broader and more consistent approach to data collection. The meaning that emerged in this comparison of three different data sets, how they were collected, and who collected them ultimately informed the meaning of the project, which was necessary for a critical data visualisation.

Visually, the data was arranged and animated within the 3D terrain model of the river’s watershed and presented as a voxel urban landscape. Narrative scenes were created by animating slow virtual camera pans within the landscape to visualise the data from a low, more human, third-person point of view. These datascapes were post-processed with cinematic effects: a simulated shallow depth of field, ambient “dusk-like” lighting, and shadows. Additionally, the computer-generated scenes were juxtaposed with physical camera shots of the actual water monitoring sites, captured by a commercial drone. Unlike in Alluvium, the two types of camera are not digitally matched. The digital scenes locate and frame the viewer within the data landscape, whereas the physical photography provides a local geographic reference point for the abstracted data. This also gives the data a sense of scale and invites the audience to consider each data collection site in relation to its local neighbourhood. The representational style of the work creates a cinematic tempo and mood, informing a more narrative presentation of abstract numerical data.
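A voxel datascape of this kind can be approximated with standard tools. The following Python sketch extrudes a synthetic grid of readings into voxel columns and frames them from a low viewing angle; it stands in for, rather than reproduces, the project’s renderings.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative voxel datascape: readings extruded as voxel columns over
# a coarse grid. The data here is synthetic, not the project's.
rng = np.random.default_rng(1)
grid = rng.uniform(0, 1, size=(12, 12))   # stand-in nutrient surface
heights = np.ceil(grid * 8).astype(int)   # column height per cell

filled = np.zeros((12, 12, 8), dtype=bool)
for (i, j), h in np.ndenumerate(heights):
    filled[i, j, :h] = True

ax = plt.figure().add_subplot(projection="3d")
ax.voxels(filled, edgecolor="k", linewidth=0.2)
ax.view_init(elev=15, azim=-60)           # low, third-person angle
plt.show()
```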

Figure 6 – Drone-captured aerial video of each data site creates an in-situ vignette of the site’s local context and puts the data back into communication with its local neighbourhood. This also speaks to the visualisation’s findings that community organisation and citizen science was a more effective means of data collection and should be recognised in the future redevelopment of the LA River.

In this cinematic data visualisation, situatedness is engaged through the particular framing and points of view established in the scenes and through the juxtaposition of cinematography of the actual data sites. Here, place is social; it is about local context and community rather than a solely geographical sense of place. Cinematic aesthetics convey the “data setting” through a local and social epistemic lens, in contrast to the implied frameless and positionless view with which state-organised data is collected, including remotely-sensed data.

All the water data consisted of scientific measurements of nitrogen and phosphorus levels in the river. Numerically, the data is uniform, but the fact that different stakeholders collected it, with different motivations and needs, affects its interpretation. Whether data has been collected by local communities or by state institutions informs its epistemological status concerning agency, motivation, and environmental care practices.

Context is important to the meaning that the data holds, and the visualisation strategy seeks to convey a way to think about social and political equity and asymmetry in data work. The idea of inserting perspective and positionality into data is an important one. It is unusual to think of remotely-sensed data or water quality data as having positionality or a perspective. Many instruments of visualisation present their artefacts as disembodied. Remotely-sensed data is usually presented as a continuous view from everywhere and nowhere simultaneously. However, feminist thinking’s conception of situated knowledge asks us to remember positionality and perspective to counter the sense of framelessness in the traditional tools of data collection and analysis.

Cinema for Robots

Figure 7 – A point cloud model of the site underneath the Colorado Street Bridge in Pasadena, CA, showing a single camera position from the original video capture.

Cinema for Robots was the beginning of an exploration into the system that visualises data, rather than the data visualisation itself being the outcome. The project presents a technique for visualising computational process, instead of presenting data as only a fixed and retrospective artefact. It critically investigates the technique of photogrammetry, using design to reflexively consider positionality in the production of a point cloud. In this case, the quality of situatedness is created by countering the otherwise frameless point cloud visualisation with animated recordings of the body’s position behind the camera that produced the data.

Photogrammetry is a technique in which a 3D model is computationally generated from a series of digital photographs of a space (or object). The photographs are taken systematically from many different perspectives and overlapping at the edges, as though mapping all surfaces and angles of the space. From this set of images, an algorithm can compute an accurate model of the space represented in the images, producing a point cloud. In a point cloud, every point has a 3D coordinate that relates to the spatial organisation of the original space. Each point also contains colour data from the photographs, similarly to pixels, so the point cloud also has a photographic resemblance. In this project, the point cloud is a model of a site underneath the Colorado Street Bridge in Pasadena, California. It shows a mixture of overgrown bushes and large engineered arches underneath the bridge.
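In code, a point cloud is exactly this pairing of coordinates and colours. As a minimal sketch, assuming a .ply file exported from a photogrammetry tool (the file name here is a placeholder), the open3d library exposes both arrays:

```python
import numpy as np
import open3d as o3d

# Minimal sketch of inspecting a photogrammetric point cloud. Assumes a
# .ply export from a photogrammetry tool; the file name is a placeholder.
pcd = o3d.io.read_point_cloud("colorado_street_bridge.ply")

points = np.asarray(pcd.points)  # (N, 3) spatial coordinates
colors = np.asarray(pcd.colors)  # (N, 3) RGB carried over from the photos

print(f"{len(points)} points, bounding box "
      f"{points.min(axis=0).round(1)} to {points.max(axis=0).round(1)}")
o3d.visualization.draw_geometries([pcd])  # interactive view
```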

Figure 8 – A perspective of the bridge looking upwards with two camera positions that animate upwards in sync with the video.

The image set was created from a video recording of the site from which still images were extracted. This image set was used as the input for the photogrammetry algorithm that produced the point cloud of the site. The original video recordings were then inserted back into the point cloud model, and their camera paths were animated to create a reflexive loop between the process of data collection and the data artefact it produced.
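The first step of that loop, extracting still images from the video, can be sketched with OpenCV; the video path, sampling rate and output directory below are placeholders:

```python
import cv2

# Sketch of extracting still frames from a video as photogrammetry
# input, as described above. Assumes a frames/ directory exists; the
# path and sampling rate are placeholders.
cap = cv2.VideoCapture("bridge_walkthrough.mp4")
frame_idx, saved = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 15 == 0:  # keep every 15th frame for edge overlap
        cv2.imwrite(f"frames/frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"extracted {saved} images for the photogrammetry algorithm")
```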

With photogrammetry, data, computation, and representation are all entangled. Similarly to remotely-sensed data sets, the point cloud model expresses a framelessness, a perspective of space that appears to achieve, as Haraway puts it, “the god trick of seeing everything from nowhere”.[16] By reverse-engineering the camera positions and reinserting them into the point cloud of spatial data points, a reflexive computational connection is made between data that appears perspectiveless and the human body that produced it. In the series of animations comprising the project, the focus is on the gap between the capturing of data and the computational process that visualises it. The project also juxtaposes cinematic and computational aesthetics to explore the emerging gaze of new technologies.

Figure 9 – Three camera positions are visible and animated simultaneously to show the different positions of the body capturing the video that was the input data for the point cloud.

The project is presented as a series of animations that embody and mediate a critical reflection on computational process. In one animation, the motion of a hand-held camera creates a particular aesthetic that further accentuates the body behind the camera that created the image data set. It is not a smooth or seamless movement but unsteady and unrefined. This bodily camera movement is then passed on to the point cloud model, rupturing its seamlessness. The technique is a way to reinsert the human body and a notion of positionality into the closed-loop of the computational process. In attempting to visualise the process that produces the outcome, reflexivity allows one to consider other possible outcomes, framings, and positions. The animations experiment with a form of situated computational visualisation.

Automata I + II

Figure 10 – A satellite image of the Meeting of Waters in the Amazon region in Brazil. The original image shows the confluence of two rivers that flow together but do not mix. Pixel operations driven by agents change the composition of the landscape.

This work took the form of a series of simulations that critically explored a “computer vision code library” in an open-ended way. The simulations continued an investigation into computational visualisation rather than data visualisation. The process sought to reverse-engineer machine vision software – an increasingly politically contentious technology – and critically reflect on its internal functionality. Here, source code is situated within a social and political culture rather than a neutral and technical culture. Instead of using a code library instrumentally to perform a task, the approach involves critically reading source code as a cultural text and developing reflexive visualisations that explore its functions critically.

Many of the tools we use in design and visualisation were developed in the field of computer vision, which engineers how computers see and make sense of the world, including through the camera-tracking and photogrammetry discussed previously. In Automata I, the OpenCV library (an open-source computer vision code library) was used. Computer vision comprises many functions, layered on top of each other, that act as matrix operations filtering and analysing images in different ways to make them interpretable by algorithms. Well-known filters are “blob detection” and “background subtraction”. Simply changing a colour image to greyscale is also an important function within computer vision.
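These functions can be invoked in a few lines of Python with the OpenCV library. The sketch below layers the three filters just named onto a placeholder input image; note that background subtraction only becomes meaningful across video frames, so here it merely initialises its model.

```python
import cv2

# Sketch of the layered computer-vision functions named above, using
# OpenCV. The input image path is a placeholder.
image = cv2.imread("meeting_of_waters.jpg")

# Greyscale: an important function within computer vision.
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Blob detection: groups pixels into regions an algorithm can track.
detector = cv2.SimpleBlobDetector_create()
keypoints = detector.detect(grey)

# Background subtraction: separates moving foreground from background
# (meaningful on video; a single image just initialises the model).
subtractor = cv2.createBackgroundSubtractorMOG2()
mask = subtractor.apply(image)

print(f"{len(keypoints)} blobs detected")
```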

Figure 11 – A greyscale filter shows the algorithmic view of the same landscape and computational data.

Layering these filters onto input images helps us to understand the difference between how humans see and interpret the world and how an algorithm is programmed to see and interpret it differently. Reading the code makes it possible to understand the pixel logic at play in the production of a filter, in which each pixel in an image computes its value from the pixel values around it, producing various matrices that filter information in the image. The well-known “cellular automaton” algorithm applies a similar logic, as does “Langton’s ant”.
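The shared pixel logic is easy to state in code. The following sketch implements Conway’s Game of Life, the best-known cellular automaton, in which every cell recomputes its value each step from the eight values around it; the grid here is random and purely illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

# Conway's Game of Life as an instance of the pixel logic described
# above: each cell recomputes its value from its neighbours' values.
rng = np.random.default_rng(2)
cells = rng.integers(0, 2, size=(128, 128))

kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])  # counts the eight neighbours

for step in range(100):
    neighbours = convolve2d(cells, kernel, mode="same", boundary="wrap")
    # A cell is born with exactly 3 neighbours, survives with 2 or 3.
    cells = ((neighbours == 3) | ((cells == 1) & (neighbours == 2))).astype(int)
```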

A series of simulations was created using a satellite image of a site in the Amazon called the Meeting of Waters, the confluence of two rivers: the dark-coloured Rio Negro and the sandy-coloured Amazon River. The two rivers differ in speed, temperature and sediment load, so they do not merge but flow alongside each other in the same channel, visibly demarcated by their different colours.

The simulations were created by writing a new set of rules, or pixel logics, to compute the image, which had the effect of “repatterning” it. Analogously, this also appeared to “terraform” the river landscape into a new composition. The simulations switch between the image that the algorithm “sees”, including the information it uses to compute and filter the image, and the image that we see as humans, including the cultural, social and environmental information we use to make sense of it. The visualisation tries to explore the notion of machine vision as a “hyperimage”, an image that is made up of different layers of images that each analyse patterns and relationships between pixels.
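A rule set of this repatterning kind can be sketched as an agent-driven pixel operation. The rules below are invented for illustration, in the manner of Langton’s ant, and operate on a synthetic stand-in for the two-river image rather than the actual satellite data.

```python
import numpy as np

# Sketch of an agent-driven pixel operation in the spirit of the
# Automata I "repatterning". The rule set is invented: a Langton's-ant-
# style agent walks the image, swapping pixels between the two rivers'
# tones as it turns. The image is a synthetic stand-in.
image = np.zeros((256, 256), dtype=np.uint8)
image[:, 128:] = 255             # two unmixed "rivers", dark and light

r, c, heading = 128, 128, 0      # start at the confluence; 0=N,1=E,2=S,3=W
moves = [(-1, 0), (0, 1), (1, 0), (0, -1)]

for step in range(100_000):
    if image[r, c] == 0:         # dark water: turn right, lighten pixel
        heading = (heading + 1) % 4
        image[r, c] = 255
    else:                        # light water: turn left, darken pixel
        heading = (heading - 1) % 4
        image[r, c] = 0
    dr, dc = moves[heading]
    r, c = (r + dr) % 256, (c + dc) % 256
```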

Automata II is a series of simulations that continues the investigation of machine vision techniques established in Automata I. This iteration looks further into how matrices and image analysis combine to support surveillance systems operating on video images. By applying pixel rule sets similar to those used in Automata I, the visualisation shows how an algorithm can detect motion in a video, separating figures in the foreground from the background, a capacity on which surveillance depends.

Figure 12 – Using the OpenCV code library to detect motion, a function in surveillance systems. Using a video of a chameleon, the analysis is based on similar pixel operations to Automata I.

In another visualisation, a video of a chameleon works analogously to explore how the socio-political function of surveillance emerges from the mathematical abstraction of pixel operations. Chameleons are well known for their ability to camouflage themselves by blending into their environment (and in many cultures are associated with wisdom). Here the algorithm is programmed to print the pixels where it detects movement in the video and to remain black where there is none. In the visualisation, the chameleon appears to reveal itself to the surveillance of the algorithm through its motion and to camouflage itself from the algorithm through its stillness. An aesthetic contrast is created between an ancient animal and an innovative technology; yet through its simple embodiment of stillness, the chameleon resists the algorithm’s logic of separating foreground from background.
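The logic described, showing pixels only where movement is detected and black elsewhere, can be sketched with simple frame differencing in OpenCV; the video path and threshold are placeholders, and the project’s actual pixel rules may differ.

```python
import cv2
import numpy as np

# Sketch of the chameleon visualisation's logic: pixels are shown only
# where movement is detected between frames, and stay black where there
# is none. The video path and threshold are placeholders.
cap = cv2.VideoCapture("chameleon.mp4")
ok, prev = cap.read()

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
    motion = (diff > 25).astype(np.uint8)   # 1 where movement detected
    revealed = frame * motion[:, :, None]   # colour where moving, else black
    cv2.imshow("revealed by motion", revealed)
    if cv2.waitKey(30) & 0xFF == 27:        # Esc to quit
        break
    prev = frame

cap.release()
cv2.destroyAllWindows()
```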

Figure 13 – The algorithm was reconfigured to only reveal the pixel operations’ understanding of movement. The chameleon disguises or reveals itself to the surveillance algorithm through its motion.

The work explores the coded gaze of a surveillance camera and how machine vision is situated in society, politically and apolitically, in relation to the peculiarly abstract pixel logics that drive it. Here, visualisation is a reverse-engineering of that coded gaze in order to politically situate source code and code libraries for social and cultural interpretation.

Final Thoughts

Applying critical theory to data practices, including data-driven design and data visualisation, provides a way to interrupt the adherence to the neutral-objective narrative. It offers a way to circulate data practices more justly back into the social, political, ethical, economic, legal and philosophical domains from which they have always derived. The visual techniques presented here, and the ideas about what form a critical data visualisation practice could take, were neither developed in tandem nor sequentially, but by weaving in and out of project developments, exhibition presentations, and writing opportunities over time. Thus, they are not offered as seamless examples but as entry points and options for taking a critical approach to working with data in design. The proposition of situatedness as a territorial, social, and political quality that emerges from decolonial and feminist epistemologies is one pathway in this work. The field of critical data studies, whilst still incipient, is developing a rich discourse that is opportune and constructive for designers, although not immediately associated with visual practice. Situatedness as a critical data visualisation practice has the potential to further engage the forms of technological development interesting to designers with the ethical debates and mobilisations in society today.

References

[1] L. Gitelman, “Raw Data” is an Oxymoron (Cambridge, MA: MIT Press, 2013).

[2] d. boyd and K. Crawford, “Critical Questions for Big Data: provocations for a cultural, technological, and scholarly phenomenon”, Information, Communication & Society 15 5 (2012), 662–79.

[3] R. Kitchin, The Data Revolution: big data, open data, data infrastructures & their consequences (Los Angeles, CA: Sage, 2014).

[4] A. Iliadis and F. Russo, “Critical Data Studies: an introduction”, Big Data & Society 3 2 (2016).

[5] Y. A. Loukissas, All Data are Local: thinking critically in a data-driven world (Cambridge, MA: MIT Press, 2019), 3.

[6] Ibid., 23.

[7] Ibid., 2.

[8] Ibid., 10.

[9] Ibid., 10.

[10] D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.

[11] S. Harding, “‘Strong objectivity’: A response to the new objectivity question”, Synthese 104 (1995), 331–349.

[12] P. H. Collins, Black Feminist Thought: consciousness and the politics of empowerment (London, UK: HarperCollins, 1990).

[13] C. D’Ignazio and L. F. Klein, Data Feminism (Cambridge, MA: MIT Press, 2020), 152.

[14] Ibid., 162.

[15] N. P. Snyder and L. L. Kammer, “Dynamic adjustments in channel width in response to a forced diversion: Gower Gulch, Death Valley National Park, California”, Geology 36 2 (2008), 187–190.

[16] D. Haraway, “Situated Knowledges: the science question in feminism and the privilege of partial perspective”, Feminist Studies 14 3 (1988), 575–99.
