Algorithmic Neighborhoods

The mapping of space and/as algorithmic redlining

wells lucas santo
Jan 6, 2023

This essay was written as part of a Digital Media Theory course that I took between September and December 2022 at the University of Michigan, taught by Prof. Lisa Nakamura. The goal of the essay was to write about the digital, while drawing primarily from course readings. The essay is reproduced below without any edits (save for the addition of the YouTube embed for The Grid). I hope to upload a PDF version of this essay once my website is back up.

1. Setting the space

Credit: Daft Punk / Walt Disney Records

The Grid: a digital frontier. I tried to picture clusters of information as they moved through the computer. What do they look like? Ships? Motorcycles? Were the circuits like freeways? I kept dreaming of a world I thought I’d never see. And then one day… I got in.

— Kevin Flynn (Jeff Bridges), Tron: Legacy

Two years before William Gibson’s development of the term “cyberspace” in Neuromancer, ten years before Neal Stephenson’s coining of the term “metaverse” in Snow Crash, and twenty-one years before a real, navigable virtual space was finally realized in Linden Lab’s Second Life, the 1982 cult classic film Tron appeared in theaters with the vision of a world inside the computer. The film follows protagonist Kevin Flynn, a computer programmer and arcade owner, as he is forcibly digitized and transported into the corporate mainframe system known as the Master Control Program (MCP). Like many of the works of science fiction that would follow it, Tron and “this notion of cyberspace was inspired by the 1980s Vancouver arcade scene and visions of a dystopian techno-Orientalist future” (Chun 2021); however, what is distinctive and visionary about Tron is that unlike the other versions of cyberspace, the digital space presented in the film was not a fully separate world of its own constructed of software, but rather a literal representation of the hardware of the MCP, with Flynn traveling through the actual circuits of the system, interfacing with the computer memory and input/output components along the way.

Like Tron, I imagine digital space anew in this essay by focusing not on a simulated virtual reality, but on the literal mathematical feature space in which algorithms operate and perform their functions and calculations, which I call algorithmic space. It is in this space that I hope to extend Wendy Chun’s conception of digital neighborhoods, developing in section 2 the idea of algorithmic neighborhoods, or clusters of data points and vectors that have been spatially grouped together by contemporary machine learning algorithms. I argue that we are mapped into this algorithmic space as its residents through a reduction into data points within larger data sets, broken down into our constituent features in a way reminiscent of Gilles Deleuze’s “dividuals”, carrying with us the systems of oppression that exist in the analog world. In section 3, I propose that this particular mapping of neighborhoods operates as, and is the consequence of, a phenomenon I call algorithmic redlining, mirroring the systemically segregating nature of redlining in geographic space rooted in racial capitalism. Finally, in section 4, I use this framing of algorithmic neighborhoods and redlining to think about an example of non-networked, algorithmic relationality in the space of generative AI. My goal is to provide an alternate way of envisioning algorithmic bias and how discrimination can be systematically coded into algorithmic systems, by developing the ontology of the algorithmic neighborhood.

2. Mapping neighborhoods in algorithmic space

In Discriminating Data, Wendy Chun (2021) develops the idea of the digital neighborhood to describe logical groupings of Internet users based on similarity. Chun gives the example of Cambridge Analytica, which used the five-factor OCEAN model of personality in order to correlate Facebook users’ likes with identity markers and group them into “categories such as ‘gun-toting white men’ to better target and transform” them. While one of the specific goals of Cambridge Analytica was to target groups of individuals in order to influence their beliefs and behaviors surrounding the 2016 US presidential election, these neighborhoods could be used more generally to serve recommended content to users, such as on Netflix, YouTube, or Amazon, based on the idea that similar users might want to see or purchase similar things. It is for this reason that, as Chun notes, “by the early twenty-first century, the imaginary of the Internet had moved decisively from the otherworldly expanse of cyberspace to the domesticated landscape of well-policed, gated ‘neighborhoods’” — put simply, neighborhoods allow for better control over users, to aid digital platforms in satisfactorily delivering their content while generating additional profits. Such is the familiar enterprise of platform capitalism.

Though the Internet has moved toward a model of neighborhoods, Chun argues that “U.S. settler colonialism and enclosure underlay the visions of both neighborhoods and cyberspace.” Indeed, core to the idea of neighborhoods is its entanglement with systems of oppression such as colonialism, racial capitalism, and racial segregation. To develop her analysis of neighborhoods, Chun focuses on the mid-twentieth century history of liberalism that has led to segregation in public housing projects in the United States and how the concept of “homophily”, or the idea that “similarity breeds connection”, has worked to form both geographic and digital neighborhoods. While homophily in geographic neighborhoods has been fundamental for residential segregation in a way that “naturalizes discrimination” through the grouping of alike individuals on the basis of race, Chun shows how homophily in the digital inherits the same characteristics and operates as a “control mechanism” that works to “foster the breakdown of seemingly open and boundless social networks into a series of poorly gated communities.” Here, it is important to note that homophily, for Chun, is distinctly rooted in network science, as is her formulation of digital neighborhoods. In other words, both homophily and digital neighborhoods are described as fundamentally networked phenomena, in which ties and closures exist between “like” individuals who act as nodes in a network. In contrast, I propose the existence of another type of digital neighborhood that also shares the structuring characteristic of homophily, one that is formed not through locality in a network, but through locality in the mathematical feature space of a machine learning algorithm, which I refer to as algorithmic space.

In order to discuss algorithmic space, it is necessary to first understand the importance of quantifiable features in machine learning. Algorithms such as neural networks cannot directly process analog information such as photographs, sound, or film in the way that humans do. That information must first be quantified into discrete units; this conversion is what Lev Manovich (2001) calls “digitization,” which he notes “involves inevitable loss of information” due to the sampling of a continuous range of values down to a finite, discretized set. The digitization, or datafication, of people is thus necessarily a reductive process. But it is not enough to digitize analog information when it comes to machine learning algorithms. A further step is needed in order to represent objects in terms of their constituent “features”; for example, a person could be described in terms of their height, age, weight, gender, and so on, with the selection of which features to use differing based on the task at hand. It is through this process of selecting features and assigning them quantifiable values that analog objects–people included–are made legible to algorithms. The algorithmic feature space, then, is the multi-dimensional space in which each dimension spans the possible values of one feature in the dataset. (For example, in a dataset that accounts for twelve distinct features, there exists a twelve-dimensional space, with each data point consisting of twelve values that map it to a particular location in that space.) Internet users can be imagined as the residents of such an algorithmic space as data points in a larger data set, each broken down into their constituent features in a scheme that is reminiscent of Gilles Deleuze’s use of the term “dividuals.” As Hu (2022) describes, “data scientists consider ‘you’ to be plural because a user is really a unique but ever-changing collection of data” — indeed, “you” are but a collection of discrete feature values to the algorithm.
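
To make this mapping concrete, here is a minimal sketch in Python, with entirely made-up feature names and values, of how a person might be reduced to a feature vector, that is, to a single point in a feature space:

```python
import numpy as np

# A hypothetical person, described only by a handful of quantified features.
# The feature names and values are illustrative, not drawn from any real dataset.
person = {
    "age": 34,          # years
    "height_cm": 170,   # centimeters
    "income_usd": 42000,
    "zip_code": 94601,  # an ostensibly "neutral" feature that can proxy for race
    "num_followers": 812,
}

# To the algorithm, the person is just this ordered list of numbers:
# a single point in a five-dimensional feature space.
feature_vector = np.array(list(person.values()), dtype=float)
print(feature_vector)  # prints the five values as one numeric vector
```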

Machine learning engineers very frequently conceptualize machine learning tasks in terms of the feature space, drawing on notions of distance to denote similarity or difference between any two data points. That is, any given pair of data points with similar values for each of their features would be mapped onto nearby locations in that space. Even without networked ties, there is a sense of locality between these data points, which can represent anything from an image of a person’s face to the composition of their online social media profile. In other words, we are mapped onto this space with locality to others, not because we have networked ties to them, but because of similarities in features. Neighborhoods of data points therefore form as a result of these similarities.
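
A rough illustration of this locality, again with made-up numbers standing in for standardized feature values: the “neighborliness” of two users is literally a distance computation.

```python
import numpy as np

# Three hypothetical users, each already reduced to the same three
# (standardized) features. The numbers are invented for illustration.
alice  = np.array([0.2, 1.1, -0.5])
bob    = np.array([0.3, 0.9, -0.4])   # feature values similar to alice's
carmen = np.array([-1.8, -0.7, 2.2])  # very different feature values

def distance(a, b):
    """Euclidean distance between two points in feature space."""
    return np.linalg.norm(a - b)

print(distance(alice, bob))     # small: alice and bob are "neighbors"
print(distance(alice, carmen))  # large: carmen lives in a different region of the space
```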

But while features in the past were manually selected by machine learning engineers, in a process fittingly called feature engineering, contemporary neural network algorithms are able to select the “best” features themselves for making predictions or decisions, in a process called feature learning. In this process, certain features are omitted, while others are transformed in order to better separate data points into clusters, or neighborhoods. Figure 1 visualizes three different possible arrangements of data points given two features to be learned (and therefore a two-dimensional space). As seen in Figure 1, the ideal outcome of feature learning is to select features that segregate data points of different classes into distinct neighborhoods (a brief code sketch of this idea follows Figure 1).

Figure 1. Three graphs displaying possible results of feature learning, the goal of which is to select features that allow for a clean grouping of ‘like’ data points. The graph on the left is the “ideal” situation because all of the data points in each of the two classes (colored as red and blue) are in separate clusters, or neighborhoods. The middle graph shows a more “realistic” situation where the clusters are mostly separated in space, with a very minor mixing of data points from different classes. The graph on the right is a “poor” situation because the learned features do not adequately segregate data points of different classes. Figure credits go to Tom Grigg and his post “Concept Learning and Feature Spaces” in Towards Data Science.
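
The following sketch illustrates the idea behind Figure 1 on synthetic data; linear discriminant analysis is used here only as a simple, classical stand-in for the kind of feature transformation a neural network would learn automatically.

```python
# A minimal sketch of the goal depicted in Figure 1: starting from raw features,
# learn a representation in which the two classes fall into separate clusters.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic data: 200 points, 10 raw features, 2 classes.
X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           n_classes=2, random_state=0)

# "Learn" a single feature that best separates the two classes.
lda = LinearDiscriminantAnalysis(n_components=1)
X_learned = lda.fit_transform(X, y)

# In the learned feature, the two classes occupy largely distinct neighborhoods.
print("class 0 mean position:", X_learned[y == 0].mean())
print("class 1 mean position:", X_learned[y == 1].mean())
```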

What is done with the outcome of feature learning depends on the machine learning task at hand. In the supervised learning task of classification, each data point is presumed to have a correct label or class, and the goal of the task is to learn a boundary or classifier that demarcates data points of one class from another. This is visualized in Figure 2. In the unsupervised learning task of clustering, no labels are given to the data points beforehand; instead, machine learning is used to discover distinct neighborhoods of data. In both cases, it is beneficial to the accuracy of the algorithm to develop neighborhoods of “like” data points, while creating distance from “unlike” data points, in a manner that mirrors the concept of homophily described by Chun and network scientists. It is because of this shared characteristic that I deem these algorithmic neighborhoods, which extend the properties and histories of Chun’s neighborhoods. Restated in these terms, feature learning is the mechanism by which homophily emerges in the non-networked algorithmic space, producing algorithmic neighborhoods based on similarity. (A short code sketch of both tasks follows Figure 2.)

Figure 2. The drawing of a decision boundary in a classification task. Spatially, the goal of a classification task is to develop a boundary such that data points belonging to one class fall on one side of the boundary, while data points belonging to another fall on the other side. One can already sense how closely this resembles the project of redlining. Figure credits go to Tom Grigg and his post “Concept Learning and Feature Spaces” in Towards Data Science.
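
As a rough sketch of both tasks on synthetic data: a classifier learns a decision boundary from labeled points (as in Figure 2), while a clustering algorithm discovers neighborhoods without being given any labels at all.

```python
# Illustrative only: synthetic data standing in for any featurized population.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# 300 points in a 2-D feature space, arranged in two blobs, with known labels y.
X, y = make_blobs(n_samples=300, centers=2, random_state=0)

# Supervised classification: learn a boundary that demarcates the two classes.
clf = LogisticRegression().fit(X, y)
print("classifier accuracy on its own data:", clf.score(X, y))

# Unsupervised clustering: discover two neighborhoods without using y at all.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments for the first five points:", km.labels_[:5])
```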

Louise Amoore, a political geographer, recognizes the importance of space to algorithms, writing in Cloud Ethics that “in this book I understand the spatial logic of algorithms to be an arrangement of propositions that significantly generates what matters in the world”, referring to the power of algorithms, supplied by the illusion of objective certainty–and thus incontestability–to reify the differences between individuals rendered as data points. Amoore argues that it is in these “calculative spaces where prejudice and racial injustices can lodge and intensify,” which is perhaps a digital extension of Chun’s geographic observation that “segregation reinforced white supremacy by making ‘race dependent on space.’” This is why it is important for Amoore’s “cloud ethics” — an ethicopolitics of algorithms that is “concerned with the political formation of relations to oneself and to others that is taking place, increasingly, in and through algorithms” — to be taken into account; in a sense, my conception of the algorithmic neighborhood is an ontological extension of the polymorphic cloud that Amoore speaks of. A sufficient cloud ethics, one that recognizes the spatiality of algorithms, is therefore also a neighborhood ethics. In the following section, I develop this idea of racial injustice and segregation in space by arguing that the formation of algorithmic neighborhoods mirrors the discriminatory practice of redlining.

3. From analog redlining to algorithmic redlining

Redlining is a term that refers generally to racial discrimination in housing, emerging from twentieth century housing practices that were used to deny services, such as bank loans or mortgages, to Black residents of specific metropolitan neighborhoods. The practice was called redlining because of the literal coloring of neighborhood maps by the Home Owners’ Loan Corporation (HOLC) to help enact discriminatory policies that contributed to residential segregation, a manifestation of racial capitalism. Rothstein (2017) describes how “a neighborhood earned a red color if African Americans lived in it, even if it was a solid middle-class neighborhood,” and how “neighborhoods colored red on its maps (i.e., redlined neighborhoods) … put the federal government on record as judging that African Americans, simply because of their race, were poor risks” for mortgages. For Rothstein, it is important to note that redlining was not merely an act of de facto segregation (via private practices), but one of de jure segregation, with such racial discrimination literally codified in the law and enacted at all levels of government.

Though redlining was finally made illegal with the passage of the Fair Housing Act of 1968, the impacts of this practice can still be felt in the present day, especially with algorithms drawing on historical data in order to inform their decisions, as well as the selective deployment (or refusal) of digital services tied to neighborhoods that were once formed through redlining. This two-pronged phenomenon has often been called digital redlining, a term popularized in recent years by Chris Gilliard to refer to “the creation and maintenance of tech practices, policies, pedagogies, and investment decisions that enforce class boundaries and discriminate against specific groups … most frequently on the basis of income, race, and ethnicity” (Oremus 2021). This discrimination is often realized through the fact that zip codes serve as proxies for race, as a result of the systemic practices of segregation enacted via analog redlining. A particular example of this was documented by journalists David Ingold and Spencer Soper of Bloomberg News, who found across dozens of metropolitan areas that the initial roll-out of Amazon’s Prime Free Same-Day Delivery service “excludes black ZIP codes to varying degrees”, often mirroring the sorts of residential and racial divides drawn on redlined maps. This initial denial of services was based on the computational decisions of algorithms driven by the capitalist logics of profit and informed by the racist history of redlining. Furthermore, digital redlining is also connected to the phenomenon of the digital divide, with residents of different neighborhoods having differing access to digital services such as (affordable) broadband Internet. It is in this way that digital redlining has not only denied analog services to specific neighborhoods using the digital, but has also denied access to the digital itself based on the boundaries of those very same neighborhoods.

Drawing from this historical context, I situate algorithmic redlining as a form of digital redlining, though specifically in reference to the discriminatory and disparate impact of algorithmic decision-making with regard to marginalized identities such as race, gender, sexuality, and class, through the redlining of algorithmic space. I argue that through the distribution of individuals into different neighborhoods in this mathematical space, different neighborhoods (which can be grouped based on identity markers) can receive different experiences and qualities of service, in a way that draws from and amplifies the systems of oppression that exist in the world, encoded into quantified data. Moreover, I contend that algorithmic redlining not only helps us describe the algorithmic neighborhoods that are formed in algorithmic space, but, as in its analog counterpart of housing, it also functions as the method by which these discriminatory neighborhoods are formed. It is not that marginalized individuals choose different neighborhoods to exist in, but rather that the infrastructure with which they have to interface — whether it be the banking and housing systems of the Jim Crow era or the algorithms of today — dictates it for them. The segregation is both structural and structuring.

It is also notable how visually similar the redlined maps used by HOLC are to the sorts of neighborhoods that are formed in algorithmic space. As Chun (2021) notes, “Machine learning is filled with ‘neighborhood’ methods used for pattern recognition, such as ‘K-nearest neighbor,’ ‘K-means testing’, and ‘support vector machines’ (SVMs).” These are methods, like neural networks, that operate specifically by constructing neighborhoods from input data. Figure 3 visualizes the k-Nearest Neighbors algorithm, which “draws boundaries between data points based on proximity; it presumes that those data points closest to one another geographically or topographically are of the same class” (Chun 2021). k-Nearest Neighbors has typically been used to classify new data points according to the neighborhood they fall into, but such neighborhood methods can also be used to serve targeted content and ads to individuals who are mapped into the same neighborhood (see the short code sketch following Figure 3).

Figure 3. On the left, the redlined 1938 HOLC map of Oakland (credit: Mapping Inequality). On the right, a color-coded visualization of the k-Nearest Neighbors algorithm using three neighborhood groupings (credit: Wikipedia user Agor153).
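
A minimal sketch of the neighborhood logic visualized in Figure 3: a k-nearest-neighbors classifier assigns a new point to a class purely on the basis of the points that already surround it in feature space.

```python
# Illustrative sketch of k-nearest neighbors on synthetic 2-D data.
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier

X, y = make_blobs(n_samples=150, centers=3, random_state=0)

# k = 5: a new point is assigned whichever class dominates among its
# five nearest existing neighbors; proximity alone decides membership.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

new_point = [[0.0, 2.0]]  # an arbitrary new resident of this feature space
print("assigned neighborhood:", knn.predict(new_point))
print("indices of its five nearest neighbors:",
      knn.kneighbors(new_point, return_distance=False))
```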

But how can one argue for the existence of algorithmic redlining when algorithms do not include race as an explicit feature? One starting point is to turn to proxies.

We have seen how zip codes can act as proxies for race not by accident, but as a direct result of the systemic, discriminatory practice of redlining. Benjamin (2019) describes how “racialized zip codes are the output of Jim Crow policies and the input of New Jim Codes,” a term she employs to refer to technologies that are seen as “more objective or progressive than the discriminatory systems of the previous era” on the grounds that they do not explicitly encode race as a feature. She derives the concept of the New Jim Code from Michelle Alexander’s The New Jim Crow, in which Alexander argues that systems such as the prison-industrial complex are described as “color-blind” in an effort to make them seem non-discriminatory, even as they reproduce various dimensions of oppression across the matrix of domination. Though the algorithm does not “recognize it is learning racial preference,” Benjamin (2019) contends that all its “variables are structured by racial domination — from job market discrimination and ghettoization … measuring the extent to which an individual’s life chances have been impacted by racism without ever asking an individual’s race.” Thus, it is not that individual proxies happen to correlate with race by coincidence; rather, it is the entrenchment of these features in histories of racism that systemically structures them. And it is using these racialized features that algorithms form neighborhoods in algorithmic space, through the process of algorithmic redlining. Given this, I propose the phenomenon of algorithmic redlining as an alternate, spatial way of understanding how discrimination occurs in algorithms.
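
The mechanics of a proxy can be sketched with synthetic data. In the toy example below (everything is made up for illustration; it models no real system), a model is never shown a race feature, yet a correlated “neutral” feature carries the same information, and the model’s outputs split along racial lines anyway.

```python
# Toy demonstration of proxy discrimination with entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic population: a protected attribute (race), never given to the model...
race = rng.integers(0, 2, n)                 # 0 or 1, purely illustrative
# ...and a "neutral" feature highly correlated with it, as redlined zip codes
# are, because of a history of segregation.
zip_group = np.where(rng.random(n) < 0.9, race, 1 - race)

# Historical outcomes (e.g., loan approvals) that were themselves discriminatory.
approved = np.where(race == 0, rng.random(n) < 0.7, rng.random(n) < 0.3).astype(int)

# Train only on the proxy feature; race is never an input.
model = LogisticRegression().fit(zip_group.reshape(-1, 1), approved)
preds = model.predict(zip_group.reshape(-1, 1))

# The model reproduces the racial disparity without ever "seeing" race.
print("predicted approval rate, group 0:", preds[race == 0].mean())
print("predicted approval rate, group 1:", preds[race == 1].mean())
```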

For example, we can look at the classic case study of “Discrimination in Online Ad Delivery” from Sweeney (2013), in which names associated with Black individuals surface ads for criminal arrest records in Google Search even when the individuals with those names have no arrest records, while White-associated names do not surface such ads even when the individuals with those names do have arrest records. This has been cited in the machine learning space as an exemplar of racial algorithmic bias, with names serving as a proxy for race. The typical understanding of this case study is that ad delivery algorithms have learned to correlate names with a racialized history of arrest records, stemming from a system of mass incarceration that disproportionately imprisons Black individuals. I argue that we can use algorithmic redlining to re-visualize this example by imagining names as data points in an algorithmic space, segregated into different neighborhoods based on correlations to race emerging from historical systems of oppression, with each algorithmic neighborhood being served criminal arrest record ads differently. This matches the scheme of redlining because individuals (on the basis of the feature of their name) are spatially segregated into neighborhoods according to a correlation with race, and these neighborhoods determine the deployment of services (in this case, arrest record ad delivery) in ways that can lead to more harmful, material outcomes (such as disqualification from a job application because an employer sees the arrest record ad when searching for your name). In this way, algorithmic redlining can be described as the spatial process by which algorithms make this discriminatory outcome real.

Returning to Cloud Ethics, Amoore mirrors this sentiment in stating that “algorithms are not merely finite series of procedures of computation but are also generative agents conditioned by their exposure to the features of data inputs.” Race and difference are codified into these algorithms, not as additional steps in a sequence (defying a simplistic understanding of racial algorithmic bias being coded into the lines of a program), but rather in a segregated mapping into the space in which algorithms represent and reason about our world. Mirroring the move from Newtonian to quantum physics, the quantified state of difference and discrimination is no longer a scalar value along a line (of code), but rather a vector in multi-dimensional space. As Beller (2021) describes, “the algorithm becomes the management strategy for the social differentiation introduced by and as information–a heuristic, becoming bureaucratic, becoming apparatus for the profitable integration of difference”; the algorithm, now bureaucratic, yet primarily deployed by private corporations, acts in place of federal government and HOLC to reify social difference, “invested in and vested by racial capitalism.” Though homophily is used to help group neighborhoods of data together based on likeness, difference (in the form of algorithmic redlining) is ultimately the systemic operation used to further divide dividuals in algorithmic space. While Beller’s World Computer is the virtual machine of racial capitalism, algorithmic redlining is the program that runs on this virtual machine.

4. Algorithmic redlining in generative AI

In this final section, I turn to a particular case study in the area of generative AI, drawing on a non-networked form of relationality and locality to show how algorithmic redlining can intersectionally segregate users based on both race and gender. In order to make sense of the generative AI example, we must first understand a particular “new form of relationality” described in Hu (2022) that will set the scene. Hu analyzes UK artist Erica Scourti’s exhibition So like You (2014), for which she “uploaded her old vacation photographs to Google’s reverse image search,” finding that since “vacation photographs often resemble each other,” this search returned pictures of strangers with whom Scourti had no networked ties, yet still felt a sense of connection to. Hu notes that “to choose strangers based on the visual similarity of their vacation photos may seem like an arbitrary way of making a connection, but it nevertheless offers a new kind of proximity that is orthogonal to one’s ‘likes’” — I argue that this proximity is one that exists in algorithmic space, where vacation photos are quantified by their features and represented as data points, with a literal proximity to other data points (photos) that have similar values for each of their learned features. While Hu goes on to describe this as one of the “new forms of relationality that are already flourishing within the space of digital capitalism, rather than outside it”, I argue that such a relationality can still be worrying, as it is not unrelated to histories of discrimination. Thinking in terms of the similarity between photos, it is possible for individuals to be redlined into neighborhoods based on physical appearance–again, stemming from the reduction of their images into quantified data points that maps them onto algorithmic space with locality to those similar in appearance. Though the example of relationality in Google’s reverse image search is mostly innocuous, the more recent example of Lensa in generative AI exemplifies just how harmful this could be.
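
A rough sketch of this kind of proximity: if each photo has already been reduced to a feature vector (the numbers below are invented stand-ins for whatever embedding a real reverse image search would compute), then “strangers whose vacations look like mine” is simply a nearest-neighbor query.

```python
# Illustrative only: pretend feature vectors for photos; a real system would
# compute these with a learned image model.
import numpy as np

photo_embeddings = {
    "my_beach_photo":       np.array([0.9, 0.1, 0.4]),
    "stranger_A_beach":     np.array([0.8, 0.2, 0.5]),   # similar scene
    "stranger_B_mountains": np.array([0.1, 0.9, 0.2]),   # very different scene
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = photo_embeddings["my_beach_photo"]
for name, vec in photo_embeddings.items():
    if name != "my_beach_photo":
        print(name, round(cosine_similarity(query, vec), 3))
# The stranger whose photo is nearest in this space becomes my "neighbor",
# despite the absence of any networked tie between us.
```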

Lensa, whose name and connotation are coincidentally similar to those of Lenna, the sexist test image used throughout the history of computer science, is a mobile app developed by Prisma Labs that in December 2022 released a new feature called Magic Avatars, which uses artificial intelligence to generate new, artistic portraits of a user based on a minimum of ten selfies that they upload. The feature is powered by Stable Diffusion, a generative “diffusion” model of artificial intelligence that is able to generate new images based on text and/or existing images used as input, trained controversially on stolen artwork from across the web. Part of the process of developing the Stable Diffusion model involves the learning and weighting of features from digital images, which creates the terrain, or neighborhood boundaries, of the algorithmic space that inputted selfies are mapped into. Because of the effectiveness of such generative models in producing often incredible images or text (as in the case of the GPT models), the space known as “generative AI” has attracted particular attention and praise in recent years. However, in the use of Lensa, “many users — primarily women — have noticed that even when they upload modest photos, the app not only generates nudes but also ascribes cartoonishly sexualized features” (Snow 2022). Researcher Olivia Snow describes how Lensa has sexualized not only photos of her current self, but also photos from her childhood at the age of six. Women online who have used Lensa have noted this sexualization of their images across the board, while men have noted a surprising lack of it, even when they upload NSFW photos of themselves. One can imagine what is happening on Lensa today as a form of algorithmic redlining of female-coded bodies that segregates individuals into algorithmic neighborhoods based on features that are (pseudoscientifically and stereotypically) correlated to gender, serving individuals from the female-coded neighborhood sexualized images of themselves.

Journalist Melissa Heikkila reports a similar experience to Snow’s, noting that while she expected to receive the sort of fantastical avatars based on her selfies that many others had received from Lensa, ranging from “astronauts, fierce warriors, and cool cover photos for electronic music albums”, Lensa instead returned for her–as for Snow–various sexualized avatars, many of which were topless or nearly unclothed, even when this was not at all reflected in her input images. She notes that “I have Asian heritage, and that seems to be the only thing the AI model picked up from my selfies,” further observing that her white female colleagues received “significantly fewer sexualized images.” As Heikkila explains, much of this has to do with how the Stable Diffusion model that powers Lensa was trained on the open-access LAION-5B dataset, which was created by non-selectively scraping images off the internet. The LAION-5B dataset has been observed to contain a heavy amount of NSFW content depicting women of color in particular, as well as “pictures reflecting sexist, racist stereotypes”, which “leads to AI models that sexualize women regardless of whether they want to be depicted that way” (Heikkila 2022). With Lensa redlining women-coded individuals into particular algorithmic neighborhoods, we can extend the metaphor and imagine it as an algorithmic representation of the red-light districts that have existed throughout history, except that we are relegated to them against our will, without say in whether we find ourselves and our bodies in these neighborhoods. As the original conceptions of cyberspace drew on a fetishized Orientalism, so has the algorithmic space of Lensa, which in some ways can be distortedly compared to the akasen of postwar Japan, the literal “red line” that Japanese police drew on maps to mark the boundaries of red-light districts. In this reprehensible example of Lensa’s generative AI, neighborhoods have been formed in the multi-dimensional algorithmic space based on both race and gender, with a spatial locality based on neither networked nor geographic similarity (as the women of color who use the app need not be nearby or connected with one another), but rather on similarity along intersecting dimensions of marginalization. Thus, I argue more generally that algorithmic neighborhoods not only help us spatially reason about the systemic discrimination that occurs in machine learning algorithms, but can also do so with respect to intersectionality and the matrix of domination by leveraging the properties of multi-dimensional algorithmic space.

5. Conclusion

Thinking back to the words of Wendy Chun, “cyberspace was never meant to be a happy place,” and the algorithmic space of today is but an extension of cyberspace, its racism (and sexism) reproduced ad infinitum. As dividuals captured and discretized by algorithms, we are like Kevin Flynn in Tron and Tron: Legacy–it is not that we have chosen to enter this space, but rather that we have been forced in by the system itself. Furthermore, we have little agency in deciding where in the space we are mapped, as the learned features and weights that enable these mappings are based in data drawn from a racialized history of segregation and subjugation, with the exact parameters of these algorithms fine-tuned by corporations optimizing for profit. The Grid of Tron is redlined.

In summary, I offer an alternate way of seeing how discrimination can be coded into algorithmic systems, by drawing on the ontology of a neighborhood in algorithmic space. This is a novel way of thinking about locality and homophily, tied not to geographic or networked neighborhoods but to those formed through similarity in feature values. As a result of the encoding of systemic and structural discrimination into digital, algorithmic systems, the feature space of algorithms is itself redlined, possibly resulting in harmful and disparate impacts for historically marginalized individuals who are processed by these algorithms. In other words, as physical space has been redlined, so has algorithmic space.

So what can we do to resist or refuse this reality? One possibility comes to us from Johnson (2018) in the form of Black digital practice, with the example of the Trans-Atlantic Slave Trade Database as a way to work with “data with intention” by accompanying the data with “humanistic analysis.” Though the Trans-Atlantic Slave Trade Database is still a database that requires the quantization of individuals, drawn from the archive of slaving manifests, Black digital practice serves as “a methodology attuned to black life and to dismantling the methods used to create the manifests” (Johnson 2018). We can invest in practices such as this to fight the reductive logics of algorithms, data, and correlation, and to undo or repair the boundaries created by redlining. We also require a cloud ethics that can deal with relationships that are no longer networked or geographic, but that are still spatially understood and made real by algorithms, if we are to be understood as ethicopolitical beings. Perhaps, through a techno-optimist’s lens, just as housing reparations exist to remedy the effects of analog redlining, we can develop algorithmic reparations to counteract the impacts of algorithmic redlining. Such work has already begun to occur (So et al. 2022; Davis, Williams, and Yang 2021). But these are suggestions for those with the means to resist and refuse these algorithms of oppression. For the rest of us, taking a page from Tung-Hui Hu, perhaps all we can do in the meantime is feel lethargic.

References

Alexander, M. (2010). The New Jim Crow: Mass Incarceration in the Age of Colorblindness. New York: The New Press.

Amoore, L. (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. North Carolina: Duke University Press.

Beller, J. (2021). The World Computer: Derivative Conditions of Racial Capitalism. North Carolina: Duke University Press.

Chun, W. (2021). Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Massachusetts: MIT Press.

Davis, J. L., Williams, A., and Yang, M.W. (2021). Algorithmic reparation. Big Data & Society, 8(2).

Deleuze, G. (1992). Postscript on the Societies of Control. October 59: 3–7.

Grigg, T. (2019). Concept Learning and Feature Spaces. Towards Data Science. Retrieved from https://towardsdatascience.com/concept-learning-and-feature-spaces-45cee19e49db.

Heikkila, M. (2022). The viral AI avatar app Lensa undressed me — without my consent. MIT Technology Review. Retrieved from https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/.

Hu, T-H. (2022). Digital Lethargy: Dispatches from an Age of Disconnection. Massachusetts: MIT Press.

Ingold, D., and Soper, S. (2016). Amazon Doesn’t Consider the Race of Its Customers. Should It? Bloomberg. Retrieved from https://www.bloomberg.com/graphics/2016-amazon-same-day/.

Johnson, J.M. (2018). Markup Bodies: Black [Life] Studies and Slavery [Death] Studies at the Digital Crossroads. Social Text: 36 (4(137)): 57–79.

K-nearest neighbors algorithm. (2022). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm.

Kosinski, J. (Director). (2010). Tron: Legacy. Walt Disney Productions.

Lisberger, S. (Director). (1982). Tron. Walt Disney Productions.

Manovich, L. (2001). The Language of New Media. Massachusetts: MIT Press.

Nelson, R.K., et al. (n.d.). Mapping Inequality. American Panorama, ed. Robert K. Nelson and Edward L. Ayers. Retrieved from https://dsl.richmond.edu/panorama/redlining.

Oremus, W. (2021). A Detroit community college professor is fighting Silicon Valley’s surveillance machine. People are listening. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2021/09/16/chris-gilliard-sees-digital-redlining-in-surveillance-tech/.

Rothstein, R. (2017). The Color of Law: A Forgotten History of How Our Government Segregated America. New York: Liveright.

Snow, O. (2022). ‘Magic Avatar’ App Lensa Generated Nudes From My Childhood Photos. Wired. Retrieved from https://www.wired.com/story/lensa-artificial-intelligence-csem.

So, W., et al. (2022). Beyond Fairness: Reparative Algorithms to Address Historical Injustices of Housing Discrimination in the US. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ‘22).

Sweeney, L. (2013). Discrimination in Online Ad Delivery. Available at SSRN: https://ssrn.com/abstract=2208240.
