A Review of Safiya Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism

Joe Lehnert

Joe Lehnert is a PhD Candidate in the Politics department at the University of California, Santa Cruz. His broader research interests include critical international relations theories, political and legal anthropology, and discourses and practices involving data-based technologies, such as big data and algorithms. He is currently working on his dissertation, which addresses the historical and political intersections of law, technology, and humanitarian governance, as well as how, why, and to what end traditional and non-traditional humanitarian organizations have adopted, and continue to adopt, data-based technologies.

The timely 2018 book Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble considers the far-ranging implications and effects of commercial search platforms upon social, political, and economic life. Organized into six substantive chapters, Noble’s work issues a clarion call for thinking about the consequences of a range of digital media and technologies for both individuals and communities, as well as contemplating the future of knowledge and information in the public sphere. In chapter one, Noble examines the role corporations, particularly Google, play in the production and curation of public information. Chapter two explores how commercial Internet search reinforces stereotypes, particularly those centered on race and gender. The third chapter considers how individuals and communities are represented, and misrepresented, through commercial search. Chapter four takes up possible protections from search engines, including legal and regulatory remedies such as the “right to be forgotten.” Chapter five, the most heavily academic discipline-oriented chapter of the book, considers the future of public knowledge, offering a critique of information studies and of how problems related to the reliability and trustworthiness of public information can be managed by information studies professionals. Chapter six turns to the future of information culture, examining the non-neutrality of information, asking how information culture can be reconfigured to help combat social inequality, and calling for greater public policy attention to the regulation of information at a time when corporations are increasingly dominant.

Noble believes that interdisciplinary scholarship will be useful for better describing and understanding the effects of algorithmically driven platforms by situating them in their sociohistorical context. She also calls for a broader dialogue among the social sciences, the humanities, technologists, and policymakers to balance artificial intelligence-informed decision making with human decision making.

A range of methods and data are marshaled to advance the book’s arguments, including, but not limited to, media and textual searches (primarily Google searches), cultural and social history, disciplinary/intellectual history, and a “case study” interview. Reminiscent of Melvin Kranzberg’s proclamation that “technology is neither good nor bad; nor is it neutral” (Kranzberg 1986, 545), Noble builds on and extends the rich and variegated legacy of techno-political scholarship by centering the racist, sexist, and gendered dimensions of digital technology platforms and search practices. Noble adopts an interdisciplinary approach, drawing upon and seeking to contribute to literature in fields such as information studies, library and information science, media studies, communications, gender and women’s studies, Black/African-American studies, and political economy, and she calls for additional interdisciplinary research among these and related fields of study. The primary theoretical lens through which she interrogates and critiques these phenomena, however, is black feminist theory. For Noble, “using a black feminist lens in critical information studies entails contextualizing information as a form of representation, or cultural production, rather than as seemingly neutral and benign data that is thought of as a website, or URL that surfaces to the top of the search” (Noble 2018, 106). More broadly, Noble’s work interrogates the prevalence of technology in social life, through the related figures of the algorithm and the commercial search platform, and its capacity to reinforce and exacerbate racial profiling. The book is invested in revealing how particular individuals and communities are mediated and commodified by powerful information curators and brokers, such as Google. Returning to Kranzberg’s proclamation, Noble’s work is necessarily political, and it evinces a commitment to the elimination of social injustice and inequality by problematizing the so-called neutrality of technologies that mediate and order contemporary social, political, and economic life.

In order to elucidate the detrimental consequences of commercial search platforms, Noble introduces the concepts of technological redlining and algorithmic oppression. Technological redlining “consists of the ways digital decisions reinforce oppressive social relations and enact new modes of racial profiling” (Noble 2018, 1). Historically, the practice of redlining was used in banking and real estate to impose higher interest rates on people of color and to exclude them systematically from obtaining credit or housing. Noble draws attention to how these same logics are migrating to the Internet and to the other technologies we depend on in our daily lives. Relatedly, the challenge of understanding algorithmic oppression, for Noble, lies in the “recognition that mathematical formulations to drive automated decisions are made by human beings” (Noble 2018, 1). Algorithmic oppression often exhibits a racialized and gendered character, and the concept helps draw attention to “algorithmically-driven data failures that are specific to people of color and women” while underscoring “the structural ways that racism and sexism are fundamental” to both offline and online society (Noble 2018, 4).

To further clarify the implications and effects of technological redlining and algorithmic oppression, Noble calls for “moving the research/analytical lens onto the search architecture itself in order to shed light on the many factors that keep sexist and racist ideas on the first page” (Noble 2018, 16). In so doing, she elucidates how power and technology operate together and intersect with race and gender, particularly in the ways already marginalized groups, such as black women, are represented and depicted via information searches on commercial digital media platforms, often in a stereotypical and even pornographic manner. Responsibility for these egregiously misrepresentative, problematic, and harmful visualizations is often deflected through attribution to “system error.” In this vein, the actors that bear responsibility, such as Google, simply correct the system error that was ostensibly responsible for the misrepresentative and harmful search results. This is a quick-fix solution that does nothing to address the underlying structural racism that shows up in online spaces. Noble also calls attention to the related imperative of unpacking how and why racial stereotypes appear in the technology in the first place (or how and why they persist) and what the consequences and harms are for those individuals who embody these misrepresentations.

The first chapter, “A Society, Searching,” is the book’s most far-ranging. It asks the reader to consider why and how racist and sexist ideas show up in our search results in the first place. The chapter explores the nature of commercial search engine results from 2009 to 2015, which, as Noble admits, may be a limitation of her inquiry, as it only “captures aspects of commercial search at a particular moment” (Noble 2018, 16). As a way to address and remedy these problems, Noble calls for “framing the implications of artificial intelligence, such as the perpetuation and (re)consolidation of racist and sexist representations of individuals and collectivities, as a human rights issue” (Noble 2018, 28). She offers the expansion of Google’s parent company Alphabet into military-grade robotics, drone technology, fiber networks, and behavioral surveillance technologies as an entry point for thinking about the impacts of artificial intelligence as a human rights issue. The human rights frame Noble calls for also serves to demonstrate the value data and information hold in decision-making procedures, for corporations and individuals alike. For example, questioning and critiquing the ostensibly democratic nature of the Internet, and the complicity of both the medium and its users in inculcating and codifying racism and sexism, can reveal the racializing potential of (commercial) technologies that we have become reliant upon. Noble is quick to remind readers that the companies providing these technologies exist in part by generating profit via search optimization procedures. Search engine optimization (SEO) “is the process of using a range of techniques, including augmenting HTML code, web page copy editing, site navigation, linking campaigns and more, in order to improve how well a site or page gets listed in search engines for particular topics” (Noble 2018, 47). SEO muddies the legitimacy and viability of information by introducing advertising and commercial interests into search results, calling into question the presumed trustworthiness and reliability of information provided by commercial search engines. While Noble does not offer a narrative of strict causality in evaluating the impact of information obtained via commercial search platforms on individual behavior and decision making, she does make clear that the information gathered digitally via commercial search platforms is oftentimes not diverse or intersectional in character and can “reframe our thinking and deny us the ability to engage deeply with essential information and knowledge we need, knowledge that has traditionally been learned through teachers, books, history, and experience” (Noble 2018, 116). The often delimited nature of search engine results “feigns impartiality and objectivity” (Noble 2018, 117) and can thus serve to confirm pre-existing intellectual, social, political, and/or cultural biases, often with detrimental consequences.

At the conclusion of chapter three, “Searching for People and Communities,” Noble writes, “there is no federal, state, or local regulation of the psychological impact of the Internet, yet big data analytics and algorithms derived from it hold so much power in over-determining decisions” (Noble 2018, 118). Noble’s broader arguments about racial and gender bias and discrimination in online tools and technologies, such as commercial search engines, are underscored by the very prevalence of these tools and technologies in our daily lives. The crux of this power to over-determine in a racialized and gendered manner lies in the centrality of big data to contemporary institutional and individual decision making. The wide-ranging sectoral and institutional presence of big data leaves us at the mercy of these often opaque data-based decisions. As the Council for Big Data, Ethics, and Society notes, “the explosion of data collection, sharing, and analytics known as big data is a rapidly sprawling phenomenon that promises to have tremendous impacts on economics, policing, security, science, education, policy, governance, health care, public health, and much more” (Metcalf, Keller, and boyd 2016, 3). Noble’s work engages with the ever-growing presence and presumed utopian potential of big data by revealing the everyday violences committed and perpetuated by the unceasing collection of data and the ascription of meaning and authority to it.

Technologies, such as the search engine, are not neutral arbiters of information. By refusing to depoliticize the co-optation of authority by technology, we can, in the words of Noble, “intervene in the spaces where algorithms are becoming a substitute for public policy debates over resource distribution, from mortgages to insurance to educational opportunities” (Noble 2018, 90). For example, in chapter six, “The Future of Information Culture,” Noble references a Brandeis University study which found that the wealth gap between whites and blacks in the U.S. quadrupled between 1984 and 2007, leaving the white population five times as wealthy as the black population. Noble argues that trends such as this are connected to, or exacerbated by, the influence of algorithmically driven decision making: “it is a result of digital redlining and the resegregation of housing and educational markets, fueled by seemingly innocuous big data applications that allow the public to set tight parameters on their searches for housing and schools” (Noble 2018, 167). These examples, rather than leaving us feeling defeated, should be heeded as calls to action. Noble concludes the chapter by reminding readers that we are living in an era in which political, social, and economic life and decision making are fueled by an often blind or limited “big data optimism” (Noble 2018, 169) and that this “should be a wake up call for people living in the margins, and people aligned with them, to engage in thinking through the interventions we need” (Noble 2018, 169).

If any statement characterizes the overarching orientation and scholarly contribution of Noble’s work, it is this one. In general, the tone of the book is cautionary and critical, while not losing sight of opportunities to rein in the excesses of technology and to ensure it is truly open and democratic. It is a timely and necessary intervention, as the stakes around the questions Noble poses and analyzes are quite high in the age of big data optimism. If search and knowledge practices are truly to be democratic, powerful actors such as Google need to be held to account. If the lessons of the 2016 U.S. presidential election and the Cambridge Analytica scandal have taught us anything, it is that the production, curation, and dissemination of information, factual or otherwise, can have vast and profound consequences.

Noble’s call for increased regulation and governance of digital media technology and commercial search platforms is well taken, and one that this author vociferously echoes. The regulatory imperative faces particular challenges, however, such as the rate of technological development outpacing legal development and a lack of political will. Ensuring that legal and regulatory frameworks adapt and develop coterminously with technology is one such challenge. One effort on this front is the “right to be forgotten” laws in the European Union (EU), which offer significantly more protections than those found in the U.S. legal system. Such legislation is limited, though, in that it is organized around national boundaries: corporations such as Google are able to evade these laws by indexing information on domains outside of Europe that are not subject to the same laws and regulations. Even so, right to be forgotten laws can be “an incredibly important mechanism for thinking through whether instances of misrepresentation can be impeded or stopped” (Noble 2018, 123).

In this vein, Noble takes up the notion of private ownership of identity on the Internet in the book’s conclusion. She examines the “consequences of the lack of identity control” (Noble 2018, 173) through an interview with Kandis, the owner of an African-American hair salon, regarding her experience with Yelp, a prominent business advertising and marketing platform. The interview demonstrates in a personal way how algorithmic oppression works and how it affects Kandis’s quality of life as a small business owner. Kandis conveyed several concerns to Noble, chief among them a lack of trust about surrendering personal information to Yelp and a worry that platforms like Yelp are redefining notions of “value” by obligating business owners to pay to be featured in search results. She felt valueless and invisible as a business owner as a result of her experience with the platform. Reflecting on the interview, Noble underscores the felt effects of algorithmic oppression: “the attempts at implementing a colorblind algorithm in lieu of human decision making has tremendous consequences. In the case of Kandis, what the algorithm says and refuses to say about her identity and the identity of her customers has real social and economic impact” (Noble 2018, 179). The interview with Kandis is a powerful anecdotal example of the potential consequences of attributing too much decision-making responsibility to algorithms, artificial intelligence, and commercial search platforms, though conducting additional interviews to broaden the scope of the analysis could have more amply demonstrated the extent of algorithmic oppression.

Noble’s work raises numerous vital questions about, and offers potential solutions to, the problems posed by modern search technologies. The wide range of literature and data present in the book underscores how multifaceted these concerns are, yet the book is unwieldy at times and would benefit from a more limited scope of analysis. This concern pales, however, in comparison to the thoughtful way in which Noble addresses the questions at hand. The potentially emancipatory power of laws and regulations, such as the “right to be forgotten,” is for Noble a form of resistance against the oppressive potential of modern search technologies. As the interview with Kandis showed, algorithms loom large in contemporary political, social, and economic life, to the point that their effects have been implicitly accepted by some as a cost of living in a technologically mediated world. Yet we also need to rethink whether we wish to rely so ardently on these technologies. Indeed, the moment to “take notice and reconsider the affordances and the consequences of our hyper-reliance on these technologies as they shift and take on more import over time” (Noble 2018, 181) is now.

To conduct this analysis, Noble proposes future research that examines “questions that can help us understand the role of the design of platforms, interfaces, software, and experiences as practices that are culturally and gender situated and often determined by economic imperatives, power, and values” (Noble 2018, 179). We also need to “decouple advertising and commercial interests from the ability to access high-quality information on the Internet, especially given its increasing prominence as the common medium in the United States” (Noble 2018, 179). Noble’s far-ranging and insightful work is appealing for its commitment to cultivating interdisciplinary scholarship and to engaging a variety of audiences. We need a diverse array of actors committed to ridding our political, social, and economic lives of scourges such as racism and sexism, as Noble calls for in the epilogue: “now, more than ever, we need libraries, universities, schools, and information resources that will help bolster and further expand democracy for all, rather than shrink the landscape of participation along racial, religious, and gendered lines” (Noble 2018, 186). This is a cross-cutting issue, and a diversity of academic and non-academic constituencies will likely find Noble’s work thoughtful and engaging.


References

Boellstorff, Tom, and Bill Maurer, eds. 2015. Data, Now Bigger and Better! Chicago: Prickly Paradigm Press.

Kranzberg, Melvin. 1986. “Technology and History: Kranzberg’s Laws.” Technology and Culture 27 (3): 544-560.

Metcalf, Jacob, Emily F. Keller, and danah boyd. 2016. “Perspectives on Big Data, Ethics, and Society.” The Council for Big Data, Ethics, and Society: 1-23.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.