Anatomy of an AI System by Kate Crawford
In her essay “Anatomy of an AI System,” Kate Crawford examines the Amazon Echo through the lenses of human labor, data, and earth resources.
She lifts the veil concealing the AI technology behind the Amazon Echo by following the device’s life cycle from birth through life to death, creating an anatomical map of a single AI system that has become an omnipresent device in people’s homes.
She argues that the Echo becomes a political device for the elite and shows the high cost this seemingly simple machine exacts on the planet, on the workers who make it, and on the people who use it.
She cites moments in history when similar systems of power operated through a novel device, revealing the historical patterns that the Echo now reenacts.
She compares the Echo to the chimera of Greek folklore, a monster that was part lion, part goat, and part snake: like the chimera, the Echo is a mythical object with multiple identities (consumer, resource, worker, and product), and users of the product assume each of these identities at the same time. Each identity, in turn, helps make Amazon’s infrastructure smarter. In this looped and interwoven process, Amazon is the ultimate beneficiary.
She then references the statua citofonica, or “talking statue,” created in 1673 by the Jesuit polymath Athanasius Kircher. Kircher’s then-mystifying “very early listening system,” like the Echo, “could eavesdrop on everyday conversations” without the speakers’ knowledge, acting as a form of “information extraction” for the elites. Like the Echo, the statua citofonica was elegant yet opaque, its inner workings and purposes concealed in mystery. As Crawford writes, “the aim was to obscure how the system worked: an elegant statue was all they could see.” And like the Echo, the statue was made of expensive and hard-to-find materials, which Crawford details in full.
She warns that the “training sets for AI systems claim to be reaching into the fine-grained nature of everyday life, but they repeat the most stereotypical and restricted social pattern, re-inscribing a normative vision of the human past and projecting it into the future.” It makes me think that the continually constructed loop the Echo embodies has the dangerous ability to perpetuate our bad habits and worst behaviors. When Crawford discusses how Echo users continually train the model, I instinctively think of how this applies to every other piece of technology we engage with (Instagram, Facebook, Google, etc.).
This piece sheds some light on the implications of participating in those systems. Personally, it scares me, but these AI technologies have become so ubiquitous that living without them seems nearly impossible in the modern world, while the consequences of living with them may not become clear to users until it is too late.
Crawford’s framing of media as an extension of the earth opens her argument into how Amazon’s Echo is part of a larger “exploitation of human and natural resources, concentrations of corporate and geopolitical power, and continual energy consumption.”
Crawford investigates an important aspect of the construction of AI devices, revealing the “complex structure of supply chains within supply chains” that goes into harvesting the planetary resources needed to produce them. I found further insight into this in an article Crawford references, Jessica Shankleman’s “We’re Going to Need More Lithium” (Bloomberg, September 7, 2017, https://www.bloomberg.com/graphics/2017-lithium-battery-future/). I found it interesting that although demand for the minerals used to make the batteries in our devices keeps increasing, this industry challenge is never made apparent to end customers. As long as the supply is available, “even if the companies struggle a bit to keep pace with demand, the global scramble may never be reflected in the sticker price of a Tesla or a MacBook Pro.”
Crawford’s insights remind me of the concerns of the theorist and writer James Bridle, who has addressed similar worries about the illegibility of AI and how all of these technological problems are political ones first. His solution is to educate users as much as possible and for people to become critical users of the technological systems they participate in. However, it makes me wonder: if these systems ultimately benefit only the elite, what incentive do those who control them have to share them?
In her map visualization, she notes how “many of the triangles…hide different stories of labor exploitation and inhumane working conditions. The ecological price of transformation of elements and income disparities is just one of the possible ways of representing a deep systemic inequality.” This essay offered an insightful look at how the extractive systems operating in the background of seemingly benign objects like the Echo are shrouded in mystery and complexity, and at what an enormous task it took to unpack them. Knowing how dangerous and effective AI is, I wonder: how do we become more sensitive to how AI is used?
Bridle proposes more education about these AI systems so that participants can become more conscious, more critical-thinking users. But in a world where we know we are being eavesdropped on, what is the risk, on the flip side, of becoming too self-conscious? And knowing all this, how can people be educated about how this system operates in a way that would actually produce change?