R4

Responses: 10

Anatomy of an AI System
by Kate Crawford

The Amazon Echo as an anatomical map of human labor, data and planetary resources

Read her essay and choose one additional article from the Footnotes section to also read before class and be able to report on.

Use the tag “R4” when you post your assessment of the readings and the questions they raised for you.

Grace Martinez

Reading #4
Anatomy of an AI System by Kate Crawford

In Kate Crawford’s essay “Anatomy of an AI System,” Crawford examines the Amazon Echo as an anatomical map of human labor, data, and planetary resources.

She lifts the veil concealing the AI technology behind the Amazon Echo by following the device’s life cycle from birth through life to death, creating an anatomical map of a single AI system that has become an omnipresent device in people’s homes.

She argues that the Echo becomes a political device for the elite and shows how high a cost this simple machine exacts on the planet, on the workers who make it, and on the people who use it.

She cites times in history when similar systems of power played out through a novel device, revealing historical patterns in how these systems operate, patterns that the Echo now reenacts.

She compares the Amazon Echo to the chimera of Greek folklore, a monstrous creature that was part lion, goat, and snake: the Echo too is a mythical object with multiple identities (consumer, resource, worker, and product), and users of the product assume each of these identities at the same time. All of this helps make Amazon’s infrastructure smarter. In this looped and interwoven process, Amazon is the ultimate beneficiary.

She then turns to the statua citofonica, the ‘talking statue’ created in 1673 by the Jesuit polymath Athanasius Kircher. Kircher created a then-mystifying “very early listening system” that, similar to the Echo, “could eavesdrop on everyday conversations” unbeknownst to the people talking, acting as a form of “information extraction” for the elites. Like the Echo, the statua citofonica was obscure yet elegant, its inner workings and purposes concealed in mystery. As Crawford writes, “the aim was to obscure how the system worked: an elegant statue was all they could see.” And like the Echo, the statue was made of expensive and hard-to-find materials, which Crawford describes in full.

She warns that the “training sets for AI systems claim to be reaching into the fine-grained nature of everyday life, but they repeat the most stereotypical and restricted social pattern, re-inscribing a normative vision of the human past and projecting it into the future.” It makes me think that the continually constructed loop the Echo embodies has the dangerous ability to perpetuate our bad habits and worst behaviors. When Crawford discusses how Echo users are continually training the model, I instinctively think of how this applies to every other piece of technology we engage with (Instagram, Facebook, Google, etc.).

This piece sheds some light on the implications of participating in those systems. Personally, it scares me, but these AI technologies have become so ubiquitous that living without them seems nearly impossible in the modern world, while the consequences of living with them may not become clear to users until it is too late.

Crawford’s framing of media as an extension of the earth opens her argument into how Amazon’s Echo is part of a larger “exploitation of human and natural resources, concentrations of corporate and geopolitical power, and continual energy consumption.”

Crawford investigates an important aspect of the construction of AI devices, revealing the “complex structure of supply chains within supply chains” that goes into harvesting the planetary resources to produce them. I found further insight into this when I read an article referenced by Crawford, a piece by Jessica Shankleman titled “We’re Going to Need More Lithium” (Bloomberg, September 7, 2017, https://www.bloomberg.com/graphics/2017-lithium-battery-future/). I found it interesting that though demand for the minerals used to make the batteries in our devices is increasing, this industry challenge is never visible to end customers. As long as the supply is available, “even if the companies struggle a bit to keep pace with demand, the global scramble may never be reflected in the sticker price of a Tesla or a MacBook Pro.”

Crawford’s insights remind me of the concerns of theorist and writer James Bridle, who has addressed similar concerns about the illegibility of AI and how all of these technological problems are political ones first. His solution is to educate users as much as possible and for people to become critical users of the systems of technology they participate in. However, it makes me wonder: if these systems of technology ultimately benefit only the elite, what incentive do those who control them have to share them?

In her map visualization, she notes how “many of the triangles…hide different stories of labor exploitation and inhumane working conditions. The ecological price of transformation of elements and income disparities is just one of the possible ways of representing a deep systemic inequality.” This essay was an insightful way to see how the extractions often conducted in the background of seemingly benign objects like the Echo are shrouded in mystery and complexity, and what an enormous task it took to unpack them. Knowing how dangerous and effective AI is, I wonder: how do we become more sensitive to how AI is used?

Bridle offers more education about these AI systems as a way for participants to become more conscious, more critical-thinking users, but in a world where we know we are being eavesdropped on, what, on the flip side, is the risk of becoming too self-conscious? And knowing this, how can people be educated about how this system operates in a way that would actually produce change?

Isabel

The article puts the AI behind Amazon's Alexa device in a broader perspective. I understand its main message, meant as a warning, to be that AI applications like Amazon’s Alexa are much more than just a plastic gadget in your living room. For a piece of technology that we ultimately take for granted, the article contextualizes the device with ideas from history, geography, chemistry and economics – all framed by a strong dose of critical theory.

The article explains how the data collected to enable and continuously improve an AI device comes from us humans as its source. “This is a key difference between artificial intelligence systems and other forms of consumer technology: they rely on the ingestion, analysis and optimization of vast amounts of human generated images, texts and videos.” I guess we are increasingly aware of how much a device like Alexa knows about us, especially in regard to information we consider to be private. While we know privacy might be compromised, we don’t always act upon that awareness by eliminating these devices from our lives. With all the data we put out by using AI, should we change our handling of the private data we ourselves create, and not give it away so easily for companies to use? Or should AI-driven services and products, built by “a few global mega-companies,” be more respectful of our privacy? Probably a combination of both.

“Drawing out the connections between resources, labor and data extraction brings us inevitably back to traditional frameworks of exploitation. […] These processes create new accumulations of wealth and power, which are concentrated in a very thin social layer.” The article zooms in on the power implied by employing AI services and products. A power captured by a few. Economy of scale is a key advantage for AI companies: the more data you have, the more successful you will be, and the more data you receive. The article warns about this loop, but does not proceed to ideas on how to reform the system to cope with this self-reinforcing accumulation of AI power. Should this be restricted in some form? How would restrictions impede further positive AI development? Is there such a thing as an enlightened form of monopoly in data-driven capitalism?

Kircher's listening system is an interesting analogy to the Alexa, as its real functionality was invisible not only to the ones being listened to on the piazza, but also to the oligarch using the listening system – the aim was to obscure how the listening statue worked. Just as a typical Alexa user has little clue what goes on under the hood of the device, only Amazon, the creator, knows about the inner workings behind the information extraction. I think Kircher's example shows the importance of transparency regarding the information Amazon gathers about each person, and the usage thereof. It is not only relevant to protect privacy itself, but also to know to what extent your data is used, for what ends, and who has access to it.

Suzanna Schmeelk

Plastic, metals and electricity represent systems that think.  Culture predicts that these elements will produce the singularity.  Kate Crawford takes on this discussion in 21 parts, emphasizing in part XXI "Many of the assumptions about human life made by machine learning systems are narrow, normative and laden with error."  Crawford explains in part XIX, "Every form of biodata – including forensic, biometric, sociometric, and psychometric – are being captured and logged into databases for AI training. That quantification often runs on very limited foundations: datasets like AVA which primarily shows women in the ‘playing with children’ action category, and men in the ‘kicking a person’ category. The training sets for AI systems claim to be reaching into the fine-grained nature of everyday life, but they repeat the most stereotypical and restricted social patterns, re-inscribing a normative vision of the human past and projecting it into the human future."  

In part XVIII, Crawford explains early forms of AI: "In 1770, Hungarian inventor Wolfgang von Kempelen constructed a chess-playing machine known as the Mechanical Turk. His goal, in part, was to impress Empress Maria Theresa of Austria. This device was capable of playing chess against a human opponent and had spectacular success winning most of the games played during its demonstrations around Europe and the Americas for almost nine decades. But the Mechanical Turk was an illusion that allowed a human chess master to hide inside the machine and operate it." Crawford explains that AI is shrouded in mystery and complexity. Does AI have to be mysterious and complex? Are simple principles like the Theory of Relativity and gravity complex? Many researchers explain AI as "complex" and say that we must simply "accept" it. Is strict acceptance true understanding?

Footnote: 29 “Containers Lost At Sea – 2017 Update” (World Shipping Council, July 10, 2017), http://www.worldshipping.org/industry-issues/safety/Containers_Lost_at_Sea_-_2017_Update_FINAL_July_10.pdf.

After hours of labor and invested money, cargo containers are lost at sea. In fact, one of the most recent articles in the NYT asks, "Why are Garfield phones still washing up on the seashore in France after decades?" The answer, plain and simple, was that a cargo container fell off a ship nearly 30 years ago, and this leaking container is still gradually releasing Garfield phones onto a French beach. In the applied world, perfection is nearly impossible to reach. Most studies are happy with a 95% confidence evaluation. But, I ask everyone, what about the cases in the 5%? Are they less important? I am particularly biased toward the 5%. My father was a person in the 5%, and he has been in a battle for survival ever since a misdiagnosis. It turns out those items in the 5% are extremely important; they could be you.

Clare Churchouse

‘Anatomy of an AI System’ by Kate Crawford for me depicted a huge web of planetary connections between what looks like an innocuous electronic device and the highly disturbing account of the resources it took to make it - as well as the data the device is collecting all the time. And for what purposes? For more capital, which goes to just a few. The essay raises the question of what the data will be used for. The text is a stark reminder that what looks sleek, clean, and high-tech whitewashes huge exploitation behind the technology.
As Crawford noted, “The territories are dominated by a few global mega-companies, which are creating new infrastructures and mechanisms for the accumulation of capital and exploitation of human and planetary resources.” and “At every level contemporary technology is deeply rooted in and running on the exploitation of human bodies.” The instances she describes are disturbing and powerful examples.

The writing maps ideas across centuries and across the globe, creating scale and import by connecting details - though the scale is “almost beyond human imagining.” The essay uses the analogy/format of mapping to help make connections and attempt to give an understanding of what is involved in making AI systems, a scale that is “too complex, too obscured by intellectual property law, and too mired in logistical complexity to fully comprehend in the moment.”
How can we begin to see it, to grasp its immensity and complexity as a connected form? For instance, just asking Alexa, the AI agent, to turn on a light switch “requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch.” And tracing the resources it has taken to make a device is incredibly complex since such vast numbers of companies are involved in sourcing small parts of each appliance’s materials.

The text emphasizes that we are being “tracked, quantified, analyzed and commodified” all the time when we interact with tech platforms. The devices are cloaked in secrecy – the Echo is a sleek box which you aren’t meant to open when the battery runs out. “But in contrast to user visibility, the precise details about the phases of birth, life and death of networked devices are obscured. With emerging devices like the Echo relying on a centralized AI infrastructure far from view, even more of the detail falls into the shadows.”

I read the corresponding piece on Money.com (Julia Glum, “The Median Amazon Employee’s Salary Is $28,000. Jeff Bezos Makes More Than That in 10 Seconds,” Time, May 2, 2018, http://time.com/money/5262923/amazon-employee-median-salary-jeff-bezos/), which quantified Jeff Bezos’ salary in relation to those of his employees. A Nov 2017 UK report detailed that those who work for Amazon “weren’t allowed to sit, needed to package products every 30 seconds and dealt with timed bathroom breaks.” The May 2018 article noted 560,000 employees (f/t and p/t): “in the U.S., the average hourly wage for a full-time associate in our fulfillment centers, including cash, stock, and incentive bonuses, is over $15/hour before overtime.” Contrast that with numbers worked out from Bloomberg that Bezos was earning about $11.5 million per hour from Jan to May 2017. And on an annual basis, Amazon filed (because of a new federal rule that requires public companies to disclose the pay ratio between their employees and executives) that its “median worker — the person who makes more than half of the staff and less than half of the staff — earned $28,446 in 2017. For comparison, Bezos’ annual compensation last year was over $1.6 million.”

Where are the unions – why do so many people have so little power compared to one person?
And how can we not be complicit in this exploitation?

caitlyn ralph

I like that the "Anatomy of an AI System" piece starts with a very relatable anecdote. I think the title can be quite intimidating and opening with a scene everyone is mostly familiar with (my friends and I just spent an entire Friday night playing with an Alexa and its Spotify capabilities 😝) is a great way to draw a wide audience, in terms of interest and skill-level, into the relevant topic (I ponder about the accessibility of computational topics to the layperson often).

My first thought was that this should be read by everyone who owns an Alexa—and then I realized that's unrealistic for the average consumer (I don't think I would read it when I bought one). However, I think a different format, a video in particular, would be much more efficacious in delivering its message.

The title is also still throwing me off a bit—I'm not exactly drawn to read the piece based on it, but if it hinted at something subversive about the Alexa, I think that would be more intriguing.

Overall, I feel this is a really interesting take on AI. The topic is broken down in a way that links humans to the technology more so than other AI readings (I took a class on Ethics in Computer Science in college, so I've dug into a decent amount of this with an ethical lens). It feels grounded in social, historical (such as the story of the Jesuit polymath Athanasius Kircher), and international context. The use of the word anatomy adds a chilling human-like nature to the voice assistant itself, even though the signs of that should have been more obvious to me before reading this piece.

A little separated from the rest of the commentary here, I thought this statement was particularly chilling:

"Like a pharaoh of ancient Egypt, he stands at the top of the largest pyramid of AI value extraction."

It's almost scary to compare the leaders of these unbelievably massive companies to monarchs—it adds a whole new, questionable side to capitalism.

While I did take a class on Ethics in Computer Science, I'm curious what point-of-view this article on AI and the Alexa is being written in—is it philosophical? Historical? Both?

I think I've spoken about this in class before, but another thought-provoking topic mentioned in the article is the advent of a new form of colonialism, where large companies are monopolizing and controlling internet access in less-developed places. As a quote from Vandana Shiva puts it, "Now it is the turn of biodiversity and knowledge to be 'enclosed' through intellectual property rights (IPRs)."

The article I chose from the footnotes was from The Guardian, about the Royal Free breaching a UK data law, the Data Protection Act, in a deal with Google's DeepMind. The app created by the partnership continued to be tested after patient data was transferred. In this case, culpability for using real data for testing fell on the Royal Free rather than DeepMind, since the Royal Free was the data controller.

Mio Akasako

The content of this article was fascinating, from the visualization at the very beginning, to the multiple stories it told that made up the components of the whole. I appreciated the density of it--any one of the components could have stood on its own as its own visualization-story combination. The overall composition of the article helped me think more in depth about what style of storytelling I want to do with my own MS1 project.

The overall composition was very simple yet effective--no interaction, no parallax/scrolling, no fancy click-effects. Just one overarching visualization at the beginning, the story of an AI system with its origin in the Amazon Echo broken into 21 parts, and illustrations spread throughout that reference those in the visualization. They used only one color (a dark purple) and a single font (Arial). And still, there was so much information to be gained from the page, especially with the addition of the footnotes. It made me realize that interaction is not necessary to tell an effective story. I don't need to add interaction for the sake of making my project interesting; I should add it only if I think it will contribute to the effectiveness of what I am trying to communicate.

Though in the back of my mind I knew that there was huge production cost, human labor, and environmental destruction that went into all of the electronic gadgets I owned, I did not realize to what extent. I never really thought about it deeply; but it is also true that it is not advertised openly, as is the case with most production lines. I was generally aware of the unequal distribution of wealth and danger to life across the spectrum of people involved (miners vs Jeff Bezos), but I had never properly given thought to where exactly these rare minerals and metals that make up our electronics originate from.

“Your smart-phone runs on the tears and breast milk of a volcano. This landscape is connected to everywhere on the planet via the phones in our pockets; linked to each of us by invisible threads of commerce, science, politics and power,” write Liam Young and Kate Davies. I was struck by the poignancy of this statement. Legends were built on the landscape of the Salar de Uyuni; even now, it is a destination for backpackers in South America because of its beauty. And through excavating it for lithium for electronics, we contribute to its destruction and monetization.

I read a bit more about the Salar in this paper by Clark, Wallis et al., 2017 (https://sci-hub.se/https://onlinelibrary.wiley.com/doi/pdf/10.1111/gto.12186). It states that the salar is a potential major source of lithium, containing 50–70 percent of the world’s reserves according to the USGS, and details the legends surrounding its formation. It also describes how one can detect the rate and effects of climate change by examining the rock formations surrounding the salar.

Kiril Traykov

Anatomy of an AI System +The false monopoly: China and the rare earths trade

This article articulates the point that artificial intelligence is just like any other consumer product: there is a long and unjust supply chain behind delivering it to us. Based on the author’s descriptions, we can divide artificial intelligence into two groups: devices and platforms. The former have the typical supply chain structure, in which “rare Earth materials” are excavated, sold, and installed in AI creations through a manufacturing process. This is no different than any other consumer good, such as jeans. The latter is trickier to comprehend, because platforms and tools remain mostly conceptual to us and not directly visible, as they rest in the cloud in most cases. Yet even for them there are energy costs and programmer sweatshops that can be traced, and these deleterious practices call into question the ethics of building AI.

Overall, the process of creating AI is no different than the many-times-blogged-about unfair supply chains for other consumer products. The difference with AI, however, is that the profit pool at the top is vastly untapped for the people holding the production rights to AI, because AI could be infinite, in the cloud, never running out of power, unlike natural materials, which will eventually reach their capacity. My opinion of this essay is that it sounds a little sinister, attempting to create an apocalyptic vibe with carefully picked examples and the feeling of the unknown (of the future). Human systems have adapted to structural changes in the past and will do so again. The paradigms for wealth, opportunity, and knowledge creation will shift, but they will not do so overnight, and the next generation of humans (Gen Z and after) will have enough time to adapt. So the essay’s attempt to instill fear is rather unfounded, in my view.

I loved reading the false monopoly article detailing how China does not, contrary to belief, have a monopoly on rare Earth materials (despite producing 85%+ of them). The article states that China simply found a way to produce and export these commodities cheaply, due to its highly developed industry and lower labor costs (not to mention lax environmental regulation and a booming cottage industry of illegal rare earth miners and exporters, in which 40% of all production is illegal). These Earth materials are not “rare” to China alone, but the profit model of larger corporations dictates going to the most cost-advantageous place, and so they go to China, as other countries cannot deliver at the same competitive cost (despite some opportunities to do so in countries like Kazakhstan). So the real issue is corporations’ inability to let go of their profit models. Doing so would contradict the principle of maximizing shareholder value by delivering profits that increase the investment popularity of the company. In essence, the issue is the human-developed system for maximizing profit for the people at the top. The human system does not maximize profit for everyone involved; it only seeks to solve for the top. And until AI develops enough to offer some other solutions, “so the cycle will continue,” as the author concludes!

Ryan Best

Anatomy of an AI System by Kate Crawford and Vladan Joler is a super ambitious project detailing the behind-the-curtain processes that go into ever-popular home assistant AI systems, which are branded and designed precisely to mask this complexity. I found that distinction very interesting - the Amazon Echo is a simple, sleek black cylinder with no buttons, marketed as a hassle-free way to make your life a bit more convenient. Meanwhile, the process to excavate and assemble the raw materials it requires, train it through a complex system of human labor and data, and manage the end of its life cycle could not be more "out of sight and out of mind" for its users.

I really valued the perspective of this piece, discussing aspects of our current cultural technological infrastructure that are incredibly costly but are dis-aggregated from the consciousness of the vast majority of that infrastructure's users. Shedding more light on the natural resource strain caused by constant phone upgrades is an aspect I'll admit I haven't heard much about or considered, but one that they rightly focus on as the "begin and end" of the process of the Amazon Echo Dot (as a microcosm of the countless other devices and examples that strain the finite resources of our planet). This line in particular has stuck with me:

Each object in the extended network of an AI system, from network routers to batteries to microphones, is built using elements that required billions of years to be produced. Looking from the perspective of deep time, we are extracting Earth’s history to serve a split second of technological time, in order to build devices that are often designed to be used for no more than a few years.

Also ever-important is the often unconsidered human cost of the technology supply chain, which includes numerous examples of exploitation and unsafe working conditions. I found the fractal imagery effective in portraying this message - it is a nice corollary to see how individual triangles form part of a whole. Each triangle is itself incredibly small, but is an integral part of the foundation that enables the creation of this behemoth of a triangle, with a company or individual sitting at the top and reaping most of the rewards.

The authors' perspective on how users of technology are commodified and made both product and worker is also a salient one, one that I think is overlooked by the vast majority of the population. They point out that "the Echo user is simultaneously a consumer, a resource, a worker, and a product", whose interactions with the device are being used to train the AI system and make it that much more sophisticated. In this vein I read the New York Times piece Artificial Intelligence, With Help From the Humans. It references the example the authors discuss in their essay, the 'Mechanical Turk', which appeared to be a machine that had reached an unprecedented level of sophistication at chess but was actually controlled by an unseen human. The way this strategy has been applied to contemporary technology is both ingeniously impressive and frighteningly distressing. We think of feedback loops with AI as humans asking computers a question and getting a result back, while this perspective flips that relationship on its head. The computer attempts to ask a question that a human is much better at answering, using that feedback loop to inform or validate AI logic. Mechanical Turk has used crowdsourcing to find people who can complete simple, almost mindless tasks that earn them some easy income (albeit in tiny increments for each task) but provide invaluable insight to AI logic and algorithms that would struggle mightily to complete these tasks without human input. While MTurk pays individuals micropayments for this service, implementing CAPTCHAs for human verification has effectively unlocked the entire internet userbase as free labor for AI systems in the same way.

From a pure data visualization perspective, I have great respect and admiration for a project that attempts to model something so complex and over-arching like this lifecycle of AI that includes "the history of human knowledge and capacity" only as one aspect of the full visualization, which they put "at the bottom of the map". I think the visualization is largely successful in communicating the complexity of this system, but like we mentioned with the node diagrams, shooting for exhaustiveness in a chart like this can often be a bit hard to ensure and hard to comprehend. I would say that I wouldn't get the same value from the PDF viz without reading the authors' text that goes with it.

Jed Crocker

The most agonizing sections of the Crawford/Joler essay were those that dealt directly with the planet’s physical environment. From the Salar de Uyuni in Bolivia, with its associated volcano myths and rapid lithium extraction for batteries, to the extinction of the palaquium gutta tree for use in cable insulation -- the cost the planet’s environment pays in the name of short-term technological advances and global ‘logistics’ is immense and alarming. I was struck and depressed by the fact that the planet built up its ‘rare earth’ resources over millions of years, and yet enormous damage is done to get at a fraction of what’s produced, which is then used as a component in some electronic device with a lifespan of a few years... it’s appalling! The essay (and its associated map, to a lesser extent) was really effective throughout its descriptions of the systems supporting AI and technology, but the aspects of the system having the most direct impact on the environment, and the clarity with which these were described, felt truly like a punch in the gut.

The sourcing of components to make an end product was another outline of this reading that readjusted my focus of how things are made. It’s easy to forget how many vast, invisible networks are caught up in the making of a sleek digital product. The opaqueness of how these supply chains operate is alarming.

I also read the “Containers Lost at Sea” 2017 update report from the World Shipping Council, for more information about the thousands of shipping containers lost at sea every year. It turns out that figure is slightly overblown, due to ‘catastrophic losses,’ which need to be counted separately. The report’s graph shows an enlightening view of lost shipping containers, particularly how devastating 2013 was -- due in part to the tragic sinking of the MOL Comfort (the crew was rescued, but that was not the case for the containers!).

The total number of lost containers does seem to hover around 1,000 annually since 2010, per data reported by 80% of the world’s shipping carriers.

I also read some of the Wired article discussing working conditions at Amazon’s fulfillment centers, which left me feeling bleak.

What is the solution for so many dispersed component manufacturers and sources of electronic parts and rare earth materials? Truly a wicked problem. It would be great for everyone to consume less, but is that realistic? It doesn’t seem like it, with people walking around with multiple smartphones.