Anatomy of an AI System

The article puts the AI behind Amazon's Alexa device in a broader perspective. I understand its main message to be a warning: AI applications like Amazon's Alexa are much more than just a plastic gadget in your living room. The article contextualizes a piece of technology that we ultimately take for granted with ideas from history, geography, chemistry and economics, all framed by a strong dose of critical theory.

The article explains how the data collected to enable and continuously improve an AI device ultimately comes from us humans. “This is a key difference between artificial intelligence systems and other forms of consumer technology: they rely on the ingestion, analysis and optimization of vast amounts of human generated images, texts and videos.” We are increasingly aware of how much a device like Alexa knows about us, especially with regard to information we consider private. Yet while we know our privacy might be compromised, we don't always act on that awareness by removing these devices from our lives. Given all the data we give off by using AI, should we change how we handle the private data we ourselves create, and stop handing it over so easily for companies to use? Or should the AI-driven services and products built by “a few global mega-companies” be more respectful of our privacy? Probably a combination of both.

“Drawing out the connections between resources, labor and data extraction brings us inevitably back to traditional frameworks of exploitation. […] These processes create new accumulations of wealth and power, which are concentrated in a very thin social layer.” The article zooms in on the power implied by employing AI services and products, a power captured by a few. Economies of scale are a key advantage for AI companies: the more data you have, the more successful you will be, and the more data you receive in turn. The article warns about this loop, but does not go on to suggest how the system might be reformed to cope with this self-reinforcing accumulation of AI power. Should it be restricted in some form? To what extent would restrictions impede further positive AI development? Is there such a thing as an enlightened form of monopoly in data-driven capitalism?

Kircher's listening system is an interesting analogy to Alexa, as its real functionality was invisible not only to the people being listened to on the piazza, but also to the oligarch using the listening system: the aim was to obscure how the listening statue worked. Just as a typical Alexa user has little clue what goes on under the hood of the device, only Amazon, its creator, knows the inner workings behind the information extraction. I think Kircher's example shows the importance of transparency regarding the information Amazon gathers about each person and how it is used. It is not only relevant to protect privacy itself, but also to know to what extent your data is used, for what ends, and who has access to it.
