From Turkers to ChatGPT: How AOC Questions Our Relationship with Artificial Intelligence

In spring 2025, the magazine AOC (Analyse Opinion Critique) published its first issue: a white cover bearing the title “Artificial Intelligence” in orange.

A delight for text lovers, the issue compiles articles previously published on the magazine’s online platform. It opens with a piece by Alexandre Gefen and Philippe Huneman titled “The Challenges of AI to Philosophy,” which explores the question of accountability in large language models (LLMs). Can we blame an AI for giving false information the way we would blame a lifeguard who declares swimming safe just after raising a red flag? Would an autonomous vehicle’s algorithmic driver be held responsible in the event of an accident? The same question arises in the art world, where AI generates images through “promptography”: what room is left for the uniqueness and originality of artistic works?

Next comes an article by Antonio A. Casilli, “The Automaton and the Drudge,” which describes a role reversal between learning algorithms and humans. Platform users are reduced to data providers who help machine learning systems expand their datasets, because machine learning still depends on non-automated labor. This brings us back to the figure of the “Turker,” a reference to the 18th-century mechanical chess player known as the Turk, which was presented as a robot but was secretly operated by a hidden human.

This model is reproduced by companies like Amazon, where thousands of “Turkers” perform sorting, labeling, and calibration tasks: what might be called “artificial artificial intelligence.” Amazon Mechanical Turk even became a service that helps businesses digitize their processes. Demand for these microtasks comes mainly from countries such as the United States, France, the UK, Canada, and Australia, while the workforce is drawn largely from India, the Philippines, Nepal, China, Bangladesh, and Pakistan: an economic dependence with post-colonial overtones, the author notes.

Among the issue’s articles, one that stands out is by Mathieu Corteel, who asks, “Why Don’t AIs Think?” He argues that today’s AIs are less powerful than commonly believed: they remain weak, handling only simple, repetitive organizational tasks, and their biases merely reflect our own. In an interview conducted by Benjamin Tainturier, Bernhard Rieder raises the issue of copyright for AI-generated creations, particularly when artists’ works are used to train AI systems; those artists could claim compensation for the creative material their work contributes to these models.

Julie Noirot and Pierre Sujobert take on the sensitive topic of “Thought in the Age of Technical Reproducibility,” at a time when ChatGPT gives anyone access to expert-level language. The academic field thus finds itself deprived of the traditional gatekeeping that used to define who belonged and who didn’t, a shift that could stoke fears of a redistribution of intellectual capital and the loss of a monopoly over scholarly discourse. But the authors downplay this concern, pointing out that the internet, once expected to democratize access to knowledge, did not ultimately upend the scholarly hierarchy.

To conclude, let’s note that these essays mostly explore philosophical and societal angles, often neglecting the technical and practical aspects. Yet it is hard, in our view, to form an opinion without first consulting potential users in the field and analyzing AI’s concrete contribution to society, the economy, and innovation.


By Hasnae Chami-Boudrika 
