AI @ Work

Research
Project Overview
This report for Future Says_ Foundation provided a meta-review of the disconnect between the social and technological narratives of AI in the workplace. We focused on AI in use, in particular AI that does not live up to its promise or perceptions. This included AI failures: AI failing to integrate, AI failing to scale, AI failing to launch, and even AI failing to actually be AI. We did not focus on the technology itself malfunctioning or underperforming. Instead, we treated AI in use as a composite of its integrations and transactions.
Method
This research employed qualitative media analysis. We began by analysing over 400 news media and scholarly journal articles for stories covering artificial intelligence (AI) in the workplace, focusing on stories where the social and technical narratives did not fit together. Our search initially targeted articles from 2019–2020 on the topic of labour; we then included articles from 2016 onwards to capture examples of AI's implementation and its impacts. The scope of our analysis was global, although, in part due to the limitations of the researchers, more stories emerged from the United States, the EU and India.


To organize articles, we constructed a narrative grid with plot synopses. This allowed key themes to emerge through an inductive process, letting us construct narratives from the articles themselves, using methods developed by Hodgetts & Chamberlain5. It also allowed us to see what was missing from many news articles about AI: often the information we were looking for was not explicitly framed as an “AI failure”. The grid allowed us to iterate with a more focused search strategy and expand upon the themes that were emerging.

Our results found three main themes:
Integration challenges often occur when there is a disconnect between workers and their employers, or when settings are not primed for AI usage.

Reliance challenges consider how over- and under-reliance relate to actual and perceived AI in use and practice.

Transparency considers where the work is being done and who does the work of these systems. This differs from algorithmic transparency, or understanding the code underlying AI systems.

The report we wrote was released across Minderoo's distributed network of AI research labs at Oxford, Cambridge, UCLA, NYU and the University of Western Australia, and featured in Private Eye. (Life goals!)