What we learnt about the ethics of AI in 2021
This year saw artificial intelligence in the headlines: a breakthrough natural language model became publicly available, Elon Musk revealed a peculiar humanoid robot on stage, and neural networks discovered some 300 previously unknown exoplanets in existing Kepler telescope data.
Over the same period, the world at large, and the artificial intelligence research community in particular, grappled ever more seriously with the ethical implications of the technology.
Big tech was at the forefront of the challenge of balancing ethical considerations against progress. Timnit Gebru’s controversial exit from Google at the end of last year, related to her work scrutinising large language models for their environmental costs, possible biases, and potential for misuse, attracted attention throughout the year. Facebook, too, shut down research on the tribalism and polarisation that its algorithms helped induce.
Twitter’s solution to the challenges its own algorithms posed (in its case, how its algorithms chose to crop images to fit a standard aspect ratio) is one that might become increasingly common in the future. Inspired by conventional ‘bug bounties’, in which companies pay hackers to find flaws in their code, the ‘bias bounty’ competition allowed anyone to win rewards for finding prejudices in Twitter’s algorithms. If companies can tolerate the negative press that such discoveries attract (certainly uneasy waters to traverse, when Twitter’s competition prompted articles such as ‘Twitter’s racist algorithm is also ageist, ableist and Islamophobic, researchers find’), crowdsourced research may help ensure that real-world prejudices are not reflected in the algorithms that increasingly control the information ingested by billions of social media users.
Big tech wasn’t alone in turning its attention to the consequences of new AI developments. At NeurIPS, one of the world’s most prestigious conferences for presenting advances in machine learning, researchers engaged with the question of how to ensure they personally consider the societal impact of their work while also following shared standards on a wide variety of issues. A paper analysing the broad ethics statements submitted to last year’s conference found that ‘clear expectations and guidance’, amongst other factors, were important in ensuring reflection on the consequences of published work, perhaps explaining this year’s checklist-based approach.
However academia and industry choose to contend with the implications of advances in artificial intelligence, the heightened interest in this field makes one thing clear: AI increasingly shaping the world we live in, and how we understand it, won’t be an unambiguously positive change unless those at the forefront of the technology make it so.