Bryan-Kinns, Nick (2024) Reflections on Explainable AI for the Arts (XAIxArts). Interactions, 31 (1). pp. 43-47. ISSN 3005-0731
Type of Research: | Article |
---|---|
Creators: | Bryan-Kinns, Nick |
Description: | Current Explainable AI (XAI) research mostly examines functional or technical explanations of what an AI is doing—for example, explaining how an image classifier works in order to debug it when misclassifications occur. In these settings there is typically a right answer, or a correct set of outputs, that we are trying to train the AI to arrive at. In the arts there are no right or wrong answers and no correct set of outputs; we are often interested in outcomes that are surprising or unusual, or even disturbing and disruptive. Furthermore, in creative practice the focus is usually on the output itself rather than on detailed explanations of how it was produced. What, then, does it mean to explain AI models in an artistic context? How do such explanations differ from more functional explanations and contexts? And what insights might exploring these questions provide for XAI research more broadly? To begin exploring these questions, I brought together an international team of researchers to host the first international workshop on explainable AI for the arts (XAIxArts) at the 2023 ACM Creativity & Cognition conference. In this article, I reflect on the key themes that emerged in the workshop, what we learned about XAI and the arts, and how that might relate to XAI more broadly. |
Publisher/Broadcaster/Company: | ACM |
Your affiliations with UAL: | Research Centres/Networks > Institute for Creative Computing |
Date: | 10 January 2024 |
Digital Object Identifier: | https://doi.org/10.1145/3636457 |
Date Deposited: | 20 Jun 2024 13:30 |
Last Modified: | 20 Jun 2024 13:30 |
Item ID: | 22068 |
URI: | https://ualresearchonline.arts.ac.uk/id/eprint/22068 |