
Published and forthcoming papers:
- Who’s Sorry Now: User Preferences Among Rote, Empathic, and Explanatory Apologies from LLM Chatbots (2026, Transactions on Human-Computer Interactions) – link to preprint; please cite the published version – with Zahra Ashktorab, Jason D’Cruz, Zoe Fowler, Andrew Gill, Kei Leung, P. D. Magnus, John Richards, and Kush Varshney.
- Ethically charged decisions and the future of ‘AI Ethics’ (2025, AI & SOCIETY) – link to pre-publication draft; please cite the published version.
- Chatbot apologies: Beyond bullshit (2025, AI & Ethics) – with P. D. Magnus and Jason D’Cruz.
- Some Reflections on Language Games…and ChatGPT (Forthcoming in C. Sachs (ed.), Interpreting Sellars: Critical Essays. Cambridge University Press)
- Towards an interdisciplinary ‘science of the mind’: a call for enhanced collaboration between philosophy and neuroscience (2024, European Journal of Neuroscience) – with Uri Maoz and Liad Mudrik.
- Practical Perceptual Representations: a Contemporary Defense of an Old Idea (2024, Synthese) – with Alison Springle.
- Shine of Bronze and Sound of Brass: The Relational Perceptual Constructs of Timbre and Gloss (2023, in A. Mroczko-Wąsowicz & R. Grush (eds.), Sensory Individuals: Contemporary Perspectives on Modality-specific and Multimodal Perceptual Objects. Oxford University Press) – with Mazviita Chirimuuta.
- ‘AI for all’ is a matter of social justice (2022, AI & Ethics)
- Phenomenology: What’s AI got to do with it? (2022, Phenomenology and the Cognitive Sciences) – with Alison Springle.
- Perceptual Science and the Nature of Perception (2022, Theoria)
- Reconsidering Perceptual Constancy (2022, Philosophical Psychology) – with Anthony Chemero.
- The Problem of perceptual invariance (2021, Synthese)
- Olympians and Vampires – Talent, practice, and why most of us ‘don’t get it’ (2020, Argumenta)
- Enactivism and the ‘problem’ of Perceptual Presence (2020, Synthese)
- Naturalizing Qualia (2017, Phenomenology and Mind)
***
In progress (email me if you’d like to know more!):
- ‘It’s on all of us’: Shared responsibility for effective AI governance
- An Interactionist account of LLM Interpretability
