Ethics of AI must include sentient animals, say researchers
Scientists argue that animal sentience should be considered in AI ethics discussions, which currently neglect non-human species.
Published: 02 September 2025 | Authors: Borbala Foris & Jean-Loup Rault (University of Veterinary Medicine Vienna)
Scientists are urging the artificial intelligence community to expand ethical considerations to include sentient non-human animals, a perspective currently missing from mainstream AI alignment discussions.
In a letter to Nature, the researchers respond to a recent Comment article by I. Gabriel et al. that addressed ethical challenges in AI development. While praising the work, they highlight its "omission of risks for animals" despite robust evidence for sentience across many species.
Key points from the correspondence:
- Scientific consensus: Multiple studies demonstrate sentience (the capacity to have subjective experiences) across a wide range of animal species
- Ethical oversight: Current AI alignment frameworks predominantly focus on human impacts while ignoring moral considerations for sentient animals
- Practical implications: AI systems interacting with or affecting animals (e.g., in agriculture, research, or conservation) require ethical safeguards
The authors emphasize that philosophical and technical approaches to AI safety should incorporate the wellbeing of all sentient beings, not just humans.
Related Developments in Consciousness Research
Other recent Nature coverage highlights growing interest in sentience across species and even artificial systems:
- How to detect consciousness in people, animals and maybe even AI
- The consciousness wars: can scientists ever agree on how the mind works?
This intervention comes as AI systems become increasingly deployed in domains with direct animal impacts, from automated farming to wildlife monitoring technologies.
DOI: https://doi.org/10.1038/d41586-025-02796-0
Declaration: The authors report no competing interests.
About the Author

David Chen
AI Startup Analyst
Senior analyst focusing on the AI startup ecosystem, with 11 years of experience in venture capital and startup analysis. A former member of Sequoia Capital's AI investment team, he now works as an independent analyst writing AI startup and investment analysis for Forbes, Harvard Business Review and other publications.