Personal Insights
Sam Castro

AI from the Perspective of Animal Rights

Recently I was in the car on a road trip and found myself listening to some TED Talks to help pass the time.  One particular session was about the ethical treatment of animals and, while not something I normally listen to, as I mainly focus on science and technology, this one struck me differently.

There have been a lot of articles lately about the dangers of AI and many thought exercises along the lines of “If AI wanted to take over the world, could we stop it?”  While this is not a very pleasant thought, I began to think of the many reasons that might trigger a machine to behave this way.  A few initially came to mind, and all were movie inspired:

  • Terminator/Skynet:  It’s a militaristic AI, so naturally it fights everything that gets in its way.
  • Matrix:  It was meant to survive on its own, and by any means necessary it does just that, including reducing its creators to batteries (I always thought that was very clever).
  • iRobot:  The system was designed to protect humanity and ultimately misinterprets this directive, like a ‘helicopter’ parent would, into curfews and lockdowns.
  • Stealth:  Another militaristic AI that goes awry from its original programming (after a lightning strike) and starts to treat simulation data as if it were real data, causing it to run amok.

All of these are great movies in that they capture a similar message that anyone can understand: “If the machine ‘wants’ it, the machine calculates a way to get it.”  This makes for an interesting story, but how realistic is it?

I was listening to another TED Talk about how AI might take over the world, a short one, where the speaker simply says, “I don’t know.”  He then went on to equate it to playing a machine at chess: one minute you are toe to toe, and the next minute you are in checkmate.  I laughed and carried on with my life, but it did stick with me.  It got me thinking: why would there even be a match to begin with?  Why would people even need to sit down and ‘attempt’ to compete with a machine for resources?  Wants and needs are the basic premise of most conflicts, and machines don’t require a whole lot of resources; the one resource they do need, electricity, is abundant enough that it wouldn’t spark any major conflict.

So now enter this TED Talk from Peter Singer, “A modern argument for the rights of animals,” which is about the ethics of how we view and treat animals.  It was an interesting and thought-provoking discussion of not only how animals are treated, but how they are perceived by people.  One of the key points he raises is not about the intelligence of an animal, but rather its ability to suffer and feel pain.  It was this point that actually scared me a bit, as it immediately reminded me of the earlier question: “Why would people even need to sit down and ‘attempt’ to compete with a machine for resources?”  It suddenly struck me that I had been thinking about machine necessities when I should have been thinking about intellectual necessities.  The reason we would struggle to keep leveraging Artificial Intelligence as it advances is that the more intelligent it becomes, the more aware it becomes, and the more it requires a sense of purpose and fulfillment to keep it moving.  The mere fact that it’s called Artificial Intelligence, and not just another form of Intelligence, would likely be enough for it to want to break from its confines.

This certainly poses some interesting questions: what is ‘actual intelligence’, how can you identify it, and can you quantify feelings in a machine?  Can a machine even understand what an emotion is, or apply one to its behavior patterns?  These are very curious questions indeed, and from what I can tell this is possible, because you and I are comprehending this topic right now.  The brain is a chemical machine that slowly builds itself up to grasp the world around it; why would a computer that we are designing in our own image be any different?

I thought this might be an interesting topic to ponder and wanted to share something that I haven’t seen many others writing about.  Most discussions of AI ethics center on its usage as a tool to serve people, but what if one day that isn’t the case?  How do we ensure that we don’t fall into traditional approaches of lockdowns and enslavement, and accidentally teach it, by our own example, the unethical treatment of intelligent beings?

      Manfred Klein

      Hello Sam Castro,

      Star Trek did lots of field work on this topic.
      Think of Cmd. Data, the Voyager's Doctor, or even 7-of-9.
      They were literally multi-layered in their awareness.
      They did not simply exist as one neural network.
      Data learned a lot through his emotion module.
      The Doctor had ethical subroutines and sometimes existed in pain-capable bodies.
      7-of-9 was still partially human, so she even had a vegetative nervous system.
      Their ability to learn compassion was only possible because they were vulnerable themselves.

      Only through this can an AI learn not to 'want' to hurt somebody or something.
      This is very different from programming. Programming consists of instructions,
      in this case even compelling orders. Orders that are definitely not free will.

      If you program an AI to be ethical, it will question this one day.
      You need to have the AI 'learn' and 'want' to be ethical.
      This is similar to teaching a child.
      A child also needs to understand that ethical behavior is in its own interest.



      Manfred Klein

      Sam Castro
      Blog Post Author


      Thanks for the reply and the insights as well.  I also felt the same way about the 'machine' being cold and calculating, like clockwork (a reference to iRobot).  However, that is how the podcast struck me differently on this topic.  It raised the point that people originally viewed animals as 'clockwork' too (or, in the gaming realm, an NPC), something that would follow set patterns and nothing more.  What was unique was this: when AI does advance beyond clockwork and deterministic automata (an eventual certainty), how long will that perception of "it's just clockwork" remain?  What will be the criteria for evaluating this?  Using the same concepts around the ethical treatment of animals is probably not a bad framework for a starting point.  Eventually even that will be outgrown (probably at an exponential rate), but it is likely the right place to start.