Navigating the ethics of artificial intelligence through the lens of Aristotle
Artificial intelligence has been on the rise. Text-based A.I. in particular is a tool that can help students, professors and anyone looking for assistance with their work.
As more people begin to use A.I., some are questioning the ethics involved in its use. Mary Mousa, a biology and philosophy major, addresses A.I. ethics with an Aristotelian approach.
Mousa describes Aristotelian philosophy as being focused on the being doing the action, not the action itself. She introduces Aristotle’s idea of eudaemonia, often described as living a life of virtue, as the ultimate goal for mankind. “Action begets habits, habits begets virtues, virtues begets character,” says Mousa.
Mousa highlights Aristotle’s Golden Mean, which holds that “virtues are the means between two extremes.” With regard to artificial intelligence, this implies that, in order for A.I. to be ethical, it needs to be morally aware.
But that’s the problem: Mousa’s research suggests that A.I. lacks the ability to replicate human emotions effectively.
Mousa points out that this moral indifference is enough to argue that the use of A.I. is unethical. Her research also concluded that A.I. can “atrophy our ability to critically think” and “encourage shallow thinking,” and that it “alleviates emotional labor.”
Zoie West, a senior family science major, says that while she finds A.I. interesting, she wonders what it is doing with the information she plugs in. “I think it’s a cool concept. If we have the technology to make living more convenient, why not do it?” she says. “At some point, I do start to wonder what A.I. is doing with my information.”
While A.I. can conveniently assist with mundane tasks, such as putting data into a spreadsheet or retrieving information, it is important for us to know how to accomplish these tasks on our own.
“I’m a very hands-on learner,” says West. “I hope that A.I. doesn’t take away any kind of learning experiences in the future.”
Mousa argues that research shapes the way we learn and is a skill we need to keep. “We won’t know if A.I. is reliable if we don’t have prior knowledge,” she says.
Mousa explains that while doing her research she found that A.I. cannot cite sources traditionally. She suggests that there may be biases, or that the A.I. “hallucinates information” if it cannot cite where it got the information.
A.I. can benefit users by quickly editing, offering information, and more. It can be thought of as a “tool in a toolbox.” However, Mousa suggests the more we rely on artificial intelligence, the more unethical its use becomes. She poses the question: “How can we avoid over-reliance on A.I.?”
Mary Mousa’s advice to us is to “do everything in moderation” and “don’t lose aspects of what makes us most human.”