Ethics In The Time of Artificial Intelligence (and their sentient attorneys)
Published by Jessica Weisman-Pitts
Posted on August 29, 2022

By Matt Heisie, co-founder and head of product marketing of Ferret, an AI platform that provides critical intelligence to empower companies and individuals.
A Google engineer claimed earlier this year that the company's AI chatbot, LaMDA, had not only achieved sentience but, in perhaps the most obvious sign of modern human intelligence, had also hired its own attorney. Perhaps it's time we start thinking seriously about AI and ethics.
Actually, that's pretty much what I've been doing for the past two years with my executive and engineering teams at Ferret – a new app that provides relationship intelligence about your contacts in real time – as we tackle and, yes, at times struggle with questions of ethics and AI. The burning question: How much do we reveal of someone's past?
Let me provide a concrete example of our concerns: We recently researched an individual whose reputation was sterling … except for a misdemeanor arrest for marijuana possession when he was 18 (technically, an adult). The amount of marijuana he had at the time is no longer illegal in the state where the "crime" took place and where he was arrested and continues to live. Include that arrest in his profile – yes or no?
AI ethics involves more than high-concept philosophies and futuristic dystopian fiction. It’s about the systems we interface with every day—and how the decisions made might impact our lives. From your home to your office, AI already imbues virtually every aspect of your life through facial recognition software, ad-blocking technology, smart home devices, online retail algorithms, search engines, relationship tracking, streaming entertainment and video game development. Yes, it’s here now and it’s everywhere.
At the heart of the question of AI ethics is an elegantly simple directive by UNESCO in its first-ever global recommendation on AI ethics: "We need a human-centered AI. AI must be for the greater interest of the people, not the other way around."
Potential Problems with AI
AI is the field of thinking machines. It’s often thought of as teaching machines to think like humans—ideally, though, AI can deliver more rational, relevant, extensive, and accurate information than the average person.
AI brings us relevant content and faster-than-ever information aggregation—but at what cost?
The innate objectivity of machines should protect us from human fallacies like cognitive bias and mismanagement, right? Well, that's not quite how it works: all AI is built by humans. If we're not careful, individual biases can be baked into the very algorithms that power AI engines.
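A toy sketch makes the point concrete. The data and decision rule below are entirely hypothetical (not Ferret's system or any real product): a "model" that simply learns from skewed historical decisions will reproduce that skew, with no malicious intent anywhere in the code.

```python
# Hypothetical historical decisions, skewed against group "B".
historical = [
    # (group, approved)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    """Fraction of past cases from this group that were approved."""
    outcomes = [approved for g, approved in historical if g == group]
    return sum(outcomes) / len(outcomes)

def naive_model(group):
    """A naive rule 'learned' from the data: approve a group only if
    its historical approval rate exceeds 50%. The bias in the data
    becomes the bias of the model."""
    return approval_rate(group) > 0.5

print(approval_rate("A"), naive_model("A"))  # 0.75 True
print(approval_rate("B"), naive_model("B"))  # 0.25 False
```

Nothing in this code "decides" to discriminate; it faithfully mirrors the examples it was given. That is exactly why auditing training data matters as much as auditing the algorithm itself.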
The biggest threat to ethical AI technology is a lack of foresight and caution. The only way to combat this is by examining, updating, and re-examining the product from every angle, ensuring it operates as intended for every user.
Ethical Concerns in AI
Our morals, principles, and values are at the heart of ethical AI development. Just as morals and ethics are developed by individuals, societies, and cultures, so too must they be adapted to regulate AI. According to the Alan Turing Institute, "AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies."
As the field of artificial intelligence ethics develops, a few central issues are pushing their way to the forefront.
How to Develop Ethical AI
While we can analyze the potential risks and dangers associated with unethical systems, the factors necessary to build an ethical system are not as immediately apparent.
A truly ethical AI system must weigh a range of such considerations from the outset.
I suspect that the Google engineer who thought he witnessed the birth of AI sentience was merely looking at contextual pattern recognition – a decades-old technology, albeit one steadily getting better at understanding its environment (physical and virtual) and reflecting the sentiments of its users. There's no need at the moment to plan for a robot apocalypse; AI can't kill you. But make no mistake, it most certainly can kill your reputation. Ultimately, the answer to whether AI can be ethical is for us to decide.