We hear a lot about ethical AI. It’s an important topic, and I may be splitting hairs when I apply the adjective “moral” instead. I won’t cover here what I believe the difference between ethical and moral to be; either adjective covers my concern in this post. I can hear some of you thinking…
“What’s Your Concern, Andy?”
I’m glad you asked. My concern is efficiency.
A lot of the mechanics of achieving AI solutions involves efficiency. In and of itself, efficiency is a good thing. Efficient processes execute faster and produce higher quality results. My concern begins when efficiency is combined with economics. “Don’t you want to make money, Andy?” Yes. Yes I do. Beyond that, I want you, Dear Reader, to also make money.
I do not want AI making life and death decisions based on cost-benefit analyses.
Please note: I have no real-life examples. It’s merely a concern of my paranoid engineering mind. I consider paranoia a virtue for engineers. Worrying about what could happen is, after all, part of the job.
Also, I am aware that in some medical use cases – such as organ transplants – patient benefit is weighed as one of the factors in deciding who gets an organ and who does not. I do not envy those making such decisions; I rather pray for them.
My concern lies with decisions to halt treatment. I have the same concern about automating drones to strike without human decision-makers pressing a key.
Andy’s First Rule of Statistics Applies
What is Andy’s First Rule of Statistics?
You may use statistics for anything about people, except people.
You may read that and be puzzled. Please allow me to explain. I mean you may use statistics (and by extension, AI, ML, etc.) to describe people – applying descriptive analytics. But please do not use statistics (and by extension, AI, ML, etc.) to prescribe things (treatment, activities, etc.) for people.
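The distinction can be sketched in a few lines of Python. The data and the decision threshold below are entirely hypothetical, purely for illustration:

```python
from statistics import mean, stdev

# Hypothetical data: recovery times (in days) for a cohort of patients.
recovery_days = [12, 15, 9, 21, 14, 30, 11, 18]

# Descriptive: summarizing what happened to people as a group. Fine.
print(f"mean recovery: {mean(recovery_days):.1f} days")
print(f"std deviation: {stdev(recovery_days):.1f} days")

# Prescriptive: using that same statistic to decide an individual's
# fate. This is the step Andy's rule says a machine should not take.
def should_continue_treatment(days_so_far: float) -> bool:
    # DON'T: halting care because a patient exceeds a cohort-derived
    # threshold is exactly the cost-benefit automation warned about here.
    return days_so_far <= mean(recovery_days) + stdev(recovery_days)
```

The first half merely describes the cohort; the second half turns the same numbers into a life-and-death rule, and that is the line the First Rule draws.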
Those are my thoughts this fine Wednesday morning. I welcome your thoughts and feedback in the comments.