With the rise of the Internet of Things and artificial intelligence, a number of questions come to mind.
What principles should govern the way organizations and developers build these technologies? A few at the top of my mind are:
- AI must be designed to assist humanity.
- AI must be transparent.
- AI must maximize efficiencies without destroying the dignity of people.
- AI must be designed for intelligent privacy.
- AI needs algorithmic accountability so humans can undo unintended harm.
- AI must guard against bias.
- It’s critical for humans to have empathy.
- The need for human creativity won’t change.
- A human has to be ultimately accountable for the outcome of a computer-generated diagnosis or decision.
Someone once said,
> Algorithms don't exercise power over us; people do.
*The Future Computed: Artificial Intelligence and its Role in Society*, by Brad Smith, President and Chief Legal Officer of Microsoft, and Harry Shum, Executive Vice President of Microsoft AI and Research, is a good read on this topic.
AI brings exciting opportunities to people and the ability to help them achieve more. But it's also important that as we build, we do so on an ethical foundation. I think AI technology should embody the following main principles:
**Fairness.** AI must maximize efficiencies without destroying dignity, and it must guard against bias. Algorithms learned from data are increasingly used to decide many aspects of our lives: from the movies we see to the prices we pay and the medicine we get. Yet there is growing evidence that decision-making by inappropriately trained algorithms can unintentionally discriminate against people.
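As a concrete illustration of the fairness point, here is a minimal sketch (using entirely hypothetical loan-approval data and groups I made up for the example) of one common check developers can run: comparing a model's positive-decision rates across groups, sometimes called the demographic parity gap.

```python
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups "A" and "B".

    decisions: list of 0/1 model outcomes
    groups:    list of group labels ("A" or "B"), one per outcome
    """
    def positive_rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)

    return abs(positive_rate("A") - positive_rate("B"))


# Hypothetical loan-approval outcomes: group A is approved 4/5 times,
# group B only 1/5 times -- a gap of 0.60, a red flag worth investigating.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
```

A large gap doesn't prove discrimination on its own, but it is exactly the kind of signal that an inappropriately trained model gives off, and checking for it costs a developer only a few lines.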
**Accountability.** AI must have algorithmic accountability. Who do you hold accountable for the technology you build? What happens if someone turns your product into an artifact of war?
**Transparency.** AI must be transparent. Users should have a clear view of what your technology does.
**Ethics.** AI must assist humanity and be designed for intelligent privacy. There is growing concern about the moral behavior of humans as they design, construct, use, and treat artificially intelligent technology. How do you program the value of human life into a bot?
**Built on trust.** You should have control of your data and choice in the technology you use, and hold firm to a set of core principles grounded in ethics, accountability, inclusion, and security. If I don't trust a product, would I use it?
**Democratizing AI.** Everyone should have access to the benefits of AI, including the tools it takes to create and transform their work.
I'm putting this up to trigger a conversation, especially among developers: do we ever think about this as we build technology?
Let me know what you think in the comments below.