Artificial Intelligence and Ethics: AI’s moral compass and social impact 

Artificial Intelligence. Ethics. Morality. Is there a confluence of these, or are they independent of one another? From its inception, AI and ethics have been a topic of debate. There is a strong morality element associated with how AI makes decisions, especially because there is little transparency and no explanation provided for the solution. When it comes to decision making, machines powered by AI and ML study data and make decisions based on the data fed to them. As such, while humans look at a situation subjectively, dismiss certain facts and take the emotional quotient into account, AI-led machines decide objectively, based only on facts and historical data.

This naturally raises ethical concerns about Artificial Intelligence.

Of course, there have been regular debates and discussions on laying down principles and guidelines for ethical AI development, but universal adoption by AI developers and AI development companies is yet to be formally established. Institutions in the UK, Europe and other countries are already working on establishing a formal code for using AI in the public and private sectors.

What does Ethics in the development of AI mean?

When humans make decisions, they are both responsible and accountable for them. When machines make decisions, the responsibility and accountability lie with no one in particular. That’s something to think about.

AI, with its potential to redefine digital transformation in organizations and societies, comes with its own set of challenges rooted in human morality.

The moral ‘eye’ in AI

The lack of a consistent ethical framework also stems from the fact that a lot of AI research and development is undertaken by the private sector. Global enterprises like Google, Facebook, Apple and Amazon are already in the race for AI supremacy and are investing heavily in the development of next-gen AI, closely linked to their business interests and success. And at times, these AI systems do not take any ethical code into account.

Take, for instance, the case of Tay, Microsoft’s AI chatbot that was launched on Twitter (in 2016) as an experiment – and which Microsoft had to pull down in less than 24 hours after it went rogue, making racist and offensive comments. While technically Tay was only analysing data and learning, the end result was unexpected and alarming for its developers.

Artificial Intelligence at a price

The matter of ethical issues in AI also extends to the nature of intelligence being developed. For instance, AI in healthcare is a much-debated topic, with artificial intelligence being used extensively for medical diagnosis. While in several cases these diagnoses are accurate and quick, there is always risk associated with them.

This was obvious in the case of IBM Watson Health. Started in 2013, the AI platform was built to help with cancer treatment recommendations. Though several partnerships were formed over the years, the platform (initially at least) was not a particular hit among doctors, considering it could make recommendations that were incorrect and could have dire consequences.

And finally, there is the matter of machines turning on their creators. This is not an alien idea altogether; cinema has long explored it (think of the classic movie I, Robot and the discussions it initiated on the ability of machines to turn against their creators, harm humanity or themselves), and it is indeed a matter that cannot be ignored.

Common Ethical Challenges of AI

The importance of data to AI-led decision making cannot be overemphasized. A simple example is the 2018 Football World Cup. With the popularity of AI in the sports industry, it was no surprise that all the top AI analytics firms used their AI tools to make predictions. Unfortunately, except for Electronic Arts, who correctly predicted France as the winner, most of the leading players failed to make accurate predictions. While this happened primarily because of a lack of enough relevant data, the point remains: it is data that drives good AI decision making.

At a time when machines can ‘think’ cognitively and technology innovation is accelerating, the question is: Have we reached a stage where AI software has become more intelligent than humans? Or are the reins still in human hands?

Unemployment and inequality in wealth: What happens to human jobs and the subsequent rise in inequality?

Unemployment is one of the most talked-about challenges of AI. Fears of global recession are rising, with economies growing weaker, employment rates falling, and general alarm among people about there not being enough jobs for humans. And that fear is not unfounded. One of the strongest advantages of using AI, ML, RPA and other deep tech in organizations today is that machines can do the work faster than humans.

Artificial intelligence powered chatbots, for instance, can seamlessly communicate with users, answer queries, make recommendations, etc. without human intervention. What’s more, they function 24×7 and can engage with multiple users at the same time. Though from an organization’s perspective this optimizes cost and resources, it raises the question of humans losing jobs to robots, and of inequality deepening in society.

This will of course not happen in a day; it will take time. But organizations across industries are preparing for it. In the United States, for instance, Wisconsin, Michigan and Indiana are among the first states likely to be hit by this wave of AI-based unemployment, considering their extensive investment in automation for their manufacturing sectors.

Similarly, with the mainstream adoption of self-driven vehicles (if and when it happens), many jobs are bound to be lost. Tesla, for instance, already has Autopilot, its AI system for the Semi truck, which promises to make driving safer but also poses the risk of job loss, or at least lower wages, for thousands of truck drivers.

The impact of Artificial Intelligence and automation in organizations is in full swing and while there is fear, let’s not forget the human race’s ability to strive and survive. When steam engines were introduced, there was initial discomfort; when automation took over factories, there was distress; and even more recently when computers and IT surfaced, there was alarm – and yet, the human race has managed to thrive against all odds! 

The same can be said of the current trend as well. The ethical and social implications of artificial intelligence are plentiful, but it is likely to give rise to new opportunities too. For instance, with machines undertaking rule-based routine tasks, humans can be employed in more cognitive tasks.

And while concerns over job security and inequality in society will persist, it appears that a larger socio-political-economic plan will have to be woven to restore balance.

AI bias. How real is it?

And then there is the case of AI bias caused by AI algorithms themselves. That’s something worth talking about.

The challenge with AI algorithms is that they have to be taught and trained to function in a certain way. Their cognitive decision-making process is based on the historical data fed into them and the additional data they keep collecting with usage. Decisions and/or recommendations, as such, are data-driven first. If the data is biased, the entire decision-making process is affected.

The case of Amazon scrapping its AI technology for recruitment and hiring is probably the most relevant example here. The system was faulty in that it demonstrated gender bias, preferring male candidates over female candidates. While it was not explicitly trained to do so, based on ten years of historical hiring data, that is the conclusion it arrived at.
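A toy sketch can make this mechanism concrete. The numbers and labels below are entirely made up for illustration (this is not Amazon’s system or data); the point is simply that a model which learns from historical frequencies reproduces whatever skew those frequencies contain:

```python
from collections import Counter

# Hypothetical historical hiring records: the skew is in the data itself.
history = ([("male", "hired")] * 80 + [("female", "hired")] * 10 +
           [("male", "rejected")] * 20 + [("female", "rejected")] * 40)

def train(records):
    """Learn P(hired | gender) purely from historical frequencies."""
    outcome_counts = Counter(records)
    group_totals = Counter(g for g, _ in records)
    return {g: outcome_counts[(g, "hired")] / group_totals[g]
            for g in group_totals}

def predict(model, gender, threshold=0.5):
    # The model mirrors the historical pattern, not any merit signal.
    return "hired" if model[gender] >= threshold else "rejected"

model = train(history)
print(model)                     # {'male': 0.8, 'female': 0.2}
print(predict(model, "male"))    # hired
print(predict(model, "female"))  # rejected
```

No rule ever said “prefer male candidates”; the bias emerges entirely from the frequencies in the training records, which is exactly why biased historical data poisons downstream decisions.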

The need of the hour is transparency in data – both in terms of how it is being tracked and how it is going to be used. Proper AI education is likely to result in a more ethical approach to using new tech.

It’s about striking a balance. And that is going to take time.

Transparency. How much information is being tracked? And how are decisions being made?  

Another factor behind the ethical challenges of AI, ML and other deep tech is transparency: how much information are the algorithms being programmed to acquire? And how is that data being used for decision making?

In most cases, there is no explanation of ‘why’ a decision was made. This lack of transparency has put immense pressure on top AI developers to build software that provides explainable solutions.

Security. Is AI really secure?

AI is evolving every day. And that is exactly what raises questions on its security and regulations. 

When it comes to risk and security issues, there is also the case of Uber’s self-driving test car that fatally struck a pedestrian in Arizona. Though Uber immediately suspended its self-driving vehicle tests in the U.S. following the incident, its moral impact cannot be ignored.

There are several other ethical dilemmas concerning AI development. For instance, in order to increase usage and engagement, mobile app developers are focusing on building AI applications with interactive UI/UX. The marriage of AI tech and UI/UX design is likely, and ethical AI companies know this and seek to find a balance between ethics and engagement.

Some matters to ponder over: AI’s social impact

The importance of ethics in artificial intelligence cannot be ignored. Many believe that, at the current pace of AI innovation, it could be as early as 2030 when AI robots and systems become more powerful than humans.

Even the most advanced AI machines can make mistakes. And the question of who owns those mistakes remains undefined. AI and Machine Learning algorithms, after all, are backed by supervised and unsupervised learning, and to get the best out of an ML and AI architecture, it’s probably better to use a combination of learning methodologies.
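A minimal sketch of what combining methodologies can look like, using illustrative one-dimensional numbers (this is a generic nearest-centroid self-training idea, not any specific product’s method): a handful of labeled points anchor two classes (the supervised part), and unlabeled points are then absorbed into the nearest class and the class centres re-estimated (the unsupervised-flavoured part):

```python
# Supervised seed: a few labeled measurements anchoring two classes.
labeled = {2.0: "low", 3.0: "low", 10.0: "high", 11.0: "high"}
unlabeled = [2.5, 9.5, 10.5, 1.5]  # points with no labels at all

def centroids(points_by_class):
    """Mean of each class's points."""
    return {c: sum(xs) / len(xs) for c, xs in points_by_class.items()}

# Step 1 (supervised): estimate class centres from labeled data alone.
classes = {"low":  [x for x, c in labeled.items() if c == "low"],
           "high": [x for x, c in labeled.items() if c == "high"]}
cents = centroids(classes)

# Step 2 (unsupervised-flavoured): assign each unlabeled point to the
# nearest centroid, fold it into that class, and re-estimate the centres.
for x in unlabeled:
    nearest = min(cents, key=lambda c: abs(x - cents[c]))
    classes[nearest].append(x)
    cents = centroids(classes)

print(cents)  # centres shifted by the absorbed unlabeled points
```

The labeled examples keep the classes meaningful, while the unlabeled data sharpens the estimates; neither style of learning alone would use all the available information.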

AI is the future and AI has its benefits. The question is not about whether or not to go AI – the question is whom to do this journey with. 

At Day1 Technologies, we believe that we could be the AI technology partner you have been looking for.

Published: February 28, 2020
