# Artificial Intelligence vs. Ethics

## The Big Picture

The debate between AI and ethics has intensified lately. UNESCO has even proposed developing a global legal document on the ethics of AI. But what led UNESCO to this proposition? Why are so many organizations introducing an ethical standpoint into AI? Let us dive into this article to understand the debate: AI vs. ethics.

![Credits: iuriimotov/Freepik](https://i.imgur.com/lVCTTjO.jpg)

## How Did Ethics Come Into the AI Picture?

When we talk about AI, ethics is not the first thing that comes to mind. AI is, by definition, an artificial intelligence: one that is not real, and one that uses *mathematical data* to evaluate every single thing. If all decisions and judgments are made using data and statistics, how can they be ethically correct or incorrect? Since we are dealing with ethics, there are no black-and-white answers. You see, the ethics we are talking about concern not just the AI technology itself but also the developers of these technologies.

## Why Do Ethics Matter?

AI is used in almost every field now: from automating manual processes to supporting judicial decision-making to creating art, music, and even videos (don't get me started on ChatGPT)! When AI is used to make critical decisions, as in the judiciary, ethical dilemmas are bound to arise. For example, AI algorithms can end up biased against Black people because the historical "data" they are trained on suggests that Black people are more likely to commit crimes.

Companies like Amazon, Microsoft, and Facebook are heavily invested in the development of AI. With the resources and money these companies command, it has become very difficult for small developers to create technology that can compete with these giants. This could eventually lead to a huge wealth gap, not only between rich and poor people but between developed and developing nations as well.
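The point about biased judicial algorithms can be made concrete with a toy sketch. The records and the "risk model" below are entirely made up for illustration: if one group appears more often in historical arrest data (for instance, because it was policed more heavily), a naive model trained on that data will assign it a higher risk score, simply reproducing the skew.

```python
from collections import Counter

# Hypothetical "historical" records: (neighborhood, rearrested).
# Neighborhood B was policed more heavily, so it appears more often
# and with more positive labels, regardless of true behaviour.
records = [
    ("A", False), ("A", False), ("A", True),
    ("B", True), ("B", True), ("B", False),
    ("B", True), ("B", True), ("B", False),
]

def train(records):
    """A naive 'risk model': predicted risk = observed re-arrest rate."""
    totals, positives = Counter(), Counter()
    for group, rearrested in records:
        totals[group] += 1
        positives[group] += int(rearrested)
    return {g: positives[g] / totals[g] for g in totals}

model = train(records)
print(model)  # neighborhood B gets a higher "risk score" purely from the data
```

Nothing in the code is malicious; the bias lives entirely in the data, which is exactly why "the algorithm just follows the data" is not an ethical defence.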
Once we dive deep into the world of AI, we can see that it is rife with ethical dilemmas. This is why we cannot afford to overlook the role of ethics in AI.

## Ethical Issues in AI

### 1. Biased AI

![](https://i.imgur.com/yNqtZaP.jpg)

If you search "school boy" on Google Images, you will see pictures of ordinary boys in school uniforms. But if you search for "school girl", your results will most likely be filled with images of women in sexualised costumes. Amazon engineers spent years working on AI hiring software, but the company had to shut the program down because they could not figure out how to build a model that does not discriminate against women.

These are examples of gender bias in AI. The AI technology used in search engines is not gender neutral; it prioritises results based on user preferences and location. The software Amazon was developing did something similar. These algorithms mimic the biases and stereotypes present in the real world.

### 2. Unemployment and Wealth Inequality

AI leads to automation, and automation kills the need for manual labor. According to the [*McKinsey Global Institute report*](https://www.theverge.com/2017/11/30/16719092/automation-robots-jobs-global-800-million-forecast), as many as 800 million people could lose their jobs to AI-driven automation by 2030. Elon Musk has argued that governments will need to provide a universal basic income to support the population left unemployed by the automation of different industries.

The issue of unemployment extends to another problem: wealth inequality. Most companies depend on hourly workers to deliver their products and services. With automation, these companies can replace hourly workers with bots that work 24 hours a day. On the one hand, people will start losing their jobs; on the other, the developers and stakeholders of these AI bots will continue to multiply their wealth.

### 3. AI Is Not Perfect

AI algorithms are prone to mistakes. If we feed clean, representative data to an algorithm, its performance improves; but if the data contains errors, or we make errors while programming the algorithm, it will make mistakes. The question we have to answer is whether these algorithms make more or fewer mistakes than humans do. The mistakes made by AI could have devastating repercussions. Are we okay with that?

### 4. Security

![](https://i.imgur.com/yUG0YDN.jpg)

The AI software developed by companies like Google and Facebook has access to a great deal of our sensitive information, and we have seen multiple instances of data breaches at big companies. Our data is not 100% secure. AI can also be used in autonomous weapons, robotic soldiers, armed drones, and so on. If these systems are hacked, our own weapons could be turned against us. Advancing cyber security becomes even more important to avoid such situations.

### 5. AI Rights?

If AI evolves to a point where it can feel emotions, should we give it the same rights as humans? And if robots have rights, how will we rank their social status? In 2017, the humanoid robot Sophia was granted citizenship in Saudi Arabia. While some consider this a PR stunt, it does offer a glimpse of what our future may look like.

### 6. Singularity

AI can learn much faster than humans ever could. With the speed at which AI is being developed, it would not be wrong to assume that humans may not be the smartest species in the future. AI could evolve beyond our imagination and control; this is what people call the "singularity". AlphaGo, a program developed by Google DeepMind, has already defeated the world's number-one Go player. AI's successes make you wonder about our future. If AI goes out of our control, we cannot count on simply cutting the power, because the AI may have become smart enough to counteract that.

## Conclusion

So we have seen that AI can pose serious ethical dilemmas.
Whether AI is good or bad can be judged through various ethical frameworks, and because no single framework is 100% correct, we need to stay informed so that we can make the best decisions in the future. For now, the developers of AI systems must assume responsibility, and be held accountable, for what they create, design, and program.