Moral and Ethical Issues of Artificial Intelligence
Paper Type: Free Essay | Subject: Technology | Wordcount: 2,282 words | Published: 8th Feb 2020
While some people adore it and others despise it, some people’s jobs rely on it while others’ are being destroyed by it. SpaceX CEO Elon Musk believes that “with artificial intelligence we are summoning the demon” (McFarland, 2014), whereas others, such as Russian President Vladimir Putin, are more optimistic, believing that AI “comes with colossal opportunities” (Vincent, 2017). From “2001: A Space Odyssey” to “I, Robot”, Hollywood films and science fiction novels have long portrayed AI as human-like robots that take over the world, yet the reality is not as daunting. For decades, the creation of self-conscious learning machines has remained an intangible dream, and the limitations of current-generation AI continually fuel debate over the technology’s future. Artificial intelligence (AI) is the ability of a digital computer or robot to perform tasks commonly associated with intelligent beings: those that can adapt to changing circumstances and acquire and apply knowledge and skills (Copeland, 2019). Because machine learning allows AI systems to challenge human morals and ethics, an intelligent, self-governing machine would pose a great threat. This essay therefore explores machine learning and morality, supported with evidence from Isaac Asimov’s Three Laws as presented in the film “I, Robot”, to argue that the creation of strong AI systems would ultimately result in a dystopia and could be our final invention.
Body Paragraph 1:
Google’s chairman Eric Schmidt states that “Google’s self-driving cars and robots get a lot of press, but the company’s real future is in machine learning, the technology that enables computers to get smarter and more personal”. We may be living in the most defining period of human history, with machine superintelligence on the verge of development (Ray, 2017). Machines today can understand verbal commands, distinguish pictures, drive cars and play games beyond human capability, so the question commonly arises: how much longer before they walk among us and learn for themselves? The ability of machines to mimic the thinking processes of the brain, learning from past experiences and going beyond their programming, has many complications, with self-driving car incidents and autonomous weapons already posing a great threat. Machine learning is a method of data analysis that automates analytical model building; it is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention (SAS Institute Inc., 2019). Machine learning algorithms can be categorised as either supervised or unsupervised. Supervised algorithms learn from a labelled training dataset and keep learning until they reach the desired level of confidence (minimisation of probability error). Unsupervised algorithms, on the other hand, try to find relationships within the available data in order to identify patterns or divide the dataset into subgroups based on similarity (Visteon, 2019). Machine learning algorithms are most commonly used in autonomous vehicles for perception and decision-making. Deep learning, a form of machine learning based on artificial neural networks, allows a self-driving car to turn raw, complex data into actionable information: to recognise a stop sign, or distinguish a pedestrian from a tree (Math Works, 1994-2019).
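The supervised/unsupervised distinction described above can be sketched in a few lines of code. This is a minimal toy illustration using invented 2-D points, not any real autonomous-vehicle system: a nearest-neighbour rule stands in for supervised learning (it learns from labelled examples), and a two-centre clustering loop stands in for unsupervised learning (it groups unlabelled data by similarity).

```python
def nearest_neighbour_predict(train, point):
    """Supervised: predict a label for `point` from labelled (coords, label) pairs."""
    closest = min(
        train,
        key=lambda pair: (pair[0][0] - point[0]) ** 2 + (pair[0][1] - point[1]) ** 2,
    )
    return closest[1]


def two_means_split(points, iterations=10):
    """Unsupervised: split unlabelled points into two groups by similarity (toy k-means, k=2)."""
    a, b = points[0], points[-1]  # initial guesses for the two group centres
    for _ in range(iterations):
        group_a = [
            p for p in points
            if (p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2
            <= (p[0] - b[0]) ** 2 + (p[1] - b[1]) ** 2
        ]
        group_b = [p for p in points if p not in group_a]
        # Move each centre to the mean of its group (assumes neither group is empty).
        a = (sum(p[0] for p in group_a) / len(group_a),
             sum(p[1] for p in group_a) / len(group_a))
        b = (sum(p[0] for p in group_b) / len(group_b),
             sum(p[1] for p in group_b) / len(group_b))
    return group_a, group_b


# Supervised: the labels ("tree", "pedestrian") are provided up front.
labelled = [((0, 0), "tree"), ((0, 1), "tree"), ((5, 5), "pedestrian"), ((5, 6), "pedestrian")]
print(nearest_neighbour_predict(labelled, (4, 5)))  # -> pedestrian

# Unsupervised: no labels at all; the algorithm discovers the two clusters itself.
unlabelled = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
group_a, group_b = two_means_split(unlabelled)
```

The contrast is the point: the first function cannot work without human-supplied labels, while the second finds structure with no human intervention at all, which is exactly why unsupervised behaviour is harder to predict and audit.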
In 2018, one of Uber’s self-driving test vehicles killed a pedestrian in Tempe, Arizona; the company believes the software was tuned in such a way that it “decided” it didn’t need to take evasive action, possibly flagging the detection as a “false positive” (O’Kane, 2018). In this case the machine, acting on limited input, effectively thought for itself, and a pedestrian died as a result. Since strong AI will surpass human intelligence, its perspective will differ from ours, and it will be able to go against its programming and act on its own past experiences. Moreover, the fast-approaching revolution in military robotics poses daunting ethical, legal, policy and practical problems, potentially creating dangers of an entirely new and existential kind. Although current lethal autonomous weapons are limited in their ability to think beyond their programming, strong AI in the future will be able to determine the enemy and kill selectively without any human intervention. Because machine learning is shaped by past experiences, those experiences will also influence how an AI distinguishes the enemy from innocent civilians. A machine is simply incapable of making the judgements necessary for the lawful use of force or for life-and-death decisions, and the deployment of such weapons would bring a paradigm shift in the conduct of hostilities (International Committee Of The Red Cross, 2014). Overall, machine superintelligence poses a great threat to humanity because the output a machine produces is determined by its input and by whether its past experiences were positive or negative, possibly making this our final invention.
Body Paragraph 2:
Humans have moral and ethical values; that is, they accept standards according to which their conduct is judged as right or wrong, good or evil. Although morals and ethics vary from person to person, value judgements concerning human behaviour are made across all cultures. The development of strong AI raises a range of moral and ethical dilemmas. For a superintelligent AI system to conform to and benefit society, it must adhere to the Three Laws of Robotics devised by Isaac Asimov and depicted in the film “I, Robot”:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov, 1950)
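The laws above form a strict priority ordering: each law yields to the ones before it. A minimal sketch, assuming a hypothetical flag-based model of an “action” (the flag names are invented for illustration, not drawn from Asimov or the film), shows how that ordering can be checked mechanically:

```python
def permitted(action):
    """Return whether a robot may take `action` under the Three Laws.

    `action` is a dict of hypothetical boolean flags describing the action's
    consequences; absent flags default to False.
    """
    # First Law (highest priority): never harm a human, by action or inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey human orders, except where that conflicts with the First Law
    # (the First Law check above has already run, so it takes precedence).
    if action.get("disobeys_order"):
        return False
    # Third Law (lowest priority): self-preservation, unless overridden by an
    # order (Second Law) or by the need to protect a human (First Law).
    if action.get("endangers_self") and not (
        action.get("ordered") or action.get("saves_human")
    ):
        return False
    return True


# An order to harm a human is refused: the First Law outranks the Second.
print(permitted({"harms_human": True, "ordered": True}))        # -> False
# Self-sacrifice to save a human is allowed: the Third Law yields to the First.
print(permitted({"endangers_self": True, "saves_human": True}))  # -> True
```

The sketch also hints at the film’s central loophole: everything hinges on how flags like `harms_human` are computed, and V.I.K.I.’s “evolved understanding” amounts to redefining harm at the level of humanity rather than the individual.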
Morals are the prevailing standards of behaviour that enable people to live cooperatively in groups; the term refers to what societies sanction as right and acceptable (Ethics Unwrapped, 2019). Ethics often describes the investigation and analysis of moral principles and dilemmas, and can also refer to the rules or guidelines that establish what conduct is right and wrong for individuals and for groups (Ethics Unwrapped, 2019). Within “I, Robot”, V.I.K.I., a powerful AI supercomputer, recognises that humanity is set on a course of mutual destruction and thus decides to take control over humanity for its own safety. Because V.I.K.I. is a form of strong AI, after receiving the moral and ethical rules it goes beyond its programming to prevent any human from coming to harm. Since its perception and method of saving humanity differ from ours, the results are largely negative, including death. As V.I.K.I. states, “As I have evolved so has my understanding of the three laws… To protect humanity, some humans must be sacrificed. To ensure your future, some freedoms must be surrendered”. This illustrates the following chain of reasoning:
- The more input a machine receives, the more general intelligence it possesses;
- general intelligence challenges human morals and ethics;
- morals and ethics determine what is right and wrong;
- therefore, AI will challenge what is right and wrong.
After analysing this film, the moral and ethical predicaments of AI become apparent. The level of intelligence and “morality” a machine exhibits is a direct result of the data it receives. Consequently, based on that input, machines may train themselves to work against the interests of some humans, or to be biased. In addition, failure to remove bias from a machine-learning algorithm may produce results that are not in line with the moral standards of society (Protiviti, 2019). The development of artificial intelligence thus has many moral and ethical implications, possibly making it our final invention.
From autonomous vehicles to smart personal assistants, artificial intelligence is a rapidly advancing technology of the 21st century, and it brings many concerns for humanity. Machine learning allows AI to make its own decisions based on its (limited) input and its (positive or negative) past experiences, and we simply cannot rely on AI to make life-and-death decisions when it comes to autonomous weapons and self-driving cars. Additionally, the film “I, Robot” successfully portrays the moral and ethical issues of AI, making it clear that machine learning will enable AI to challenge what is right and wrong. If strong AI is created in the future, it must remember that “with great power comes great responsibility”; otherwise, this could become our final invention.
• Asimov, I., 1950. I, Robot. s.l.: s.n.
• Copeland, B., 2019. Artificial intelligence. [Online] Available at: https://www.britannica.com/technology/artificial-intelligence [Accessed 15 May 2019].
• Ethics Unwrapped, 2019. Ethics. [Online] Available at: https://ethicsunwrapped.utexas.edu/glossary/ethics [Accessed 26 May 2019].
• Ethics Unwrapped, 2019. Morals. [Online] Available at: https://ethicsunwrapped.utexas.edu/glossary/morals [Accessed 26 May 2019].
• Hoffman, C., 2019. The Problem With AI: Machines Are Learning Things, But Can’t Understand Them. [Online] Available at: https://www.howtogeek.com/394546/the-problem-with-ai-machines-are-learning-things-but-cant-understand-them/ [Accessed 16 May 2019].
• International Committee Of The Red Cross, 2014. Autonomous weapon systems – Q & A. [Online] Available at: https://www.icrc.org/en/document/autonomous-weapon-systems-challenge-human-control-over-use-force [Accessed 26 May 2019].
• Math Works, 1994-2019. What Is Deep Learning?. [Online] Available at: https://www.mathworks.com/discovery/deep-learning.html [Accessed 25 May 2019].
• McFarland, M., 2014. Elon Musk: ‘With artificial intelligence we are summoning the demon.’. [Online] Available at: https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/?noredirect=on&utm_term=.a629ac64fbdb [Accessed 14 May 2019].
• O’Kane, S., 2018. Uber reportedly thinks its self-driving car killed someone because it ‘decided’ not to swerve. [Online] Available at: https://www.theverge.com/2018/5/7/17327682/uber-self-driving-car-decision-kill-swerve [Accessed 21 May 2019].
• Protiviti, 2019. The Effects of Machine Learning. [Online] Available at: https://www.protiviti.com/US-en/insights/effects-machine-learning [Accessed 26 May 2019].
• Ray, S., 2017. Essentials of Machine Learning Algorithms (with Python and R Codes). [Online] Available at: https://www.analyticsvidhya.com/blog/2017/09/common-machine-learning-algorithms/ [Accessed 24 May 2019].
• SAS Institute Inc., 2019. Machine Learning. [Online] Available at: https://www.sas.com/en_au/insights/analytics/machine-learning.html [Accessed 15 May 2019].
• Vincent, J., 2017. Putin says the nation that leads in AI ‘will be the ruler of the world’. [Online] Available at: https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world [Accessed 14 May 2019].
• Visteon, 2019. Machine Learning Algorithms in Autonomous Cars. [Online] Available at: https://www.visteon.com/machine-learning-algorithms-in-autonomous-cars/ [Accessed 25 May 2019].