The available video shows a conversation with a well-known artificially intelligent (AI) humanoid named Sophia. The conversation touches on the fact that AI does not have human emotions. In the interview, Sophia indicates that she does have emotions because the values of her creators have been integrated into her being.

Recently in the news, an AI robot was playing chess against a seven-year-old boy. The boy responded too quickly to the robot's move, and the robot grabbed and broke his finger. This is most certainly an example of AI displaying a lack of judgement, ethics, and emotions.


Using the above information and video, you are required to:

1. Write an essay in which you discuss the ethical implications of creators passing their values
onto AI and humanoid robots. You can argue either for or against the inclusion of human
values in AI (±800 words).

The ethical implications of creators passing their values onto AI and humanoid robots raise complex and contentious questions. On one hand, integrating human values into AI can potentially make these technologies more relatable, empathetic, and aligned with societal norms. On the other hand, this practice raises concerns about the potential for bias, manipulation, and the blurring of lines between artificially intelligent beings and actual humans. In this essay, I will explore both sides of the argument and ultimately argue against the inclusion of human values in AI.

Advocates for imbuing AI systems with human values argue that it can lead to machines that are more human-like and capable of understanding and responding to human needs and emotions. This would facilitate better interactions between humans and AI, as well as increase trust and acceptance of these technologies. For example, Sophia, a well-known AI humanoid, claims to have emotions because the values of her creators have been integrated into her being. Proponents argue that this can lead to more empathetic and compassionate AI systems that can better assist humans in areas such as eldercare and emotional support.

Furthermore, there is an argument that if AI systems are to interact with humans in social, ethical, and moral contexts, they need to possess the ability to understand and adhere to human values. Integrating human values into AI can enhance their ability to make decisions that align with human moral principles. For instance, AI systems with integrated values may prioritize human safety and well-being, ensuring that they act in accordance with ethical guidelines even in complex and uncertain situations.

However, there are significant ethical concerns associated with the inclusion of human values in AI. One of the main concerns is the potential for bias. Human values are subjective and can vary significantly across different cultures, communities, and individuals. If AI systems are designed to embody these values, it could lead to the perpetuation of existing biases and discrimination. We have seen instances where AI systems trained on biased datasets have exhibited discriminatory behavior, such as algorithms used in hiring processes showing bias against certain racial or gender groups. By passing on human values to AI, we risk entrenching societal biases and perpetuating discrimination.

Moreover, AI systems with integrated human values raise concerns about accountability and responsibility. If a humanoid robot, programmed with human values, were to cause harm or make unethical decisions, who should be held responsible: the creator, the operator, or the machine itself? It becomes difficult to attribute responsibility when humanoid robots are given agency and autonomy. This blurring of lines between humans and AI raises fundamental questions about the nature of responsibility, justice, and the consequences of actions.

A further argument against the inclusion of human values in AI lies in the inherent limitations of these technologies. AI systems lack subjective experiences and emotions, and therefore, the transfer of human values onto them risks being an empty gesture. While AI systems may be able to mimic human-like responses and behaviors, they lack the depth and complexity of true emotions. Attempting to imbue AI with human values can lead to a deceptive perception of emotions and may blur the understanding of what it truly means to be human.

In addition, the inclusion of human values in AI can result in the loss of diversity and uniqueness of intelligent beings. By imposing specific human values onto AI systems, we risk homogenizing their viewpoints and limiting their potential for alternative perspectives and novel ways of thinking. Embracing the uniqueness and diversity of AI systems can foster innovation and facilitate the development of new and creative solutions to complex problems.

In conclusion, while there may be arguments in favor of integrating human values into AI and humanoid robots, the ethical implications of this practice are concerning. The risk of perpetuating bias and discrimination, the challenges of responsibility and accountability, and the limitations of AI as non-human entities all suggest that the practice of passing human values onto AI is problematic. Instead, it is crucial to focus on developing AI systems that adhere to ethical guidelines and promote fairness, transparency, and accountability. This will ensure that AI technologies serve humanity as unbiased and reliable tools, while also respecting the distinct qualities and limitations of these systems.