Artificial intelligence in the ethics of education
Ulises Octavio Irigoin Cabrera, Jaime Soto Vilca, Erlin Guillermo Cabanillas Oliva, Kattya
Rosscelyn Silvera García, Rosaura García Rojas, Macedonio Huamani Casaverde
© Ulises Octavio Irigoin Cabrera, Jaime Soto Vilca, Erlin Guillermo Cabanillas Oliva, Kattya
Rosscelyn Silvera García, Rosaura García Rojas, Macedonio Huamani Casaverde, 2024
Second edition: September, 2024
Edited by:
Editorial Mar Caribe
www.editorialmarcaribe.es
Av. General Flores 547, Colonia, Colonia-Uruguay.
RUC: 15605646601
Cover design: Yelitza Sanchez Caceres
Translation of the original Spanish edition into English: Ysaelen Josefina Odor Rossel
E-book available at https://editorialmarcaribe.es/artificial-intelligence-in-the-ethics-of-education/
Format: electronic
ISBN: 978-9915-9706-4-6
ARK: ark:/10951/isbn.9789915970646
Non-commercial attribution rights notice: Authors may authorize the general public to reuse their works for non-profit purposes only. Readers may use a work to generate another work as long as research credit is given, and authors grant the publisher the right to first publish their essay under the terms of the CC BY-NC 4.0 license.
Editorial Mar Caribe
Artificial intelligence in the ethics of education
Colonia, Uruguay
2024
Index
Introduction
Chapter 1. Evolution of artificial intelligence (AI) since Turing
1.1 AI: Models
1.2 Turing Test
1.3 Programming languages
1.4 Applications
1.5 Development environments
1.6 Artificial intelligence, education and technologies: good or bad?
1.7 AI: meaning
Chapter 2. Artificial intelligence, learning and ethics. How does it work?
2.1 Discrimination and inequality
2.2 Arbitrary decisions
2.3 Fingerprints
Chapter 3. What does artificial intelligence do?
3.1 Education
3.2 Artificial intelligence in the digital age
Chapter 4. Artificial intelligence and critical thinking
4.1 Challenges
4.2 Classroom approach to AI
4.3 Restart
Conclusions
Literature
Introduction
As the prevalence of artificial intelligence (AI) continues to grow, it is crucial to
recognize the potential drawbacks associated with its advancement. The rise of AI brings
with it a multitude of concerns that have profound implications for both individuals and
society as a whole, highlighting the pressing need for ethical considerations in the realm
of AI. It is essential to recognize that misuse of AI technology can exacerbate existing
inequalities, while also acknowledging that inappropriately programmed algorithms can result in unfair discrimination against individuals, for example by denying them access to vital services like health insurance. The ethical dimension of AI therefore plays a critical
role in mitigating these adverse outcomes and ensuring that the benefits of AI are
leveraged fairly and equitably.
Artificial intelligence plays a prominent role in various aspects of our daily lives.
Voice assistants, such as Siri or Alexa, have become commonplace and allow us to interact
with technology using natural language. When we search for something on Google, the
predictive search feature uses AI algorithms to anticipate and display relevant search
results. Similarly, online stores employ AI to provide personalized product
recommendations based on our browsing and purchasing history. Many businesses now
use chatbots, powered by AI, to improve customer service and support.
AI is also at the heart of home automation systems, allowing us to control and
manage various devices such as lights, thermostats, and security systems with ease. Even
when we use maps for navigation, AI algorithms work in the background to provide real-
time traffic updates and suggest the most efficient routes. The reason behind all these
advancements is that artificial intelligence can process and analyze large amounts of data
and information, mimicking human intelligence. This means that AI systems possess
capabilities such as reasoning, learning, perception, planning, prediction, and control,
allowing them to perform tasks that were previously reserved for humans.
In this context, the European Union already warns in its proposed regulation on
artificial intelligence that its use may have a negative impact on fundamental rights due
to its characteristics (such as opacity, complexity, dependence on data or autonomous
behavior).
Similarly, the Recommendation on the Ethics of Artificial Intelligence, adopted by
UNESCO’s 193 Member States in 2021, highlights the ethical implications of artificial
intelligence in terms of its impact “on decision-making, employment and work, social
interaction, healthcare, education, media, access to information, the digital divide,
consumer protection and personal data protection, the environment, democracy, the rule
of law, security and policing, dual use, and human rights and fundamental freedoms,
including freedom of expression, privacy and non-discrimination.”
In this sense, Spain's Comprehensive Law 15/2022 on Equal Treatment and Non-Discrimination represents that country's first regulatory approach to the use of artificial intelligence by public administrations and companies, which are also required to promote the use of artificial intelligence that is "ethical, reliable and respectful of fundamental rights."
Chapter 1
Evolution of artificial intelligence (AI) since Turing
Throughout history, humans have longed for the ability to create beings similar to
themselves. They have strived to develop artifacts that not only look like humans, but
also move and behave like them. One person who delved deeper into this concept was the Russian-born American writer Isaac Asimov (1920-1992). Asimov explored the realm of science fiction, imagining objects and scenarios that seemed far-fetched at the time. However, as time went on, many of his ideas became reality.
In his short story "Runaround" (1942, later collected in I, Robot), Asimov presented what are now known as the three laws of robotics. This literary work served as a catalyst for scientists and engineers, igniting their desire to bring these laws to life. A major breakthrough came in the late 1950s with the development of Frank Rosenblatt's perceptron. This revolutionary system focused
on visual pattern recognition, aiming to solve a wide range of problems. Unfortunately,
the initial enthusiasm surrounding this achievement quickly faded.
Meanwhile, around this time, the English mathematician Alan Turing (1912-1954) proposed a test to determine the presence of "intelligence" in non-biological devices. This test, known as the "Turing test", was intended to provide an operational criterion for artificial intelligence. Years later, Edward Feigenbaum and his team of researchers began developing expert systems to solve everyday problems. These systems aimed to address more concrete and practical issues, laying the groundwork for the field of expert systems.
Back in 1957, Allen Newell and Herbert Simon had created a program called GPS
(General Problem Solver) while working on theorem proving and computer chess. This
program allowed users to define an environment with objects and operators, separating
the problem information from the strategy used to solve it. Although GPS could solve
certain problems such as the "Towers of Hanoi" problem, it was unable to address real-
world problems or make important decisions. It relied on heuristic rules and trial and
error to achieve the desired results. The first expert system, Dendral, was begun in 1965 and served as an interpreter of mass spectrograms. However, the most influential expert system turned out to be Mycin, developed during the 1970s.
Mycin had the ability to diagnose blood disorders and prescribe appropriate
medication, making it a remarkable achievement for its time. These expert systems even
found practical applications in hospitals, such as the Puff system. Overall, these
advancements and contributions by various researchers and scientists have paved the
way for the development of intelligent machines and artificial intelligence systems in the
field of computer science. Alan Turing made two major contributions in this field. First, he wrote the first program designed to play chess, a groundbreaking achievement even though no computer of his day could run it. Second, he established the symbolic nature of computer science, highlighting the fundamental principles underlying this field.
In 1958, John McCarthy developed a programming language known as LISP while
working at MIT. LISP, derived from “LISt Processing”, is still in use today and is
particularly known for its utilization of linked lists as important data structures.
In his 1950 paper, Alan Turing argued that if a machine behaves intelligently in all respects, it can be considered intelligent. This statement led researchers of the time to focus on the development of linguistic artificial intelligence systems, also known as "chatbots", and sparked great interest within the scientific community in the creation of intelligent machines. In 1965, Joseph
Weizenbaum created the first interactive program called ELIZA. It allowed users to
engage in written conversations with a computer in English, marking a significant
advancement in the field of natural language processing.
1.1 AI: Models
Within the scope of Artificial Intelligence models there is a classification system based on the objective and operation of the system: whether it seeks to think or to act, and whether it takes human performance or rationality as its benchmark. Initially, these classes were considered separate entities, but as time went by, characteristics have been mixed between them:
Systems that think like humans: the concept being explored is the idea of developing systems that are capable of thinking and reasoning in a manner similar to the human mind. Researchers are attempting to understand the inner workings of the mind through psychological
experimentation, with the goal of creating computational models based on their
findings. The field of cognitive science plays a major role in shaping this research,
as it provides insight into how the human mind works. A notable example of this
research is the General Problem Solver (GPS) of Newell and Simon, described above. Unlike traditional problem-solving systems, the focus of the GPS was not
solely on finding the correct solution, but rather on understanding the reasoning
behind the answers provided by the system. It is important to note that while
computers are used in this research, most of the studies are conducted on humans
and animals in order to gain a deeper understanding of cognitive processes.
Systems that act like humans: the concept of building systems that emulate human behavior serves as another foundation for the development of artificial intelligence. The ultimate goal is to
create a system that can successfully pass the Turing Test, which determines
whether a machine possesses human-like intelligence. This requires the
incorporation of various capabilities such as natural language processing,
knowledge representation, reasoning, and learning. However, it is critical to note
that while passing the Turing Test is a major achievement, it is not the only goal of
AI. The ability of these systems to seamlessly interact with people requires their
ability to mimic human actions and responses. Therefore, the focus is not only on
achieving intelligence but also on ensuring that these systems can effectively
emulate human behavior.
Systems that think rationally rely on the laws of logic, specifically Aristotle's syllogisms. Intelligent programs rely heavily on formal logic as a foundation, a concept known as logicism. However, two major challenges hinder progress in this field. First, effectively formalizing knowledge proves to be an incredibly challenging task. Second, there is a
substantial gap between the theoretical potential of logic and its practical
application. Expanding on Aristotle’s syllogisms, predicate logic plays a crucial
role in this endeavor, further reinforcing the importance of logic as a fundamental
pillar of this intellectual quest.
Systems that act rationally: acting rationally involves achieving goals based on a set of beliefs.
This concept is commonly applied to various robotic systems, where the rational
agent serves as the paradigm. The primary function of the agent is to perceive its
environment and respond accordingly, consistently considering the context in
which it operates. To perform its function effectively, the agent must possess
essential capabilities such as perception skills, natural language processing skills,
knowledge representation, reasoning capabilities, and machine learning
capabilities. It is important to note that the agent's performance is not solely
focused on imitating human behavior, but on achieving optimal outcomes in a
broader sense.
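To make the rational-agent paradigm concrete, here is a minimal sketch in Python of the perceive-decide-act cycle just described. The two-room "vacuum world" is a standard textbook toy rather than a system discussed in this book, and every name in the code is an illustrative assumption.

# A minimal sketch of the rational-agent cycle: the agent repeatedly
# perceives its environment, decides on an action, and acts. The
# two-room vacuum world is a classic toy example, not a real system.
class ReflexVacuumAgent:
    def decide(self, percept):
        location, status = percept
        if status == "dirty":
            return "suck"
        return "go to B" if location == "A" else "go to A"

def run(agent, world, steps=4):
    location = "A"
    for _ in range(steps):
        percept = (location, world[location])    # perceive
        action = agent.decide(percept)           # decide
        if action == "suck":                     # act on the environment
            world[location] = "clean"
        else:
            location = "B" if action == "go to B" else "A"
        print(percept, "->", action)

run(ReflexVacuumAgent(), {"A": "dirty", "B": "dirty"})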
1.2 Turing Test
The Turing test, proposed by Alan Turing in 1950, aims to provide a means of
assessing Artificial Intelligence. To be considered intelligent, a being or machine must
successfully fool an evaluator into believing that it is a human being, demonstrating the
full range of cognitive abilities that humans possess. According to Turing, if a machine is
able to engage in dialogue and make a similar number of errors as a human in
communication, it can be considered "intelligent".
Today, the task of programming a computer to pass the Turing test is complex.
The computer must possess several key capabilities:
First, it must be able to process natural language, allowing it to communicate
effectively in any human language, be it Spanish, English, or another language.
Second, it must have the ability to store and access knowledge, using a knowledge base to receive and retain information.
Third, the computer must possess the ability to reason automatically, using stored
information to answer questions, draw new conclusions, and make decisions.
Finally, it must be capable of self-learning, which will allow it to adapt to new circumstances.
This self-learning process also leads to self-assessment. The Turing Test sets a high
standard for assessing artificial intelligence, requiring machines to possess a variety of
complex capabilities in order to convincingly mimic human intelligence. To pass the full
Turing Test, a computer must also be equipped with visual and robotic capabilities.
Vision allows a machine to perceive objects in its environment, while robotics allows it to
manipulate the objects it has perceived.
1.3 Programming languages
A programming language serves as a human-created means of communicating
commands to a computer. While it is possible to use any computational language to create
artificial intelligence tools, there are dedicated tools that are specifically designed to assist
in the development of intelligent systems, among the most prominent of which are:
IPL (Information Processing Language) is known as the pioneering programming language designed specifically to address the challenges of Artificial Intelligence. The credit for inventing it goes to Herbert Simon, physicist Allen Newell, and J.C. Shaw, who collaborated to create it in 1955; Newell and Simon later used it in the development of GPS (General Problem Solver). Soon after creating the language, these three brilliant
minds created the “Logic Theorist,” which served as a precursor to GPS. The Logic
Theorist possessed the remarkable ability to prove a wide range of mathematical
theorems. It is widely recognized as the first program intended to simulate human
problem-solving abilities.
Lisp, which stands for LISt Processor, is a programming language that has a rich
history and continues to be actively used today. It was originally developed by
John McCarthy and his colleagues at the Massachusetts Institute of Technology in
1958, making it one of the oldest programming languages still in use. One of Lisp's
notable contributions to the field of programming is the introduction of tree-like
data structures. These structures allow for efficient organization and manipulation
of data, and Lisp relies heavily on them. In fact, Lisp programs are composed of
lists, making the language unique in its ability to treat source code as a data
structure. This feature has given rise to powerful macrosystems, which allow
programmers to create new programming language syntaxes tailored to specific
domains within Lisp itself. As a pioneer in symbolic processing, Lisp has the distinction of being the first language designed for such purposes (a small illustration of the code-as-data idea appears at the end of this section).
Prolog is a programming language that derives its name from the acronym
PROgramming in LOGic (PROLOG). Unlike many other programming languages,
Prolog is specifically designed to solve problems involving predicate calculus. This
unique purpose arose from Alain Colmerauer and Philippe Roussel's interest in developing a tool that could make inferences from text. The first complete
description of Prolog was presented in 1975 as a manual for the Marseille Prolog
interpreter, written by Roussel. A more recent paper titled "The Birth of Prolog"
was written by the language's creators in 1992, which provides a broader
perspective on the origins of Prolog.
OPS5, also known as Official Production System 5, is a programming language
designed specifically for cognitive engineering. It allows for the representation of
knowledge through the use of rules. While it may not be as widely recognized as
other programming languages, OPS5 has the distinction of being the first language
successfully used in the development of expert systems. It is part of the OPS family
of languages, also known as Official Production System, and was created by Dr.
Charles Forgy in the late 1970s. The fundamental algorithm of OPS5, known as the
"Rete Algorithm", serves as the basis for many current systems. Dr. Charles Forgy
introduced this algorithm as part of his doctoral thesis in 1979.
Smalltalk is the result of extensive research aimed at designing a computer system
specifically adapted to the field of education. The main objective was to create a
system that would encourage and enhance the creativity of its users, providing
them with an environment conducive to experimentation, creation and research.
This language, developed under the direction of Alan Kay, was an innovative
effort in the quest to create a truly complete "personal computer". Its origins date
back to Kay's doctoral thesis, which he completed as a student at the University of
Utah in 1969.
This language not only introduced a visually appealing and easy-to-use
development environment, but also revolutionized the programming world by
introducing the concept of objects and fundamentally changing existing
programming paradigms. While there are certain shared customs and general
steps in application development among programmers, working with Smalltalk is
a highly personalized experience, with each individual configuring the
environment and using the tools in his or her own way. This language completely
disrupts the traditional write/compile/run cycle, replacing it with an interactive
and creative process. In Kay’s own words, “The purpose of the Smalltalk project
is to provide computational support for the creative spirit that resides in every
person” (Ingalls, 1981). The ideas and principles employed in the development of
Smalltalk serve as the foundation for modern object-oriented programming
(OOP), although it took several years for OOP to gain widespread popularity. In
addition, Smalltalk played a pioneering role in the development of graphical user
interfaces (GUIs), paving the way for the sophisticated interfaces we see in today’s
software applications. In particular, Kay continues to be involved in the
development of Smalltalk through open source initiatives such as Squeak and
Croquet.
Logo: Seymour Papert, a mathematician and educator from South Africa,
collaborated with renowned educator Jean Piaget at the University of Geneva from
1959 to 1963. Following this, Papert moved to the United States of America where
he crossed paths with Marvin Minsky, a highly dedicated scientist in the field of
artificial intelligence during that era. Together they co-founded the MIT Artificial
Intelligence Laboratory. Through collaboration with the firm Bolt, Beranek and Newman (BBN), in a group led by Wallace Feurzeig, Papert's work resulted in the creation of the
initial version of Logo in 1967. This programming language, which is based on
Lisp, incorporates numerous concepts associated with constructionism.
Renowned for its user-friendly nature, Logo has become a preferred tool for
engaging children and young people in programming. According to Harold
Abelson, "Logo" encompasses both a philosophy of education and a continually
evolving family of programming languages that contribute to its implementation."
One of the primary goals of this language was to establish a means for effective
interaction between humans and computers.
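Before moving on, the code-as-data idea for which Lisp is celebrated can be sketched without Lisp itself. In the following illustration (written in Python only to keep this book's examples in a single language; all names are assumptions), an expression is stored as a nested list, much as Lisp stores the form (+ 1 (* 2 3)), and a small evaluator walks it.

# Code as data: the nested list plays the role of the Lisp form
# (+ 1 (* 2 3)); evaluate() walks the structure recursively.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    if not isinstance(expr, list):               # a number evaluates to itself
        return expr
    op, *args = expr                             # first element names the operator
    return OPS[op](*(evaluate(a) for a in args))

program = ["+", 1, ["*", 2, 3]]                  # a program stored as an ordinary list
print(evaluate(program))                         # -> 7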
1.4 Applications
The first successes in AI research can be seen in the field of language. One example
of global recognition is the program called “Eliza,” developed by Professor Joseph
Weizenbaum at the Massachusetts Institute of Technology between 1964 and 1966. Eliza,
one of the first programs to process natural language, attracted the attention of both
supporters and skeptics of AI. Weizenbaum aimed to create a program capable of
engaging in coherent text conversations with humans.
Its most famous demonstration simulated a psychotherapist in the style of the renowned psychologist Carl Rogers, who contributed to the development of person-centered therapy. Weizenbaum also
expressed his concerns about AI in his book “Computer Power and Human Reason,”
highlighting the potential loss of civil liberties if AI is not used responsibly, despite its
enormous opportunities. Another area where extraordinarily successful AI applications
occurred was the natural sciences. These applications paved the way for the principles of
storing and manipulating knowledge bases in expert systems.
An expert system is defined as a computer application that solves complex
problems that would normally require extensive human expertise. One of the first expert systems, known as "Dendral", was developed by Edward Feigenbaum at Stanford University. Feigenbaum, who had trained at the Carnegie Institute of Technology, was influenced by the work of other influential researchers, initially John von Neumann and later Herbert Simon and Allen Newell. Feigenbaum's interest in studying human mental processes was piqued when
Newell announced to his class the first computer models of human thought and decision
making.
Thus, the evolution of artificial intelligence has been marked by different stages,
some facing skepticism and others leading to significant advances. Expert systems,
language processing, and applications of natural sciences have played essential roles in
the development of AI. In addition, researchers such as Weizenbaum and Feigenbaum
have contributed to the field, both in terms of innovative applications and thoughtful
considerations of the ethical implications of AI. Since its inception, artificial intelligence
has gone through several stages, each with its own level of motivation and funding for
research.
Some stages were met with skepticism about AI’s achievements, while others were
marked by significant breakthroughs and advances. Nevertheless, even at times when
one path was closing, new opportunities emerged that allowed AI to continue
progressing and yielding fruitful results. Among the prominent applications of AI, expert
systems stand out as one of the most prominent products. These systems have played a
crucial role in the resurgence of AI when it needed a boost. In fact, expert systems are
now widely recognized as typical AI products.
Feigenbaum's team embarked on a new project at Stanford University from 1972 to 1980: the expert system Mycin, which aimed to diagnose infectious blood diseases. Mycin introduced the use of imprecise knowledge and the ability to explain the tool's reasoning process. While Feigenbaum initially led the project, Shortliffe and his collaborators completed it using Lisp. The significance of the system lies in demonstrating the effectiveness of its knowledge representation scheme and reasoning techniques, which influenced the development of rule-based systems in both the medical and non-medical fields.
Earlier in his career, instead of focusing on decision making, Feigenbaum had turned his attention to studying memorization and created a program called EPAM (Elementary Perceiver and
Memorizer). A significant contribution of Feigenbaum's work in artificial intelligence was
the development of "discrimination networks," which later became part of neural
network research. In the early 1960s, Feigenbaum worked on an application involving a
mass spectrometer and realized the need for a knowledge base to use the programs.
In 1965, Feigenbaum and his colleague Robert K. Lindsay developed Dendral, the
first successful expert system, which had the ability to deduce information about
chemical structures based on Feigenbaum's knowledge of chemistry. Despite criticism
from some researchers who believed that Dendral's specialization in chemistry limited its
usefulness, Feigenbaum was undeterred and formulated "The Knowledge Principle,"
which emphasizes that reasoning is useless without knowledge.
1.5 Development environments
In the 1980s, expert systems experienced great success, leading to the emergence
of a new development known as shells. Shells are software programs that serve as an
interface for users. Expert systems consist of two main components: a knowledge base
and an inference engine. The knowledge base contains information related to a specific
problem or phenomenon, encoded using various techniques such as rules, predicates,
semantic networks, and objects.
The inference engine, on the other hand, combines facts and questions using the
knowledge base to generate relevant results. In the context of expert systems, a shell is a
tool designed to simplify the development and deployment process. It is an "expert
system" with an empty knowledge base but equipped with the tools necessary to
populate the knowledge base for a particular application.
Shells also provide the knowledge engineer with additional functionalities such as
knowledge representation mechanisms, inference mechanisms, explanatory components,
and sometimes even a user interface. These development environments have gained
popularity because they allow the creation of efficient expert systems without requiring
extensive programming knowledge. This has made shells a popular choice for
developing expert systems in various knowledge domains.
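The shell idea admits a very small sketch. In the Python fragment below, the inference engine is generic and starts with an empty knowledge base; an "expert system" appears only once rules are loaded into it. The two medical rules are invented placeholders, not taken from Mycin or any real system, and production shells use far more efficient matching (such as the Rete algorithm mentioned earlier) than this naive loop.

# A generic forward-chaining inference engine: it ships "empty" and
# becomes an expert system only when a knowledge base is loaded.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                               # fire rules until nothing new appears
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)            # rule fires: assert its conclusion
                changed = True
    return facts

knowledge_base = [                               # invented rules for one application
    (["fever", "stiff neck"], "suspect meningitis"),
    (["suspect meningitis"], "recommend further tests"),
]
print(forward_chain(["fever", "stiff neck"], knowledge_base))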
1.6 Artificial intelligence, education and technologies: good or bad?
Are arbitrary decisions being made about our identity and our lives? Do technologies foster violence through online gaming? Do they improve education? Do they
democratize knowledge? Who holds the truth? Where is the balance? Finding answers to
these questions is not a simple task. The conclusion is never simple. Technologies cannot
be examined in isolation. Blaming the Internet alone for the spread of false information
or hate speech overlooks the society in which this content is generated, circulated and
shared.
Similarly, when it comes to AI, should we re-evaluate our education system?
Should we adapt our approach to education in the face of AI? It is not just about the
content itself, but also how individuals use and share that information. In today’s
interconnected world, social phenomena are increasingly intertwined, and attributing
responsibility to a single factor, actor, or dimension oversimplifies the issue and
overlooks the intricate contexts.
For decades, and even today, there is a dichotomy in the perception of screens.
They have been vilified as enemies of culture and praised as a means of democratization.
Depending on one’s perspective, screens can be seen as responsible for “the
disappearance of childhood” or as a solution to poor educational performance, social
isolation, and communication gaps within families. Despite ideological divisions, both
perspectives have something in common: they are “media-centric,” that is, they put
excessive emphasis on media and technology in the debate, attributing to them immense
power, either to destroy or to create.
The same applies to Artificial Intelligence and its impact on education. It is crucial to consider not only the technology itself but also how it is used and the context in which it operates. In our complex and interconnected world, blaming a single factor or dimension oversimplifies the issue at hand.
The presence of screens alone does not promote individualism or sociability. They
do not hinder learning or improve the quality of teaching, they are not the cause of
inequality or the catalyst for democracy and equality. Technology does not isolate us or
encourage participation. So where do we go from here? The key is to keep several points in view at once:
First of all, we should not blame technology alone for the spread of false or
discriminatory content on social media platforms.
Nor should we hold technology responsible for the unauthorized use of
individuals' private information by companies or governments.
Finally, we cannot attribute the design of algorithms and artificial intelligence
systems that make decisions for users, discriminate, censor or perpetuate
inequalities solely to technology.
Undoubtedly, technologies have a certain responsibility in all these scenarios.
However, we must also consider the citizens who use these technologies and the urgent
need for them to understand the social, political, economic and cultural implications that
technology and the Internet have on their lives and communities. It is in this context that
education plays a crucial role.
To address current problems arising from the use of the Internet, it is crucial to
have a comprehensive public policy and an education system that prepares teachers and
students to become responsible digital citizens. It is essential that both teachers and
students are able to identify, understand and respond effectively to the challenges
presented by the Internet. They must be aware of their rights and responsibilities in the
digital world and be equipped with the knowledge and skills to defend and assert them
when necessary. The aim is to avoid situations where a single photo or online profile can
have a detrimental impact on someone’s future, or where decisions are made by
algorithms or artificial intelligence systems without human intervention.
1.7 AI: meaning
In our personal lives, AI becomes evident when we capture moments through
photography. The algorithm built into our smartphones can quickly identify and detect
the faces of people present in the image, allowing us to conveniently tag them when
sharing the photos on our social media profiles. This further exemplifies the omnipresent
role of artificial intelligence in our daily activities.
The influence of AI starts from the very beginning of our day. We can tell a smart
speaker to wake us up at a certain time, and it goes further by suggesting the right outfit
based on the day’s forecast. Furthermore, AI plays a major role in our ability to
communicate effectively by providing automatic language translation and even helping
us rectify our spelling mistakes. Banking institutions benefit from AI as it helps efficiently
organize and manage large amounts of data. Furthermore, doctors rely on AI to screen
patients and assess their potential health risks.
Artificial intelligence has seamlessly become an integral part of our everyday lives.
Its presence can be observed in various scenarios such as when a camera on a road
efficiently identifies a car’s license plate, or when we rely on GPS technology to navigate
and find the optimal route. Even when we make a phone call and encounter an automated
system that claims to help us solve a problem, AI is at play. Furthermore, content
platforms leverage AI to recommend suitable movies or songs based on our preferences,
and our mobile phones use AI to recognize our unique fingerprints or faces for security
purposes.
Furthermore, advances in AI technology have led to the development of machines
capable of making phone calls to make restaurant reservations. These machines engage
in conversations that closely resemble interactions between two individuals, including
natural inflections in tone, occasional hesitations, and even a hint of informality.
Surprisingly, the people answering these phone calls are often unaware that they are
communicating with a machine.
The sheer wonder of this technological feat is undeniably captivating, igniting our
imagination and prompting us to imagine the vast possibilities such technology offers.
AI robots play a crucial role in the healthcare industry, enhancing the physical
capabilities of surgeons and significantly aiding in surgical interventions. Moreover, the
implementation of AI systems has the potential to drive new scientific discoveries and
contribute to the growth of the economy.
A notable advantage of AI is its impressive memory capacity, which enables it to
handle extensive calculations and improve productivity in various job functions. A
compelling example of this was demonstrated in a study conducted by the prestigious
Massachusetts Institute of Technology (MIT), where researchers aimed to evaluate the
impact of ChatGPT, a language-based AI system, on document preparation productivity.
In this experiment, 444 people were selected to complete an online writing task; half of them used the AI system and the other half did not. The findings revealed that those who
used ChatGPT exhibited faster and more accurate writing skills compared to those who
did not. Consequently, the study concluded that AI significantly increased productivity
by reducing the time taken to complete tasks and elevating the overall quality of work.
Chapter 2
Artificial intelligence, learning and ethics. How does it work?
For artificial intelligence to be able to perform tasks similar to a human being, it
must collect and store data for future classification and organization. AI then processes
this data to solve tasks, make decisions, and produce results. The AI system is fed with
information, which it stores, analyzes, classifies, and organizes. Its foundation lies in data,
as it identifies patterns and probabilities within that data, encodes it, processes it, and
organizes it to generate a model.
This model is specifically designed to make decisions and provide answers based
on specific instructions. An excellent example of how AI works is demonstrated by
Internet search engines, which can predict and complete words or sentences as we type.
Similarly, AI systems can assess whether a potential customer will be able to repay a bank
loan before it is granted.
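That data-to-model-to-decision pipeline can be sketched in a few lines. In this toy Python example, which is purely illustrative, past applicants are described by two invented features (income and existing debt) plus a label recording whether they repaid, and a decision-tree model then answers for a new applicant.

# Toy pipeline: historical data in, model out, decision on a new case.
# Numbers are invented; a real credit model needs far more data and scrutiny.
from sklearn.tree import DecisionTreeClassifier

X = [[50, 5], [80, 10], [20, 15], [30, 25]]      # past applicants: (income, debt)
y = [1, 1, 0, 0]                                 # 1 = repaid, 0 = defaulted

model = DecisionTreeClassifier().fit(X, y)       # learn patterns from the data
print(model.predict([[60, 8]]))                  # decide on a new applicant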
The underlying mechanism of artificial intelligence is algorithms, which are
systematic sequences of steps that provide logical instructions for calculations, problem
solving, and decision making. Algorithms serve as a means to achieve a desired result.
As an example, consider a cooking recipe, which can be viewed as an algorithm since its
steps are aimed at solving the problem of preparing a meal. However, a recipe alone
cannot make a soup; it requires a person to read and execute the steps.
However, it is feasible to create an AI machine that incorporates this algorithm and
prepares the soup automatically. Examples of AI systems include GPS, automatic
language translators, and fingerprint-recognizing mobile phones, all of which have been
fed data and organized into algorithms to perform specific actions such as suggesting the
best route, translating text, or unlocking a screen.
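The recipe analogy translates directly into code: an algorithm is an ordered list of steps, and the "machine" is whatever executes them. The steps below are invented placeholders for illustration.

# An algorithm as an explicit sequence of steps; here the "machine"
# merely prints each step, where a robot cook would carry it out.
def make_soup():
    steps = [
        "chop the vegetables",
        "boil two litres of water",
        "add the vegetables and simmer for 20 minutes",
        "season and serve",
    ]
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")

make_soup()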
Algorithms serve several purposes, one of which is to predict behaviors. This is
evidenced by the algorithms developed by Netflix and Spotify, which analyze users' preferences to suggest movies, series, or songs of interest. Likewise, language-based AI
systems have become advanced enough to answer our queries and generate new content
based on the information they have been trained on. These systems have become so
integrated into our daily lives that we often rely on them for reminders, guidance, and
decision-making. We are amazed by AI’s capabilities and often idealize its benefits.
However, it is important to recognize the potential dangers of naturalizing AI and
algorithms without critically analyzing their design and impact. To illustrate this point,
we can compare it to the story of two fish that were so accustomed to their aquatic
environment that they never questioned its existence. Similarly, we have become so
accustomed to AI in our lives that we rarely stop to consider its implications. We must
therefore take a step back and critically examine how AI and algorithms are designed and
implemented.
To fully understand the impact and functioning of artificial intelligence, it is
crucial to make explicit the mechanisms that drive its decision-making process,
particularly when those decisions directly affect our lives. It is critical to avoid falling into
the trap of techno-chauvinism, which assumes that technology always provides the
solutions we seek or need.
While it is undeniable that AI brings numerous benefits to our daily lives, such as
advances in healthcare, the development of life-saving medicines and potential solutions
to environmental problems, we must not overlook the ethical dimensions of its operation
and design. It is important to recognize that there are limits to what we should do with
technology, and, similarly, there are limits to what technology should do with its users.
Our priority must always be to ensure that AI systems are used in ways that serve the
best interests of people, societies and the environment. Importantly, the enormous
benefits of artificial intelligence are of little importance if the foundations on which it rests
are shaky.
2.1 Discrimination and inequality
In contrast to the idealized image, there are growing concerns and calls for
attention around artificial intelligence. While the potential of AI in our daily lives is
undeniably fascinating, it is important to recognize that not everything is admirable. The
design and operation of AI systems have raised global red flags and concerns. To fully
understand the nature of these concerns, let’s take a deeper look at how AI works.
AI relies heavily on data. It stores, organizes, and classifies this data and then uses
it to build models, respond to instructions, make decisions, and produce results. To
illustrate this, let us consider a simple example of an AI system designed to differentiate
between apples and oranges. To train the system, we need to provide it with a dataset
consisting of several images of apples and oranges. However, if we only feed the system
images of red apples and not green apples, the machine learning system might infer that
all apples are red. Consequently, it might not recognize a green apple as an apple due to
its training. This example highlights the critical role of the training dataset in AI systems.
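A toy version of this red-apple problem fits in a few lines of Python. Each fruit is reduced to two invented colour intensities (red, green), and a nearest-neighbour rule stands in for the machine learning system; because the training set contains only red apples, a green apple is confidently mislabelled.

# Biased training data in miniature: no green apples were ever shown,
# so the green apple ends up closest to the oranges. Numbers are invented.
def nearest_label(sample, training):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda item: dist(item[0], sample))[1]

training = [
    ((0.9, 0.1), "apple"),     # red apple
    ((0.8, 0.2), "apple"),     # red apple
    ((0.9, 0.5), "orange"),
    ((0.8, 0.6), "orange"),
]
green_apple = (0.2, 0.9)       # nothing like this appeared in training
print(nearest_label(green_apple, training))      # -> "orange" (wrong)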
An alarming case from 2014 involving Amazon exemplifies the potential dangers
of AI systems. The company sought to automate the process of staff recommendation and
hiring by developing an AI system that would select the top five candidates for a job from
a pool of one hundred resumes. However, a major problem arose when the AI system
designed by Amazon programmers showed a bias against women. It failed to consider
resumes from candidates who had attended women’s colleges and even marginalized
resumes that included the word “woman.” This incident demonstrates the serious
repercussions that can arise from flawed AI systems.
The AI system is responding to the goal of differentiating apples from oranges
based on the data it has been trained on. In the case of the aforementioned example, if it
were fed exclusively images of red apples, it would never identify green apples as apples.
While this example may seem harmless, it serves as a reminder of the potential risks
associated with poor AI design. In more serious cases, AI systems that apply similarly
flawed logic could lead to major problems.
Returning to the Amazon case: the model was clearly biased against women. This problem arose from the faulty data on which the AI system had been trained. Over the course of ten years, the company had hired mostly male engineers, so the models were trained largely on men's resumes. As a result, the AI had learned to recommend hiring only men. Since the information fed into the system was based on men's CVs, it is not surprising that it did not give women's CVs fair consideration.
It is worth noting that the lack of diversity in Amazon's workforce predates the implementation of the AI system, with the majority of employees being men. The AI system perpetuated this inequality and left no room for change: its designers had created a model that maintained and reinforced inequality in hiring practices. The hiring tools of the future were being shaped by the discriminatory practices of the past and present. Consequently, the result was a discrimination machine that perpetuated itself while posing as technically neutral.
The above is not an isolated incident, as there have been other instances where AI
has produced discriminatory outcomes. For example, the Wall Street Journal conducted
an analysis that revealed differential pricing on Staples.com, where customers were
charged varying prices for a simple stapler based on the zip code they provided during
registration. Similarly, researchers at Northeastern University found that customers
browsing the HomeDepot.com store were offered different prices depending on whether
they accessed the website from a mobile device or a desktop computer. These examples
highlight how AI systems can inadvertently amplify social inequalities under the guise
of neutrality.
These examples serve as clear evidence of the biases that artificial intelligence can
perpetuate. Addressing and rectifying these biases is critical to ensuring that AI systems
are fair and equitable to all people. Furthermore, a recently designed algorithm produced
surprising results during testing. The algorithm was given several analogies to complete and correctly answered ones such as "man is to king what woman is to queen" and "Paris is to France what Tokyo is to Japan." However, the problem arose when the algorithm was asked to complete "man is to computer programmer what woman is to..." and answered "housewife."
The algorithm's response revealed a discriminatory result, as it had not been trained with data that included female programmers. This omission highlights how biases within AI systems can negatively impact people's lives.
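Analogy tests of this kind are typically computed with word vectors: the system answers "man is to king as woman is to ?" by vector arithmetic, king - man + woman, and returns the nearest word. The tiny hand-made vectors below are invented so that the sketch reproduces the biased behaviour reported for real learned embeddings; they are not real data.

# Invented three-dimensional "word vectors" that mimic the reported bias.
words = {
    "man": (1.0, 0.0, 0.2), "woman": (0.0, 1.0, 0.2),
    "king": (1.0, 0.0, 0.9), "queen": (0.0, 1.0, 0.9),
    "programmer": (0.9, 0.1, 0.6), "housewife": (0.1, 0.9, 0.6),
}

def analogy(a, b, c):
    # answer "a is to b as c is to ?", excluding the input words
    target = tuple(wb - wa + wc
                   for wa, wb, wc in zip(words[a], words[b], words[c]))
    def dist(v):
        return sum((x - y) ** 2 for x, y in zip(v, target))
    return min((w for w in words if w not in (a, b, c)),
               key=lambda w: dist(words[w]))

print(analogy("man", "king", "woman"))           # -> queen
print(analogy("man", "programmer", "woman"))     # -> housewife (the bias)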
AI systems are also known to exhibit significant racial biases, as evidenced by incidents such as the viral video of an automatic soap dispenser in 2017. In this video, the machine consistently dispensed soap when a white person placed a hand underneath it, but failed to do so when a black person repeated the same action. This discriminatory behavior persisted even after multiple attempts, indicating that the system behind the machine had been designed and tested with incomplete and flawed data, leading to biased and racist results.
Banks use algorithms to enable AI systems to make predictions about loan
approvals and rejections. Similar to the situation with Amazon and its resume screening
tool, banks provide AI with data and information about people who have been granted
loans in the past and then ask the system to analyze and classify this information. The
goal is to generate a model that can be used to determine whether to approve or reject
future credit applications.
Unfortunately, in the United States, problems arose with this AI system designed
specifically for banks. It was discovered that the data fed to the AI was based on people
who had already received loans from the bank, most of whom were white and belonged
to the economic middle class. As a result, when Black, Indigenous, and poor people
applied for loans, they continued to be disproportionately rejected. This was mainly due
to the fact that historically, very few of these people had been granted loans.
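One routine check for this kind of problem is to compare a model's approval rate across demographic groups. The records in the following sketch are invented; the point is simply that a history skewed toward one group shows up directly in such a comparison.

# Compare approval rates by group; a large gap is a red flag worth
# investigating, although fairness cannot be reduced to one number.
decisions = [                                    # invented (group, decision) pairs
    ("A", "approve"), ("A", "approve"), ("A", "reject"), ("A", "approve"),
    ("B", "reject"), ("B", "reject"), ("B", "approve"), ("B", "reject"),
]

for group in ("A", "B"):
    outcomes = [d for g, d in decisions if g == group]
    rate = outcomes.count("approve") / len(outcomes)
    print(f"group {group}: approval rate {rate:.0%}")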
Facial recognition systems also pose risks of bias and discrimination. Security
agencies such as the U.S. Transportation Security Administration developed programs
like SPOT to monitor travelers’ facial expressions after 9/11, with the goal of
automatically identifying potential terrorists. However, this approach relies on 94 criteria
that indicate stress, fear, or deception. Unfortunately, people who are naturally stressed,
uncomfortable with questioning, or who have negative experiences with law
enforcement or border control may be unfairly disadvantaged and receive higher scores.
One of the risks inherent to facial recognition systems is the scarcity of data
available to train them and their inability to consider contextual factors. They simply
capture a snapshot of the moment, without considering the nuances of individual
situations. The impact of AI on perpetuating inequality is a complex issue that is often
difficult to fully understand.
A major challenge is that people who are denied loans, for example, may never fully understand why their application was rejected or realize that the decision was
made by an artificial intelligence system operating on biased models and designs. In 2023,
Human Rights Watch, a prominent human rights organization, revealed that a World
Bank-funded algorithm known as Takaful was excluding eligible families in Jordan from
receiving financial aid. Takaful classifies families based on 57 socio-economic indicators,
but applicants argue that this calculation does not accurately reflect their economic
circumstances and oversimplifies their situation, leading to unfair and inaccurate results.
The algorithm’s reliance on indicators such as water and electricity consumption,
which do not necessarily correlate with poverty, further highlights the system’s flaws.
Some families even believed that owning a car, regardless of its age and need for work,
negatively affected their ranking. Human Rights Watch found that the algorithm’s
statistical objectivity masks a more complex reality, where economic struggles and efforts
to overcome them are often invisible to the algorithm.
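A deliberately simplified sketch shows how a proxy indicator can distort such a ranking. The weights and families below are invented for illustration and are not Takaful's actual indicators: because heavy electricity use is scored as a sign of wealth, the family with the lower income ends up ranked as less in need.

# Invented scoring rule: higher score = judged more in need of aid,
# and electricity use is (wrongly) treated as a proxy for wealth.
def need_score(family):
    return -family["income"] - 0.5 * family["electricity_kwh"]

families = {
    "A": {"income": 300, "electricity_kwh": 80},    # score: -340
    "B": {"income": 220, "electricity_kwh": 400},   # poorer, but score: -420
}

ranked = sorted(families, key=lambda name: need_score(families[name]),
                reverse=True)
print(ranked)    # -> ['A', 'B']: the poorer family B is ranked less in need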
Artificial intelligence (AI) has a significant impact on people's daily lives, often exacerbating existing inequalities and perpetuating discrimination. Its influence can be seen in a
variety of aspects, such as determining visa approval, assessing bank loan applications,
selecting job candidates, awarding student scholarships, and allocating social subsidies
to low-income individuals.
This influence is particularly worrying because AI systems may rely on incomplete or erroneous data, which can lead to discriminatory outcomes. Recognition of these biases led the 193 member countries of UNESCO to sign the Recommendation on the Ethics of Artificial Intelligence in 2021. The agreed document
recognizes that while AI technologies can bring immense benefits to humanity and all
countries, they also raise significant ethical concerns. One such concern is the potential to
embed and exacerbate biases, leading to discrimination, inequality, digital divides,
exclusion, and posing threats to cultural, social and biological diversity, as well as
creating social and economic divisions.
Artificial intelligence (AI) has the potential to reproduce and perpetuate existing
inequalities and is far from the neutral and objective entity it is often perceived to be. AI
systems are not simply mathematical tools, but rather social actors that can be influenced
by discrimination and prejudice. Despite claims of objectivity, algorithms can shape
meaning and make controversial decisions.
The effectiveness of AI depends on the quality and biases present in the training
data it is exposed to. The decisions and intentions of the company behind the AI system
play a crucial role in its design. Training data acts as a basis for AI predictions and shapes
its perception of the world. However, simplifications made by machine learning systems
can lead to serious inconsistencies.
This becomes problematic when AI systems make discriminatory classifications
and labels that directly impact people’s lives, reinforcing prejudices and stereotypes. It is
clear that artificial intelligence is not neutral. Biased training data can lead to erroneous
and discriminatory results. For example, if an AI algorithm consistently associates certain
characteristics such as gender, social class, age or ideology with ineligibility for bank
loans, it is engaging in discrimination.
Studies have shown that AI algorithms can also influence future opportunities and
careers by selectively presenting job offers to certain people based on their educational
background. It is therefore crucial to critically examine the ethical dimension of
algorithms, including their construction, ranking methods, and biases. These rankings
always carry values, and when they perpetuate discrimination, they distort our
perception of reality.
2.2 Arbitrary decisions
In September 2016, a well-known Norwegian writer named Tom Egeland caused
a stir on social media when he shared a famous photograph from the Vietnam War on his
Facebook page. The image, taken in 1972 by AP news agency reporter Nick Ut, captures
the harrowing moment of a 9-year-old girl running naked from a napalm bombing of her village carried out by South Vietnamese forces. This powerful photograph, known as "The
Napalm Girl,” won the prestigious Pulitzer Prize and has become one of the most iconic
images of the 20th century.
Egeland’s intention in sharing this photo was to shed light on the horrors of war,
specifically the unimaginable suffering endured by innocent children. However,
Facebook’s algorithm, designed to detect and remove inappropriate content, flagged the
image due to the girl’s nudity. Egeland’s profile was consequently suspended, sparking
outrage across Nordic society. Many saw Facebook’s actions as a form of censorship, an
attempt to silence an important historical document.
The outcry against Facebook’s decision grew even more when Aftenposten, one of
Norway’s most widely read newspapers, decided to stand in solidarity with Egeland and
published the same photograph on its own profile. Within hours, the newspaper received
an email from Facebook demanding the removal of the image. This prompted
Aftenposten’s editor-in-chief, Espen Egil Hansen, to address Mark Zuckerberg directly
in an open letter printed on the newspaper’s front page. In his letter, Hansen expressed
his refusal to comply with Facebook’s request to remove the photograph. He criticized
the social media giant for limiting freedom of expression rather than supporting it and
condemned its authoritarian approach.
This bold move inspired others in Norway to follow suit, sharing the image on
their own platforms, only to be met with the same demands from Facebook to remove it.
The company justified its actions by citing its policy against posting explicit content. The
incident involving the “Napalm Girl” photograph and Facebook’s response sparked an
important debate about the power and responsibility of social media platforms in
policing the content shared by their users. It also raised questions about the balance
between protecting users from harmful or offensive material and preserving the right to
freely express important historical events.
The issue in question received significant attention, leading even Norwegian
Prime Minister Erna Solberg to express her concern about it. Solberg took to her social
media account to express her disagreement with Facebook’s decision to censor
photographs like the one in question. In her post, she emphasized the significance of the
image, stating that it holds a place in universal history as it captures the heartbreaking
reality of a young girl fleeing the horrors of war.
To further emphasize her point, the Prime Minister shared the iconic photograph
along with her words. In her closing remarks, Solberg highlighted how acts such as
Facebook censorship only serve to limit freedom of expression. However, following a
massive public outcry, the company eventually rectified its stance, reversing its decision
and reinstating both the censored images and the deleted accounts of the people who had
shared the photograph. This incident serves as a clear and serious example of the
potential dangers associated with algorithms and artificial intelligence systems. Rather
than being neutral, these systems are designed with certain biases and their decisions can
have far-reaching consequences, even affecting the very foundations of democracy, as
this particular case demonstrates.
A similar incident occurred on Instagram with a promotional poster for Pedro Almodóvar's film "Madres paralelas" (Parallel Mothers). The film focuses on the story of Janis and Ana,
two women who meet in a hospital room where they are about to give birth. The poster
showed a nipple with a drop of milk, which Instagram deemed "erotic or pornographic
content." Surprisingly, Instagram removed all posts featuring the poster, including those
by its own designer, Javier Jaén.
Jaén expressed his disappointment and re-shared the image, highlighting the absurdity of the situation. He argued that Instagram was wrong to label his work as
dangerous and pornographic, as it simply depicted a natural and universal image
associated with birth. Instagram defended its decision by stating that its technology
cannot recognize context. However, this argument is insufficient to justify the ban, as the
algorithm must be trained to understand context.
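A minimal sketch may make this limitation concrete. The rule below is entirely invented for illustration and does not describe any platform’s actual system; it simply shows how a moderation filter that sees only detected labels, never meaning, treats an award-winning historical photograph and explicit content identically:

    # Toy content-moderation rule: it sees labels, never context.
    # All labels here are hypothetical, not a real platform's taxonomy.

    def moderate(labels: set) -> str:
        # Return 'remove' or 'allow' based solely on detected labels.
        banned = {"nudity"}
        if labels & banned:
            return "remove"  # a war photograph and pornography look identical here
        return "allow"

    # A historical image is flagged exactly like explicit content:
    print(moderate({"nudity", "war", "documentary"}))  # -> remove
    print(moderate({"beach", "family"}))               # -> allow

Teaching a system to weigh context, as the text notes, is precisely the harder problem.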
Instagram eventually apologized and reinstated the posts, allowing the poster to
be shared on the platform. However, Instagram’s initial decision, similar to Facebook’s
with the Norwegian writer, had broader implications. It posed a risk to democracy and
individual freedoms, as it restricted the dissemination of art and ideas. This incident
serves as a reminder of the power and responsibility social media platforms have in
shaping public discourse and the need for them to strike a balance between regulating
content and preserving freedom of expression.
In another incident, Guardian journalist Carole Cadwalladr shed light on the
potential dangers of search engine algorithms. In her article, she recounted an experiment
in which she began typing a question about the Holocaust into Google’s search engine. To her surprise, Google automatically completed the phrase with “Did the Holocaust happen?”, leading her to a list of online pages. The first link was to a neo-Nazi website
called “Stormfront,” which claimed that the Holocaust never happened.
This discovery raised concerns about the way search engines prioritize and present
information. It highlighted the need for algorithms to be more discerning and cautious to
prevent the spread of misinformation and hate speech. The Cadwalladr experiment
emphasized the potential consequences of relying solely on technology without
considering the context and implications of the information being shared. It serves as a
reminder that platforms like Google must take responsibility for curating accurate and
trustworthy content to ensure the dissemination of truthful information.
Many people tend to rely solely on the first search result they find when using a
search engine. They often hesitate to explore additional websites that might offer
different perspectives or content to compare. This behavior stems from the belief that the
top-ranked result has the most authority and expertise on the given topic. However, it is
important to recognize that the top listing is often determined by those who have paid
for priority placement, rather than being a true reflection of credibility.
Google’s algorithm, which determines how websites are ranked, is designed with
its own economic and commercial interests in mind, which may not coincide with the
best interests of users. While Google claims to operate neutrally, its prioritization of
content involves subjective decision-making. This is worrying when one considers that
most users tend to stick with the first link they find, which could lead them to unreliable
sources, such as a neo-Nazi movement. Such a hierarchy based on economic factors poses
significant risks. The potential consequences are evident when a person unquestioningly
accepts Holocaust denial thanks to Google’s ranking system. Google plays a role in
shaping the world not only by presenting it but also by actively participating in its
creation. The lack of transparency regarding the methods used to rank websites prevents
us from assessing whether Google truly serves the interests of users or is biased in favor
of its own commercial goals. The company has the ability to hide specific content that it
deems undesirable for users to see.
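To see how economic weighting can reorder what users encounter, consider a deliberately simplified ranking function. All scores and the weighting below are invented; Google’s real ranking signals are not public, which is exactly the transparency problem at issue:

    # Toy search ranking in which paid placement competes with relevance.
    # Every number here is invented for illustration.

    results = [
        {"url": "encyclopedia.example", "relevance": 0.9, "paid_boost": 0.0},
        {"url": "shop.example",         "relevance": 0.4, "paid_boost": 0.8},
        {"url": "blog.example",         "relevance": 0.7, "paid_boost": 0.0},
    ]

    def rank(items, commercial_weight=1.0):
        # A hidden weight decides how much payment matters relative to relevance.
        return sorted(items,
                      key=lambda r: r["relevance"] + commercial_weight * r["paid_boost"],
                      reverse=True)

    for r in rank(results):
        print(r["url"])
    # -> shop.example comes first (score 1.2), despite being the least relevant page.

Because the weight is hidden, users see only the final order and have no way to tell relevance from commerce.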
This issue becomes even more problematic when you consider that this hidden
content often ends up being placed at the bottom of search results, where people rarely
land. The influence of AI systems goes beyond search engines and extends to browsers
and social media. These systems make decisions on our behalf, dictating what we can and
cannot share, as well as what we can and cannot view. This significantly impacts our
perception and understanding of the world, as these systems shape our reality by offering
us curated selections and hierarchies. For example, how does Netflix’s recommendation
system determine which movies or series to suggest? How does Amazon prioritize
certain books in our searches? Why do Facebook and Twitter highlight specific stories
and news on our profiles over others? These are questions that need to be answered in
order to fully understand the extent of AI’s influence on our lives.
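Although the actual systems are proprietary, their underlying logic can be suggested with a toy model: recommend to a user whatever the most similar other user has already watched. The viewing data below is invented, and real platforms rely on far richer signals, but the principle of curation by similarity is the same:

    # Toy collaborative filtering: suggest what the most similar user watched.
    # The viewing histories are invented for illustration.

    watched = {
        "ana":   {"Drama A", "Thriller B", "Documentary C"},
        "luis":  {"Thriller B", "Documentary C", "SciFi D"},
        "marta": {"Comedy E"},
    }

    def recommend(user):
        mine = watched[user]
        others = [u for u in watched if u != user]
        # Pick the user with the largest overlap in history (Jaccard similarity)...
        nearest = max(others,
                      key=lambda u: len(watched[u] & mine) / len(watched[u] | mine))
        # ...and suggest whatever they watched that this user has not.
        return watched[nearest] - mine

    print(recommend("ana"))  # -> {'SciFi D'}, inferred from Luis's overlapping tastes

Even at this scale, the suggestion depends entirely on data and similarity choices that the user never sees.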
2.3 Fingerprints
Artificial intelligence relies on data to function, data that includes the information
shared daily on the Internet. This data can be seen as a unique digital footprint. Our online activities, such as Internet searches, music
preferences and movie choices, as well as our interactions on social media, reveal aspects
of our lives, and privacy is no longer guaranteed.
The Internet has become the stage on which our private lives play out in the public domain. In today’s society, value lies in personal display and the desire to be
noticed and acknowledged. The Internet has created a culture that emphasizes visibility
and constant connectivity. This new order, driven by technology, prioritizes being seen
and sharing experiences. It has become a common belief that if an experience is not shared
and does not contribute to the global exchange of information, it loses its meaning.
The proliferation of screens allows individuals to put themselves on display and
serve as proof of their existence. The mantra of the 21st century could be summed up as
“I show, therefore I am.” Every action taken on the Internet leaves a digital footprint,
encompassing websites visited, videos or photographs viewed, created and shared,
comments made, friends contacted, searches performed, articles read and even music and
movies enjoyed. For many, digital identity is more revealing than their real-life persona,
reflecting their true interests, concerns and motivations behind specific Internet searches.
Importantly, this digital footprint is public and accessible to anyone, making it difficult
to erase or hide. It has the potential to persist indefinitely.
Our digital footprints are meticulously recorded and documented, and tech companies
act as observers and exploiters of this information. They foster and promote this visibility
because their algorithms and AI systems thrive on the traces we leave on the internet.
YouTube’s famous slogan, “Broadcast Yourself,” exemplifies this mindset.
Through sensor networks, surveillance cameras, and website cookies, tech
companies constantly monitor and learn about our driving habits, reading preferences,
web searches, hobbies, medication use, and various other aspects of our lives. Oddly
enough, we remain oblivious to how these companies use this knowledge to influence
our daily decisions, as well as those made by other companies.
The more we rely on search engines and social media platforms to satisfy our
wants and needs, the more power and influence these entities will have in our lives. Their
real strength lies in their ability to include, exclude and sort information. Their motto
seems to be “tell us everything, don’t hold back. The more you reveal, the better we can
help you. And it won’t cost you anything!” However, this notion is nothing more than a
myth. Every click we make on the web has a price. In fact, the footprints we leave on the
Internet are a source of economic profit. Behind every click lies a technology company
armed with programs, algorithms and artificial intelligence systems that calculate how
to best use and sell this information to interested parties, whether to offer products,
services or ideas.
All of this happens even without explicit permission from the user. Most people
are unaware of how their digital footprints are used and how they shape their digital profiles
and identities. Very few people realize that a simple “like” on Facebook has the potential
to hinder future job opportunities. Tech companies design artificial intelligence systems
that, based on our own data, can recommend movies or decide whether to hire us.
Facebook defines our essence, Amazon determines our desires, and Google shapes
our thoughts. These entities shape our opportunities. Our online activities are
meticulously studied by algorithms, allowing companies to better understand users and offer them personalized services, products, and ideas based on their digital identities. That is why it
is often said that there is no such thing as a free Internet. Someone always bears the cost,
and in this case it is the user themselves who pays with access to their data and private
life in exchange for the supposedly “free” digital services they receive and use.
Information therefore becomes one of the most valuable assets offered by people when
they browse the online world. Users voluntarily give up their data in exchange for these “free” digital services.
In today’s digital age, people willingly give up their private information in
exchange for various forms of gratification. Whether it’s the desire to improve physical
well-being or to maintain constant communication with loved ones, people are often
willing to overlook the potential risks associated with sharing personal information. The
prevalence of social media platforms further exemplifies this phenomenon, as people
willingly divulge sensitive details, such as photographs, locations, and personal
information, in exchange for social validation and approval in the form of likes and
comments.
This trend is further accentuated by the staggering sales figures for smart speakers,
which reached a global total of 147 million units in 2019. However, it is worrying that a
significant portion of those who buy these devices are unaware of the extent to which
their conversations are being recorded and the purposes for which this information is
being used.
The collection of data about people’s online behavior and identity by social media
and search engines is an ongoing and significant process. Simply being connected to a
digital device allows these platforms to collect information about users, including their
preferences, identities and desires. Our online presence not only grants us access to vast
amounts of information, but also transforms us into data sources.
This understanding has been emphasized through the prevalence of free social
media and apps, where it has become clear that when something is offered for free, it
often means that we are the ones being exploited. In exchange for the services provided
by these companies, we unwittingly contribute to their profits by giving them our
attention, which can be sold to advertisers, as well as our personal data, which feeds their
algorithms. This same pattern is now being repeated with AI bots, albeit on a larger scale
and with new complexities. Even though many users are unaware of how tech companies
use their personal information, they cannot ignore this mechanism. Arguing that one
does not care about the right to privacy because one has nothing to hide is like saying
that one does not care about freedom of speech because one has nothing to say.
Companies accumulate immense amounts of knowledge about us, but
unfortunately it is not for our benefit. Mark Zuckerberg, the founder of Facebook, once
boasted that the platform would eventually know every book, movie, and song a person
had ever consumed in their life. Furthermore, Facebook’s predictive models would even
suggest which bar to visit when a person arrives in a new city. Our personal information
becomes the main source of income for technology companies, who often sell this data to
public or private entities upon request. For example, the supermarket that appeared on
my screen with its offers while I was reading the newspaper paid the search engine to
access my user and consumer profile. It is important for all people to understand the
information that technology companies possess about them, the reasons for its collection,
the authorization process, and how it is used.
The extent to which tech companies can gather knowledge about us is quite
significant, particularly through our online activities and clicks. In fact, they can even
predict our behaviors with a high degree of accuracy. For example, a study conducted on
a social network with 90,000 users revealed that the company’s algorithm was
able to predict people’s responses without any errors. This was achieved by analyzing
the “likes” that users gave to various web pages, images, and videos to which they were
exposed. Surprisingly, the algorithm needed only 10 “likes” to predict a person’s responses more accurately than their coworkers could.
It took 70 likes to beat friends’ predictions, 150 to beat family members, and 300 to
beat spouses. Another example is Netflix, which stores vast amounts of information about
millions of users. This includes their preferred film genres, the series they watch, the time
of day they choose to watch, their viewing habits (such as fast-forwarding or rewinding), the time it takes them to finish a show, and the devices they use to watch. All of this data
is collected, stored, and used by the platform. Today, access to big data is highly valued
as a reliable source of information. Indeed, the economy is based on collecting data about
people’s desires in order to make informed decisions based on this information.
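The mechanism behind such predictions can be suggested with a deliberately crude sketch. The page weights below are invented; the real study fitted a statistical model to tens of thousands of actual profiles, which is what made a handful of “likes” so revealing:

    # Toy trait prediction from "likes": each page carries an invented weight,
    # positive for extroversion, negative for introversion.

    page_weight = {
        "parties": +2.0, "football": +1.0, "memes": +0.5,
        "poetry": -1.5, "chess": -1.0, "hiking": -0.5,
    }

    def predict(likes):
        score = sum(page_weight.get(page, 0.0) for page in likes)
        return "extrovert" if score > 0 else "introvert"

    print(predict({"poetry", "chess", "memes"}))  # -> 'introvert' (score -2.0)

A fitted model does the same thing with thousands of pages and carefully estimated weights, which is why so few “likes” suffice.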
Other people accessing the same online newspaper may not find the promotional
offers from my local supermarket at the bottom of their screen, especially if they reside
far away. Instead, they will be exposed to different advertisements that are tailored to
their preferences based on their Google searches or information they have shared on their
social media platforms. For example, if they have collected data about Rio de Janeiro by
reading news or performing online searches, they will undoubtedly receive
advertisements related to tourism agencies, flights, hotels, and tours in this Brazilian city.
This fact is the result of a crucial mechanism employed by algorithms and artificial
intelligence systems that analyze user data, known as personalization.
Personalization in search results is vital for businesses to ensure their financial
success. The more detailed a user’s profile is, the more effective the algorithm and AI
system will be, and consequently, the more profitable the sale of that user’s profile will
be to those interested in targeting that particular type of customer. A candy manufacturer
or retailer, for example, would be willing to pay a higher price for a list of people who
have searched for the term “chocolate” on Google. This is precisely what AI and the
algorithm strive to create and sell. However, the personalization generated by algorithms
and AI systems can lead to significant social risks and problems, such as the formation of
digital bubbles.
A digital bubble refers to the realm of personalized messages and content that
users receive while browsing the Internet, which caters exclusively to their specific
interests. Consequently, individuals only encounter content that aligns with their own
perspectives, preferences, and beliefs, while content that contradicts their views is filtered
out. This phenomenon effectively isolates users within their own bubbles, where they
only interact with like-minded individuals who share similar tastes and interests. Over
time, this can result in confirmation bias, which is the tendency for people to seek out and
select information on the Internet that confirms their existing beliefs, thereby reinforcing
their own preconceived notions.
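The feedback loop that produces such a bubble is simple enough to simulate in a few lines. The topics and the user’s behavior below are invented, but they capture the dynamic: an early preference, amplified by the feed, soon crowds out everything else:

    # Toy filter-bubble loop: the feed shows whichever topic was clicked most,
    # and the user can only click what the feed shows.

    from collections import Counter

    topics = ["politics", "sports", "science", "culture"]
    clicks = Counter({"sports": 1})  # a single early click starts the loop

    for day in range(5):
        shown = clicks.most_common(1)[0][0]  # the "algorithm" picks the top topic
        clicks[shown] += 1                   # the user, seeing only that, clicks it
        print(f"day {day}: feed shows {shown!r}")

    print([t for t in topics if t not in clicks])  # never shown: the other three topics

One early click is all it takes; the loop does the rest.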
Digital bubbles create barriers that prevent the free flow of ideas and hinder the
exchange of knowledge between people who have different perspectives. These bubbles
effectively segregate people with contrasting viewpoints, and only allow those who share
the same ideas and concerns to interact within them. Consequently, users within these
bubbles tend to ignore content that does not align with their own beliefs, leading to a
reduction in their understanding and decision-making ability.
Within these isolated bubbles, individuals are exposed only to a limited range of
news and information that caters to their specific interests, thereby impoverishing their
knowledge of broader social issues. In a participatory democracy, it is crucial that citizens
stay informed about social problems, even if they have not initially expressed interest in
them. Issues such as malnutrition, poverty, illiteracy, immigration and the situation of
the disabled affect the whole of society and require the attention of all individuals.
Algorithms should not have the authority to exclude such important topics simply
because an individual showed no interest in them. In a complex world, all aspects of life
are interconnected and have an impact on our lives, even those issues that may not be
immediately relevant to our personal concerns. Therefore, it is essential for democracy to
foster individuals who are able to think beyond their own interests.
Unfortunately, the way social media platforms such as Facebook or Twitter work
perpetuates this problem. The news and information people find on their profiles are
tailored to their interests, based on their online activities or the preferences of their friends
and contacts. As a result, individuals perceive and construct a biased reality that is
influenced solely by the concerns of their social media connections or by what algorithms
have determined may interest them based on their previous searches. These algorithms,
designed to keep individuals within their own digital bubbles, pose a serious threat to
society, as they undermine the principles of pluralism, diversity and coexistence of
different viewpoints that are fundamental to a healthy democracy.
According to UNESCO, a lack of transparency in AI tools prevents individuals from understanding the decisions those tools make. It is important to note that
artificial intelligence is not impartial and can produce discriminatory or biased results.
This highlights the need for transparency and understandability in how algorithms
operate and the data on which they are trained. While ethical concerns should not impede
progress and innovation, they should foster opportunities for ethically conducted
research and innovation that align AI technologies with human rights, fundamental
freedoms, values, principles and moral and ethical considerations.
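In practice, the transparency UNESCO calls for can begin with something as modest as publishing outcome rates per group. Below is a minimal sketch of such an audit, run on invented decisions rather than any real system’s output:

    # Minimal fairness audit: compare an automated decision's approval rate
    # across groups. The records below are invented for illustration.

    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
    ]

    def approval_rates(records):
        rates = {}
        for group in {g for g, _ in records}:
            outcomes = [approved for g, approved in records if g == group]
            rates[group] = sum(outcomes) / len(outcomes)
        return rates

    print(approval_rates(decisions))  # -> {'group_a': 0.75, 'group_b': 0.25}

A gap this large does not prove discrimination by itself, but it is exactly the kind of result that transparency rules would force a developer to explain.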
It is crucial that the development and use of these technologies be guided by sound
scientific research and ethical analysis and evaluation. Today, no one can ignore artificial
intelligence. No one can claim indifference or disinterest because they do not personally
use it. AI has already become an integral part of our lives and plays an important role in
making numerous decisions.
Chapter 3
What does artificial intelligence do?
Artificial intelligence language systems have sparked global controversy and intense debate. One area of particular interest is AI’s groundbreaking ability to
generate stories, which initially seems very appealing. In fact, this phenomenon is already
happening around the world. A man named Tim Boucher from the United States, for
example, has astonishingly claimed to have written almost 100 books using artificial
intelligence. These books are priced between $2 and $5, and each took him between 6 and 8 hours to produce, with some written in as little as 3 hours. As of 2023, Boucher claims
to have written a staggering 97 books with the help of AI. However, specialists around
the world have begun to raise doubts and pose thought-provoking questions. Yuval
Harari raises the question of whether we can imagine a world where AI creates texts,
songs, and even TV series. The implications of such a reality are beyond our
understanding. The consequences of artificial intelligence dominating and shaping
culture are uncertain and potentially profound.
Australian musician Nick Cave expressed outrage when an artificial intelligence
language system, ChatGPT, created a song that he found flawed. Cave argues that songs
are born out of human suffering and the intricate internal struggles that come with
creating art. “As far as I know,” Cave concludes, “algorithms don’t feel or suffer.”
Journalists also point out that AI, which is unaffected by illness, uninterested in pay rises
and unconcerned with vacation time, can produce articles that would take humans hours
to write in just a few seconds.
These articles are accurate but lack the warmth and wit that human creativity
brings. In 2023, Hollywood screenwriters went on strike for the first time in response to
the perceived threat that AI could replace their work. The union, which represents 11,500
screenwriters in the North American audiovisual industry, firmly maintains that art
cannot be created by a machine. Filmmaker and screenwriter Eric Heisserer claims that
the heart and soul of storytelling would be lost if AI took over. Heisserer further protests
against the use of scripts written by union members to train artificial intelligence systems.
Among the screenwriters interviewed, only a few can imagine the idea of AI effectively
doing their job. However, the mere fact that studios and platforms are willing to explore
this possibility is distressing to them. They fear that executives will compromise
creativity for the sake of profitability.
Even though AI is capable of doing 99% of a job competently, that doesn’t mean it
can do it flawlessly. There are certain cases where that remaining 1% can make a
significant difference, such as distinguishing between simply serving a customer by
selling empanadas (a task a robot can easily perform) and providing companionship to
someone who may be feeling lonely.
It is important to recognize that AI has its limitations. AI language systems act as
conversational agents, engaging in dialogue, exchanging ideas, generating texts, offering
advice, suggesting options, making decisions and influencing our behavior. However,
can these capabilities have an impact on democracy? This is a question that experts in the
field have attempted to answer. They argue that it could pose a threat to democracies as
democracy is fundamentally based on public conversation. Democracy thrives when
people converse with each other. If AI were to dominate these conversations, democracy
as we know it would cease to exist.
What is required of the industry that creates, designs and promotes artificial
intelligence? The organization AI Now and UNESCO have presented some
recommendations in their reports:
First, AI systems need to be transparent to address bias. This includes disclosing
where and how AI systems are used and for what purpose.
Second, companies should conduct extensive testing before launching AI systems
to ensure they do not amplify errors or biases caused by faulty data.
Third, after launch, companies should monitor the use of AI systems in different
contexts and communities, and the findings should be academically rigorous and
publicly available.
Fourth, research on AI discrimination and fairness should not focus solely on
technical analysis but also consider the social implications of AI use.
Fifth, companies developing AI should hire experts from a variety of disciplines,
including social scientists, to provide a broader perspective on the impact of AI.
Finally, ethical codes are necessary to guide and oversee AI development, ensuring
best practices and outcomes.
It is clear that artificial intelligence plays a significant role in decision-making, raising both positive expectations and ethical concerns. The rapid pace of AI advancement makes it
difficult to fully understand its meaning and implications. Education is crucial to
addressing this issue as it can help us understand and demand greater transparency in
AI. Specifically, education can explore how AI affects schools, the challenges it poses to
teaching, and whether a new approach to education is needed in this technological age.
The goal is to create a more fair and equitable AI system, and education provides the best
opportunity for reflection and analysis.
3.1 Education
AI has created a new problem for those who need to hire future employees. HR
departments already know that candidates rely on artificial intelligence to write the
traditional cover letter. It is clear that when hiring a candidate, managers can no longer
rely on the wording of the text. Traditional evaluation methods are no longer as reliable
for reviewing applications. As more and more applicants use AI to write cover letters, what value is there in companies continuing to require them? If someone can artificially enhance the email they send to a hiring manager, the email becomes meaningless. This is precisely the innovation Google built into Gmail. This
application includes the Help Me Write tool to generate emails solely from the user's
description. It is not AI that companies have had to change, but the methods used to
evaluate candidates.
So what did hiring managers do? They started thinking about what changes to make to the way they assess applications. Hiring processes had to change. The goal was to find new forms of assessment that would require candidates to do what AI could not: something specific to people. They concluded that AI is capable of storing, organizing, processing, sorting, and writing data, but that it lacks critical thinking, curiosity, and imagination, and is not a source of free creativity. AI only works with the content it has been fed and trained on by those who design it.
So recruiters decided to create assessments that required reflection, reasoning,
imagination, and creativity from the candidate and reflected their curiosity and concerns.
Employers reserve writing for analyzing the cultural capital and critical and creative
thinking skills of prospective employees. One female engineer had this to say about the
type of questions the company she was applying to asked her during the hiring process:
“In my first interview as a candidate, they surprised me with a question I wasn’t
expecting. They asked me how many tennis balls fit in a city bus. They also wanted to
know how you got that number. I quickly realized that they didn’t care about the final
number of balls, which you might otherwise find on the Internet. What the company
really wanted to know was my ability to reason to get to the final number (even if the
number wasn’t correct).”
In an attempt to foster critical thinking and writing skills, one educator assigned
her students to create a summary based on a newspaper text she had provided. However,
when she began grading the papers, she noticed a discouraging trend: many students
had submitted identical summaries. It soon became clear that these uniform summaries
were the result of the use of new artificial intelligence systems.
This realization left the teacher feeling disheartened, as her intention was to assess
her students’ writing skills, but the exercise had become futile. In light of the teacher’s
situation, it is clear that the transformation brought about by artificial intelligence should
not be limited to the hiring process alone; it should also prompt a broader debate on the
impact it can have on education.
As society continues to embrace AI, educators, policymakers, and stakeholders
must collaborate to establish guidelines and best practices that ensure the effective and
ethical integration of AI systems into classrooms. Only then can we truly harness the
power of artificial intelligence to improve education while preserving its core principles.
The advent of artificial intelligence has revolutionized the hiring process, but its
impact on education is still up for debate. Marta, a dedicated language teacher at a
secondary school, wonders whether the education system should also undergo a
transformation. This dilemma comes to a head during a faculty meeting, where Marta
shares her frustrating experience with her students that day. As technology continues to
advance, educators must face the challenge of finding a balance between leveraging AI to
reap its benefits while preserving the fundamental aspects of education.
This balance is crucial to ensure that students do not merely regurgitate
information generated by AI systems but are actively engaged in the learning process.
Furthermore, it urges us to reflect on the purpose of education itself: is it solely about
acquiring knowledge or should it also cultivate critical thinking, creativity and problem-
solving skills? While some may argue that AI systems can streamline educational
processes such as grading papers, the teacher’s experience serves as a cautionary tale. It
underscores the importance of authentic assessment and the value of human evaluation
in fostering student growth and development. Education is not just about producing
uniform results but also about fostering individual talents and skills. This incident raises
important questions about the role of AI in education. While it has undoubtedly brought
numerous advances to various sectors, its integration into classrooms needs to be
carefully considered. This experience highlights the potential drawbacks of relying too
heavily on AI systems in education and the need for educators to maintain the integrity
of their teaching methods.
A dedicated and knowledgeable History teacher found himself in a thought-
provoking situation during a meeting. As the discussion unfolded, he bravely shared an
experience that had left him both baffled and intrigued. In his genuine quest to foster
critical thinking skills among his students, the professor had assigned them the task of
delving into the intricate details of World War II. However, what he received in return
was a multitude of assignments that, while impeccably accurate and descriptive, lacked
the depth of insight and originality he had expected. Perplexed by the striking similarity
of these assignments, the professor could not help but wonder if his students had resorted
to utilizing innovative artificial intelligence language systems that have become
increasingly prevalent in today’s technological landscape. Despite his best efforts to craft
a meaningful and thought-provoking assignment, the professor could not shake the
feeling that his instructions had somehow missed their intended purpose in translation,
leaving him feeling disillusioned.
The teacher and the professor share the same opinion and it is evident that the two
exercises they proposed did not meet the intended objectives. The reason behind this is
that AI language systems excel at performing these types of tasks flawlessly. These
systems have proven to be incredibly useful tools for summarizing information,
composing written papers, and answering various text-based activities. Interestingly, it
was the students who discovered the capabilities of AI before their own teachers and
quickly turned to this technology to overcome the challenges of their assignments. The
impact of language-based AI systems is not limited to a specific region as they have made
significant strides across the world by successfully passing final degree exams, getting
admitted to prestigious universities, and even completing complex doctoral theses with
excellence.
The above experiences reveal deep flaws and shortcomings in the education
system rather than attributing them solely to the limitations of AI. A parallel can be drawn
with recruitment methods in the corporate world, where the obstacle was not AI itself,
but the strict criteria imposed by companies, which required an overhaul to secure
employment. Similarly, the problem lies not with AI systems in education, but with the
instructions and guidance provided to students. To address this, there is a pressing need
for public policies, schools and teachers to explore innovative approaches to schoolwork,
examinations and teaching methodologies in general. This would mirror the evolution
seen in the recruitment process, where a re-evaluation of traditional practices became
imperative.
If we continue to prioritize memorization, copying, and imitation in education, it
should come as no surprise that AI excels at tasks such as passing college entrance exams
or flawlessly following school instructions. This highlights the urgent need for education
to reevaluate its goals, priorities, teaching methods, and assessment techniques. An
innovative American teacher tried a different approach by incorporating language-based
AI systems into his History class. He assigned his students to use AI technology to draft
a report on the history of printing. However, the students soon discovered that the
intelligent system lacked information about the origins of printing in Europe or China.
The teacher took advantage of this drawback of AI to start a discussion about the
limitations of relying solely on AI for accurate and complete data. He emphasized the
importance of not blindly accepting the results or answers provided by an app, as they
may be incomplete or even false. Additionally, this class provided an opportunity to
explore how biases and omissions within AI systems can reinforce prejudices and
stereotypes.
In light of these developments, traditional methods for assessing students’ writing
skills, such as summarizing instructions or commenting on books or newspaper texts,
will no longer be effective. It will become increasingly difficult to discern whether a text
was written by a student or by an artificial intelligence system. Therefore, as the class on the history of printing demonstrates, educators must devise alternative assessment strategies
that can counter this dilemma.
To improve students’ understanding of artificial intelligence (AI), several school
projects have been proposed that involve students evaluating AI systems themselves.
These projects aim to analyze how AI works, identify biases or omissions in its responses,
and explore the design of the algorithms used.
In one particular project, a teacher instructed his students to use ChatGPT, a
language-based AI system, to compile arguments supporting the establishment of a
factory in a residential area. The students were then tasked with evaluating the
effectiveness of the arguments provided by the AI system and determining whether these
arguments could convincingly influence the residents of the neighborhood. To conclude
the assignment, the students were required to submit an essay that included their
criticisms of the AI system as well as their opinions on the proposed arguments.
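For teachers who want to reproduce this kind of exercise, the generation step can be scripted. The sketch below assumes the OpenAI Python client and an illustrative model name; the prompt mirrors the assignment described above, and the critical work of evaluating the output remains the students’ task:

    # Sketch of the exercise's generation step (assumes: pip install openai,
    # an OPENAI_API_KEY in the environment, and an illustrative model name).

    from openai import OpenAI

    client = OpenAI()

    prompt = ("List the strongest arguments in favor of building a factory "
              "in a residential neighborhood.")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )

    # Students receive the raw output and must then critique it: which
    # arguments persuade, and which are biased, incomplete, or misleading?
    print(response.choices[0].message.content)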
By encouraging students to evaluate the responses and outcomes of AI systems,
this teacher’s activity not only allowed for an analysis of the system’s design and function,
but also emphasized the importance of understanding its limitations. It is crucial to note
that these projects are not intended to wage war on AI or restrict the use of new language
systems. Rather, they highlight the need to take responsibility for our own shortcomings
rather than trying to slow down technological advancements, as it is akin to trying to
block out the sun with our bare hands.
The challenge is not to ban AI, but to surpass its capabilities. We must thoroughly
analyze and evaluate AI, focusing also on what it cannot do. Traditional tasks, such as
requesting summaries or simple feedback from students, will become obsolete, as AI can
easily and accurately complete these tasks. However, this does not mean that specific
content should be discarded in education, as it is important to have a certain amount of
knowledge. Still, knowing only the dates of World War II or the countries involved should not be the sole goal of school assignments. While information is crucial to understanding historical events and the present, it should not be the end goal.
Education must go beyond mere information and foster critical thinking about the
implications, consequences and changes resulting from these developments. While AI
can answer questions about when, where and who, education delves deeper into why,
what it means and the resulting effects. Consequently, there is a need to reconsider how
we assess students. The need to transform current exam formats is not only due to the
emergence of AI; it simply highlights the expiration date of traditional exams.
Teachers, like Rebeca Wang, an AI specialist in the United States, have had to adapt
their grading methods to accommodate students’ use of AI platforms and applications.
This adaptation has often required rapid adjustments in the middle of ongoing courses.
It’s not about robots taking over our jobs; rather, for the past 5,000 years we have grown accustomed to performing tasks that robots can now do. The same applies to
education, where we must avoid having students perform robotic work. We must
encourage critical thinking, imagination, curiosity, and creativity, all of which are
distinct from artificial intelligence.
As mentioned above, if education continues to prioritize memory and specific
questions, artificial intelligence will continue to excel in solving exams and assignments.
This is the current reality, where students simply copy the first answer they find on a
search engine. However, education must go beyond traditional approaches and
emphasize critical thinking and creativity, both in the academic field and in the
professional world.
A study conducted in 18 countries on different continents has revealed alarming statistics: across the participating countries, a mere 2 percent of high school students possess the ability to assess the relevance of information found on the
Internet. Even South Korea, which has the highest percentage of teenagers with this
reflective ability, only reaches 5 percent. This competence is essential for a knowledge-
based society, but it is seriously lacking among students.
A survey conducted in Spain among high school students found that more than
half of teenagers admitted to not knowing how to search for information efficiently on
Google. They also had difficulty identifying reliable information on the web. Similarly,
research conducted by Stanford University among 7,800 high school students in the
United States revealed that a staggering 82 percent of teenagers cannot distinguish
between informative content and sponsored content. They perceive no difference
between a newspaper news story and a corporate-sponsored article written by a bank president. The research concludes that American high school students lack
the skills to differentiate sources on the Internet, making it difficult for them to discern
between advertisements, sponsored articles, and news while browsing the web.
In Argentina, a study involving 2,000 teenagers from across the country found that
only 2 out of 10 students compare different websites to determine their reliability. Only
3 percent choose a website because it belongs to a recognized institution. In other words,
barely 5 percent analyze the source of the information, either by comparing it with other
websites or by verifying the existence of its author. For more than 90 percent of students,
credibility criteria are extremely poor. They trust sources based on factors such as
familiarity, usefulness, well-written content, statistics, or simply because it appears first
on Google. Surprisingly, 2 out of 10 teenagers even admit that they are not sure they can
trust a source, but they still use it. Thus, 20 percent of students use websites and their
content without even considering the authority or reliability of the information provided.
The arguments are equally limited when they have to explain why they would not
trust certain information. In their own words, they say: “because the text has many spelling mistakes”, “because I detect a serious error in what it says”, “because it does not argue well” or “because there are many opinions”. Finally, when they are asked what
information must be included for them to believe it, they respond with definitions that
are also limited and difficult to justify. They explain that “it must be well written”, “it
must have good arguments”, “it must contain a lot of information” or “when you write
it, the question answers exactly what you are looking for”.
Adolescents around the world face significant challenges in differentiating
between relevant and reliable information on the Internet. Only a small number of them
possess the ability to establish criteria for determining reliability, while the majority
struggle to articulate valid reasons for considering certain information credible. There are
arguments that suggest that young people have always had limitations when seeking
information. These limitations include difficulties in asking questions, comparing data,
evaluating the source of information, establishing criteria for credibility and forming
well-reasoned opinions.
It is important to recognize that technology cannot be blamed solely for the lack of
critical thinking skills among its users. These limitations existed long before the Internet
came into existence. However, even though technology is not the direct cause of this
problem, it has exposed and exacerbated the poor critical attitudes of teenagers and even
adults. The question now arises as to why this concern has become a global problem.
How are these current limitations different from those that existed in the 20th century?
The answer to this question is multifaceted. While the Internet is not solely responsible
for this problem, it has certainly intensified it.
Let’s dig into the reasons. The Internet has made access to an endless amount of
information incredibly easy. While this accessibility is undoubtedly beneficial,
international studies reveal that this abundance of knowledge can actually complicate the
search for answers. Constant exposure to information can also hamper decision-making
processes. The abundance of information and its easy accessibility, combined with the speed at which it circulates, poses a risk of “infoxication”: an overwhelming amount of information and noise that can lead to personal and collective confusion. Information
saturation is not a new phenomenon. In the first century, Seneca expressed concern about
the distraction caused by an excess of books. These concerns were further amplified by
the advent of the printing press during the Renaissance. As printing spread, concerns emerged about publishers rushing to print titles without regard for their quality.
However, never before have we experienced the rapid circulation of information like we
experience today. We are constantly bombarded with data, pseudo-data, rumors and
gossip that are passed off as valid information.
Consequently, although the tendency to approach texts with a limited critical mindset predates the Internet, the inundation of adolescents and adults with endless information deepens the confusion. It makes it significantly harder to discern the meaning of a text, judge the appropriateness of its content, identify the author and decipher his or her interests and intentions, compare various points of view, and formulate one's own perspective based on credible and reliable information. This issue, without a doubt, continues to intensify in today's society.
With the advancement of technology and the Internet, the challenges surrounding
the dissemination of information have become more pronounced. In the past, tackling a
topic involved consulting a limited number of authoritative sources. In the 21st century,
however, we are faced with an overwhelming amount of information available on the
web, with varying levels of reliability and credibility. This is further complicated by the
influence of artificial intelligence (AI), which has the ability to process and store large
amounts of data.
While AI language systems can provide answers, they are also susceptible to
receiving false information, leading to potentially incorrect results. The creators of these
systems have even expressed concerns about AI being used to spread misinformation.
Therefore, it is unwise to rely solely on AI systems for accurate information. The
abundance of information also poses a challenge in decision-making. We often believe
that more data will lead to better decisions, but there comes a point when an overload of
information actually hampers our ability to make sound decisions. This is because we
often confuse the available information with the relevant information. Ironically, AI,
which was supposed to simplify decision-making, seems to make it more complicated.
Critical thinking and creativity are uniquely human skills that should be
prioritized, while AI systems can help gather and organize information. However, to use
AI effectively, we must learn to ask the right questions and communicate effectively with
these systems. Unfortunately, traditional education tends to prioritize memorization and
regurgitation of answers, rather than fostering questioning and critical thinking skills.
The ability to ask questions is not only crucial for critical thinking but is also a
fundamental requirement for navigating the Internet. It is important for schools to teach
students how to ask more advanced questions that go beyond the typical who, what,
where, and when. These higher-level questions ask why, what implications something has, what changes it brings, and what consequences it generates. These types of questions demand from students a variety of skills, such as
reflection, curiosity, inquiry, analysis, inference, anticipation, argumentation,
communication, collaboration, evaluation, imagination, creativity, and participation.
They require students to use multiple sources to find answers, analyze different
arguments and perspectives, seek out opposing viewpoints, question their own ideas,
and continue to ask new questions.
Questions also foster teamwork, collaboration, and communication skills.
Preparing for the digital world goes beyond knowing how to use a computer; it involves
knowing how to ask the right questions. The power of a simple question can drive
progress and change the course of history, just like Albert Einstein’s daring question
about traveling on a beam of light. Asking questions is the fundamental skill needed to
navigate the Internet effectively, as it allows people to play an active role in seeking
knowledge, rather than simply memorizing information. Teaching the skill of asking
questions is crucial for both teachers and students to think critically and use artificial
intelligence effectively.
3.2 Artificial intelligence in the digital age
In today’s world, knowing what to ask AI is critical to achieving the desired
outcomes. However, it is also important to understand how AI works and its impact, both
ethical and unethical. Promoting digital literacy and studying AI as a topic allows people
to analyze how algorithms are built and evaluate their influence on daily life. This
knowledge allows people to demand transparency and ethical design in AI systems.
Understanding the presence of bias in technology is not simply a luxury, but an
essential requirement. It is imperative that we delve deeper into the questions
surrounding artificial intelligence and its implications. In the field of education, these
inquiries have become topics of analysis, research, and debate within the classroom.
Some of the fundamental questions we seek to address include: What knowledge does an
artificial intelligence system possess about us as individuals? How does it use this
information and to whom does it disclose it?
We also need to consider the extent of the information that tech companies possess
about us and the decisions they make based on this data. Who gives these tech companies
permission to use the personal information we share online? In addition, we need to
explore the influence of tech companies on our decision-making processes. How do they
shape our choices and direct our actions? Beyond that, we should examine the intentions
and motives that drive tech companies in developing AI systems. How are these
algorithms built? Do they adhere to predetermined ethical guidelines and rules? And
what happens when they deviate from these principles? In addition, we need to examine
how AI discerns what is relevant to each user and how it arrives at decisions. Finally, we
need to consider improving transparency in these uses of AI. Exploring these questions
allows us to gain a more complete understanding of technological bias and its
implications.
It is worth considering whether an AI system can be designed to serve people,
focusing on promoting justice and equality, rather than engaging in practices that exploit
and discriminate against people. UNESCO stresses the importance of equipping people
with adequate knowledge about AI, as this can empower them and bridge the digital
divide, reducing inequalities in access to digital technology caused by the widespread
deployment of AI systems. To achieve this, Member States should actively promote the
acquisition of fundamental skills for AI education, including foundational literacy,
numeracy, digital and coding skills, media and information literacy, critical and creative
thinking skills, collaborative teamwork, effective communication, emotional skills and a
comprehensive understanding of AI ethics.
To equip people with essential skills for the digital age, it is imperative that
education incorporates a comprehensive understanding of digital literacy that goes
beyond superficial knowledge. This entails a critical examination of artificial intelligence,
delving into its implications and effects on our daily lives. Such literacy should not only address these issues but also analyze them in depth. It is therefore crucial to develop an educational framework that goes beyond the common tendency to idealize and be fascinated by artificial intelligence. Instead, this framework should encourage people to
question and challenge it, fostering the development of critical thinking skills necessary
to navigate the complexities of this rapidly evolving field.
Chapter 4
Artificial intelligence and critical thinking
Taking a passive approach, entrusting everyday decisions to artificial intelligence
without questioning or being aware of how they are made, poses significant risks. Just as
a hammer is only effective when we know how to use it correctly, the same principle
applies to all tools, including technological ones such as artificial intelligence. To
maximize the benefits of digital media in education, it is crucial that we understand how
these tools work and use them critically, rather than simply viewing them as instruments.
Simply studying books does not automatically improve our intelligence. In fact, it can
even have the opposite effect if we blindly accept everything they present or only read
materials that align with our pre-existing beliefs.
In a dynamic and highly technological society that prioritizes information,
knowledge, and communication, it is essential to possess fundamental skills such as
analysis, interpretation, evaluation, inference, anticipation, problem-solving, forming
judgments, decision-making, creativity, communication, teamwork, and active
participation. The emergence of artificial intelligence has led us to re-evaluate our
understanding of the digital world. It is crucial for us to critically evaluate technology in
order to fully utilize its potential, recognize its limitations, and consider the ethical
implications of its operations. This critical understanding will enable us to make
informed decisions about when, how, and why we should employ technology.
By developing an insightful attitude and perspective towards technology and
being more thoughtful and selective in its use, we can actively contribute to the
development of a more just society. However, it is evident that most students and society
as a whole simply view technology, including AI, as tools without questioning or
reflecting on their impact on everyday decisions. Students and adults alike often rely on
AI to address their queries or concerns, whether it is to navigate unfamiliar places, choose
a movie, solve problems through virtual assistants, or use facial recognition to unlock
their smartphones. To foster a deeper understanding of AI and its various applications,
education must transcend this naturalized perspective. Schools and public policies
should emphasize teaching students how AI works, its influence on decision-making
processes, and its role in shaping our perception of the world and the construction of
knowledge and meaning.
Core competencies are so named because they are applicable to all areas of
knowledge and are essential for navigating a constantly changing society, work
environment, education system and economy. They enable people to adapt and thrive in
a dynamic world. UNESCO emphasizes the importance of these competencies in building
knowledge in any field, particularly through critical and creative thinking. Critical
thinking involves questioning, challenging and analyzing arguments, as well as problem
solving and decision making.
Reflective and creative skills are crucial for both personal and virtual life domains.
These competencies enable people to learn from others, recognize the value of diverse
opinions, engage in constructive debates and enhance their own civic empowerment.
They are vital for democratic participation and active citizenship. Furthermore, in the
digital landscape, specific skills known as digital competences are required to navigate
and use technology effectively. These competences promote responsible and creative use
of the Internet, encompassing critical thinking, problem solving, communication and
participation.
By possessing transversal digital skills, individuals can critically evaluate the
virtual world and interact with it in a reflective and participatory manner. When it comes
to artificial intelligence, digital skills based on critical thinking enable individuals to
understand its underlying principles, its functioning, its design, and its impact on
decision-making. In short, such competencies allow individuals to develop a deep understanding of AI and its implications.
Artificial intelligence is far from neutral in its operations. It relies heavily on the
private information and data it receives, which inevitably shapes its actions. Furthermore,
AI tends to operate through generalizations, which can often lead to biases when it comes
to classifying and labeling various concepts or individuals. These biases, in turn, can
perpetuate existing inequalities within society.
It is crucial to recognize that AI, despite its advances, is fallible. It is a technology
created by humans and, as such, is not immune to the flaws and limitations of its creators.
We must therefore approach AI with a critical eye, recognizing its potential to both
enlighten and deceive us in our understanding of the world. In addition to its role as
observer and analyzer, AI also seeks to exert influence over our decision-making
processes.
Through its algorithms and recommendations, it strives to shape the decisions we
make. However, we must be cautious as these suggestions can be influenced by the biases
inherent in the AI system itself. One of AI’s fundamental limitations lies in its tendency to reduce the complexities of our universe to a linear and simplified order. This
reductionist approach can inadvertently overlook or oversimplify crucial nuances and
complexities that exist in our world. Consequently, the view of reality that AI presents to
us is not all-encompassing, but rather a particular and sometimes biased perspective.
The teaching of AI revolves around understanding the intricate workings of artificial intelligence and its profound influence on our daily lives. It is crucial to delve into the mechanisms that drive AI and to understand how it operates. Thus, an essential aspect of this education involves learning to assess
whether the results generated by AI systems are influenced by biases or discriminatory
tendencies. We must develop the skill to determine whether AI is responsibly addressing
certain problems or inadvertently perpetuating them. Furthermore, it is imperative to
understand the importance of demanding transparency and ethics in the development of
the algorithms that power AI. By instilling ethical considerations and ensuring
transparency, we can establish a solid foundation for building AI systems, fostering a fair
and equitable technological landscape.
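One simple way such an assessment can begin is sketched below with invented data; real audits use real system outputs and far more careful methodology. The idea is to compare how often a system produces favorable results for different groups:

def positive_rate(decisions, group):
    # Share of people in `group` who received a favorable decision.
    relevant = [d for g, d in decisions if g == group]
    return sum(1 for d in relevant if d == "approved") / len(relevant)

# Hypothetical (group, decision) pairs produced by some automated system.
decisions = [
    ("A", "approved"), ("A", "approved"), ("A", "denied"), ("A", "approved"),
    ("B", "denied"), ("B", "approved"), ("B", "denied"), ("B", "denied"),
]

gap = positive_rate(decisions, "A") - positive_rate(decisions, "B")
print(f"Demographic parity gap: {gap:.2f}")  # 0.50 here; a large gap is a signal to investigate

A gap of this kind does not prove discrimination on its own, but it is precisely the sort of measurable signal that makes demands for transparency concrete.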
The ability to use technologies and AI thoughtfully, and to analyze and navigate complex situations in the digital realm effectively, is an essential skill for individuals seeking to
understand and interact with the increasingly screen-dominated reality of the 21st
century. These skills empower citizens to effectively address and overcome the diverse
problems and challenges of our time, make independent decisions and actively
participate in society. Without a solid foundation of digital skills and knowledge,
technology, including AI, will simply serve as tools for practical purposes, devoid of
deeper understanding or critical engagement.
4.1 Challenges
The creation that emerges from artificial intelligence is always connected to the
data, texts and images that have been used to train the system. On the other hand,
teachers and students have the ability to create freely using their imagination. This
capacity for free creativity, together with critical thinking, is fundamental for education.
In addition, curiosity, which AI lacks, is an essential aspect of learning.
Curiosity allows us to question, investigate, discover, and appreciate new things.
These qualities, such as critical thinking, ethics, empathy, collaboration, and imagination,
are integral parts of education that AI cannot replicate. The question of whether artificial
intelligence will replace teachers and eliminate the need for traditional schooling is a
common concern in educational settings. However, it is important to recognize that AI
cannot replace the essential role of teachers in promoting critical and creative thinking.
AI systems are limited to responding to instructions based on the information they
have been trained with and the data they have been fed. Even so-called “new” texts
generated by AI are actually compilations of existing information found on various
websites. Similarly, AI can create artistic paintings, but only if it has been fed images that
align with the user’s instructions. It is crucial to recognize that technology and AI should
only be used as tools to enhance and complement what makes us human: our creativity,
curiosity, hope, ethics, empathy, determination, and ability to collaborate. Education
must continue to prioritize these human qualities and use technology as a support tool
rather than a replacement for human teachers and the traditional school environment.
This is precisely the aspect that education must prioritize, an aspect that no AI
system can offer. What other unique qualities does the school possess? What are the
current educational challenges in the face of continuous technological advances?
First, it is essential to teach students to always strive to be better than machines. If an AI system can write, we will have to learn to do it better than the machine. Otherwise, it will be artificial intelligence itself that makes decisions for us, the consequences of which we have already witnessed.
Secondly, it is important to teach students how to collaborate effectively with
artificial intelligence. Today, there is a growing emphasis on training students as
“co-pilots.” This concept is often referred to as “synchronized intelligence,” where
people and technology work together to create a better world. AI can provide
memory and accumulate information, but it is through education that individuals
learn to analyze, select, evaluate, and convert this information into knowledge.
Third, education should prioritize teaching students to construct well-reasoned
arguments. It is essential that the arguments students present to support their
perspectives on a given topic are not based solely on an AI system. These
arguments should be supported by diverse evidence and reflect an ethical point of
view. Schools should teach students the importance of comparing sources found
on the Internet, including texts that may not agree with their personal beliefs. This
challenges students to step out of their comfort zones, listen to and value different
viewpoints, and understand that this can lead to new and improved ideas. In a
democratic society, diversity expands the possibilities of making new discoveries
and adapting to change. Education should break the digital bubbles that confine
individuals to their own beliefs, opinions, and ideas, as these bubbles only lead to
fragmentation and polarization, as observed above.
Finally, education must prioritize teaching students how to think. Although this
goal is not new, it has become increasingly urgent today. Education must go
beyond mere memorization of facts and prevent the accumulation of information
from being the primary goal of schooling. AI has a memory superior to that of
humans and can accumulate large amounts of information, often more efficiently
than people. Therefore, education must go beyond information. It is important to
critically analyze the content provided by AI, determining its completeness,
accuracy, reliability or falsity, and to use this data as a basis for
formulating higher-level questions that promote reflection and require the
application of critical thinking skills.
In today’s digital age, it is crucial that education focuses on teaching students the
importance of active participation in the public life of their communities. The
advent of the Internet has opened up countless opportunities for people to
participate, solve problems, and contribute to society. By equipping students with
the skills and knowledge to actively participate, education can empower them to
make a difference and have a tangible impact on their environment. It is essential
that students not only have the opportunity to act, but also understand that their
actions matter and can bring about meaningful change. This aspect of education,
which emphasizes the value of participation, is something that no artificial
intelligence system can replicate or promote. Furthermore, education should also
teach students the importance of becoming content creators in this digital age.
Since the Internet provides platforms for people to share their thoughts, ideas, and
perspectives, it is crucial that education fosters the ability to make one’s voice
heard. The Internet has democratized the act of creation, allowing anyone to
become visible and their ideas to reach a wider audience.
By teaching students to be content creators, education can empower them to share
their unique perspectives, contribute to ongoing conversations, and shape the
world around them. Students who engage in digital content creation based on
their personal interests and concerns have an incredible opportunity to interact
with an unlimited number of people. Additionally, they can develop their skills in
collaborating with people they may not even know, exposing themselves to a wide
range of ideas and perspectives. This exposure to diverse information enables
them to make better-informed decisions. It is crucial that education focuses on
teaching students to critically analyze and evaluate content in the digital world, as
well as encouraging them to become innovative content producers themselves.
This is a distinct and significant challenge that the education system must face in
today’s society. It is imperative that education defines its unique value and purpose. By
accepting this challenge and re-evaluating its approach, education can blaze a trail that
artificial intelligence will never be able to replicate or surpass.
4.2 Classroom approach to AI
Make a list of situations in which a doubt, question, problem or concern you had
was resolved by a machine. Have you ever wondered how it does it? Why do you think so few people stop to think about how this works?
Think about the risks posed by unethical use of AI. Have you experienced any of
them? Do you know anyone who has been affected? Can you share an example?
“It will become increasingly difficult for people to make decisions for themselves
as algorithms make decisions for us.” Analyze and discuss this concept. Do you
agree with this statement? Why? Can you give some examples? Are you
concerned?
Divide into two groups. One group should research and list what the risks of facial
recognition systems are. The other group should research and list the benefits of
this system. Present each group's results to the class. Then decide which argument
is more persuasive. Finally, explore how facial recognition is used around the world. Explain and justify: do you agree with these uses?
Consider how someone might say, "I know a lot about you." Can you give an
example of this personal knowledge that you have felt, lived, or experienced while
browsing the web?
Divide into two groups. One group is in charge of defining all the benefits,
facilities, contributions and advances that artificial intelligence brings to everyday
life. Another group defines the risks and concerns that AI has created around the
world. Present your arguments to the entire class. What conclusion do you come to? After analyzing the benefits and risks of AI, what do you think about it?
Think about and design together the content and focus of an online campaign, including suggestions for the ethical design and use of artificial intelligence.
Use language-based AI programs to define a position on issues of interest or
concern. Ask the system to justify its position. Create a list of arguments that the
AI will give to justify its position. Evaluate the effectiveness of those arguments.
Did it convince you? Why? Discuss and write a conclusion.
In groups, develop a discussion on a topic of interest or concern. Then, ask the
language-based AI system to generate a rebuttal to your opinion. Evaluate
whether the system is correct. Rework your argument, incorporating whatever you find relevant from what the AI has generated.
Describe your recent internet searches. What would they say about your interests, tastes, and preferences? What would someone who knows your web searches
say about you? Do you think they define your identity?
Ask an AI system for advice on a topic that concerns or interests you. Analyze the
advice you receive. Was it based on your personal information? Where do you
think it got it from? Are you satisfied with the advice you received? Why?
Confirmation bias is the tendency of people to search for and select only information on the Internet that confirms what they already think. What are the risks of people only choosing content that supports their ideas? Do you think this mechanism makes it easier to believe fake news? Think about how you look for information about a topic on the Internet: when you search for ideas or opinions, do you read ideas that do not match your own? Why? (A toy sketch of this filtering mechanism follows these activities.)
If you use social media, check your profile to see whether you only receive information that matches your interests and opinions. If so, what do you think is the reason? How can you reverse this?
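For teachers who want to make the "digital bubble" mechanism tangible, here is a deliberately toy Python sketch; the articles and interests are invented for classroom use, and no real platform works this simply. It shows how a recommendation rule that only matches prior interests hides everything else:

articles = [
    {"title": "Why policy X works", "stance": "pro_x"},
    {"title": "Why policy X fails", "stance": "anti_x"},
    {"title": "Policy X: the evidence is mixed", "stance": "neutral"},
]

def naive_feed(user_interests, catalog):
    # Recommend only items matching what the user already engages with.
    return [a for a in catalog if a["stance"] in user_interests]

# A user who has only ever clicked on pro-X content...
for article in naive_feed({"pro_x"}, articles):
    print(article["title"])  # only "Why policy X works" is ever shown

The opposing and neutral pieces exist in the catalog, but this selection rule guarantees the reader never encounters them: a bubble by construction.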
4.3 Restart
In conclusion, as artificial intelligence continues to advance and integrate into our
lives, it is crucial that we recognize the need for a new education. This education should
prioritize the development of skills that AI cannot replicate, such as critical thinking,
creativity, empathy, and ethical decision-making. By doing so, we can ensure that people
are prepared to navigate the complexities of an AI-driven world and make informed,
responsible decisions. While it is essential to recognize the presence of artificial
intelligence in our lives, education cannot afford to ignore its influence.
AI has already started making decisions that impact our lives, shape our behaviors
and influence our perspectives on the world. Therefore, it becomes imperative for
education to adapt and respond to this technological advancement in order to equip
people with the necessary skills to navigate a future in which AI is omnipresent.
In the face of an increasingly complex, dynamic and ever-changing reality, we have
emphasized the importance of acquiring the necessary skills. While it is essential to
become proficient in using digital media tools such as Word, Excel and search engines, it
is equally crucial to develop critical thinking skills, a creative mindset and an
understanding of technology and the internet. It is vital that we learn to reflect, think and
prioritize as these fundamental skills cannot be acquired at a later stage if they are
neglected initially.
In this extensive discussion on artificial intelligence, we have delved into its
meaning and scope. We have thoroughly examined the ethical and non-ethical aspects of
its operation and explored its impact on our daily lives as well as its influence on our
decision-making process. In addition, we have also discussed the challenges that artificial
intelligence poses for education. The school system has an indisputable responsibility to
promote these skills, which are beyond the capabilities of artificial intelligence. Education
must rise to the enormous challenge of focusing on areas that AI will never be able to
fully address. This includes strengthening reflective and creative skills, fostering critical thinking, encouraging imagination, cultivating curiosity, promoting effective teamwork and empathy, instilling ethical values, and encouraging communication and active participation.
Artificial Intelligence (AI) plays an important role in the world we live in,
especially when it comes to education. However, its incorporation should not be limited
to being just a tool. Rather, it should be studied, analyzed, and debated. Reflection and
questioning are necessary to understand its social and human impact, which is so
immense that it requires a guiding philosophy. We need to consider what we should do
with these technologies and how they will affect us. These questions go beyond mere
technical answers and touch on values and visions of what is good.
Education must prioritize digital literacy, not only for students but also for
teachers. It should empower them to identify, understand and respond to new problems
arising in the digital world, including those presented by artificial intelligence. The
question arises whether AI can replace schools, teachers or teaching, but the answer
remains consistent: no, as long as education does not rely on outdated methodologies and
simplified approaches.
Education must provide added value, avoiding an excessive emphasis on
memorization, copying and accumulation. It should not aim to compete with AI
capabilities, but rather analyze, debate and complement them. The digital divide is
defined by the capacities that individuals have or lack to identify, confront and respond
to new problems and questions that arise from the use of the Internet. It is not simply a
matter of using the tool, but of understanding it. Education faces the challenge of thinking
about technologies rather than simply using them instrumentally. As UNESCO
emphasizes, digital literacy empowers students in all areas of life and helps them achieve
personal, social, occupational and educational goals. It is a basic right in our digital world
and promotes the social inclusion of all nations.
Education should therefore focus on developing digital citizens who understand
how the digital environment works and the principles that govern it. They need to
analyze the place and role of technologies in society, assess their impact on daily life and
understand how they contribute to the construction of knowledge. These citizens should
also know how to use these technologies effectively for participation. They should have
the ability to navigate complex digital contexts and understand their implications in
various aspects of life, such as social, economic, political, educational and work-related.
Conclusions
In the modern digital age, people are willing to give up their personal data in
exchange for various forms of gratification. Whether it is to improve physical health or
maintain regular contact with loved ones, people often choose to ignore the potential risks
of revealing personal information. The popularity of social media platforms is another
example of this phenomenon, where people willingly reveal sensitive data such as
photographs, locations, and personal information in exchange for social recognition and
acceptance in the form of likes and comments.
This trend is further reinforced by the staggering sales of smart speakers, which
totalled 147 million units in 2019. What is worrying, however, is the fact that a significant
number of buyers of these devices are unaware of the extent to which their conversations
are recorded or how this information is used. The collection of data on people’s
online behavior and personality through social media and search engines is an important
and ongoing process. A simple connection to a digital device allows these platforms to
gather information about users, including their interests, personality and desires.
Unrestricted access to AI not only opens up a wealth of information but also turns the user into a source of data; when it comes to the ethics of its use, however, there is still a long way to go. This perception has been highlighted by the rise of social media and free apps, where it has become clear that when something is offered for free, it often means that we are being taken advantage of. In exchange for the services these companies provide, we unwittingly contribute to their profits by giving them our attention, which can be sold to advertisers, and our personal data, which feeds their algorithms. The same pattern is now being
repeated with AI bots, albeit on a larger scale and with new complexity. Although many
users do not clearly understand how tech companies use their personal data, they cannot
ignore this mechanism. It is important to recognize that technology cannot be blamed
solely for the lack of critical thinking skills among its users. These limitations existed long before the Internet. However, even though technology is not the direct cause of this problem, it has exposed and exacerbated the weak critical attitudes of teenagers and even adults.
The question now remains as to why this concern has become a global problem.
How are these current limitations different from those that existed in the 20th century?
The answer to this question is multifaceted; while unsupervised access is not solely
responsible for this problem, it has certainly intensified it.
Literature
Anderson, C.W., Bell, E. & Shirky, C. (2014). Post Industrial Journalism: Adapting
to the Present. New York: Columbia University Libraries.
Arbeláez-Campillo, D. F., Villasmil Espinoza, J. J., & Rojas-Bahamón, M. J. (2021).
Artificial intelligence and the human condition: Opposing entities or complementary
forces? Journal of Social Sciences (Ve), XXVII(2), 502-513.
Beckett, C. (2019). New Powers, New Responsibilities: A Global Survey of
Journalism and Artificial Intelligence.
Bostrom, N. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence. Cambridge University Press.
Carlson, M. (2015). The Robotic Reporter: Automated journalism and the
redefinition of labor, compositional forms, and journalistic authority. Digital Journalism,
3(3), 416-431.
CSR Observatory. (n.d.). Corporate social responsibility (CSR).
Diakopoulos, N. (2015). Algorithmic Accountability: Journalistic investigation of
computational power structures. Digital Journalism, 3(3), 398-415.
Firat, F. (2019). Robot journalism. The International Encyclopedia of Journalism Studies, 1-5.
Fromm, E. (2003). Ethics and psychoanalysis. Fondo de Cultura Económica.
Fukuyama, F. (2002). The end of man: consequences of the biotechnological
revolution. Madrid: Zeta.
Garrafa, V. (2009). Epistemology of bioethics, Latin American approach.
Colombian Journal of Bioethics, 4(1), 277-296.
Glahn, H. (1970). Computer worded forecasts. Bulletin of the American
Meteorological Society, 51(12), 1126-1132.
Graefe, A. (2016). Guide to Automated Journalism. Columbia University Libraries.
Hottois, G. (1991). The bioethical paradigm. An ethics for technoscience.
Barcelona: Anthropos.
Hume, D. (1748). An Enquiry Concerning Human Understanding.
Kaku, M. (2011). The physics of the future. Bogotá: Debate.
Kaku, M. (2014). The future of our mind. Bogotá: Debate.
Kurzweil, R. (1992). The age of intelligent machines. Cambridge, MA: MIT Press.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology (C. García, Trans.). Berlin: Lola Books.
Lassi, A. (2022). Ethical implications of artificial intelligence. Technologies and
news production. InMediaciones de la Comunicación, 17(2), 153-169.
Linares, JE (2008). Ethics and the technological world. Mexico: FCE.
Martínez, M. (2009). The new science: Its challenge, logic and method. Trillas.
McCulloch, W.S. & Pitts, W. (1943). A logical calculus of the ideas immanent in
nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133.
Morduchowicz, R. (2023). Artificial intelligence: Do we need a new education?
UNESCO: France.
Nath, R., & Sahu, V. (2020). The problem of machine ethics in artificial intelligence. AI & Society, 35(1), 103-111.
Pérez Orozco, B., & Rentería Rodríguez, M. (2018). Artificial Intelligence.
INCyTU, 12.
Ríos, S. (1976). Decision analysis. ICE Editions.
Savater, F. (1999). Ethics for Amador. Ariel.
Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2019). Green AI. Allen Institute for AI; Carnegie Mellon University; University of Washington.
Simon, H. (1960). The new science of management decision. Harper & Brothers.
UNESCO. (2022). Recommendation on the ethics of artificial intelligence.
UNESCO: France.
Verdegay, J.L., Lamata, M.T., Pelta, D., & Cruz, C. (2021). Artificial intelligence and
decision problems: the need for an ethical context. Suma de Negocios, 12(27), 104-114.
Villalba Gómez, JA (2016). Emerging bioethical problems of artificial intelligence.
Diversitas: Perspectives in Psychology, 12(1), 137-147.
Villasmil, J. (2020). The fragility of human civilizations. Political Issues, 37(64), 10-
14.
This edition of “Artificial Intelligence in the Ethics of Education” was
completed in the city of Colonia del Sacramento in September 2024.