– May 24, 2018 –
A Computer Engineering master’s student at the Escola Politécnica da Universidade de São Paulo answers questions about Artificial Intelligence.
What are the major benefits of using AI in contemporary society?
If we look up the word “intelligence” in the dictionary, we will see that one of its definitions is “the ability to understand and solve problems or conflicts, and to adapt to new situations.” The ability to reason and adapt is what makes an individual more able to stand out (or even survive) in environments where there is any type of competition.
For companies, the use of artificial intelligence in the creation of new products and services will allow customers’ problems or needs to be solved in more personalized ways, according to the circumstances of each moment. This “smart” adaptive capability is what will make certain products and services more competitive than their rivals’, allowing them to stay longer on the market and generating more profit for those who created them. For businesses, therefore, the most significant benefit is without a doubt greater profit, made possible by the increased competitiveness that artificial intelligence confers.
For society, the benefits will come in the form of greater convenience, simplification, and cost reduction, but I believe the most significant gains will be those obtained when artificial intelligence surpasses human cognitive abilities in performing complex tasks, as IBM’s Watson computer already does by processing a gigantic amount of data to diagnose different forms of cancer and recommend the most appropriate treatments for them.
But Yuval Harari, the author of the book “Sapiens: A Brief History of Humankind,” has warned that behind the wonders of technological advancement lies the threat of creating highly unequal societies.
If computers begin to surpass human beings not only cognitively but also physically, why would companies hire people? If computers provide better diagnoses than doctors, and robots can perform extremely delicate surgeries, why would we need so many doctors?
Medicine is a profession that requires great cognitive ability. So, if artificial intelligence can pose a threat to doctors’ jobs, what about simpler jobs? They will be extinguished, without a doubt.
What rules should be adopted or established so that AI does not threaten thousands of jobs?
The creation of any new technology will always threaten jobs because companies will always need to be more competitive; otherwise, they will cease to exist. What rules should have been adopted so that the steam engine and the mechanization of production would not threaten thousands of jobs in the 18th century? You see, after two hundred years, even though the technologies have changed, this question remains practically the same.
The use of artificial intelligence will undoubtedly extinguish several types of jobs, in addition to those currently being extinguished. According to the Boston Consulting Group, by 2025 a quarter of the jobs that exist today will be replaced by software or robots, and according to a study from the University of Oxford, 35% of existing jobs are at risk of being automated.
Examples of human replacement by “smart” machines are well known: the automotive company Tesla, which has a highly automated factory, produces not only cars but also trucks with autonomous navigation capabilities; Uber is replacing its drivers with autonomous vehicles; Amazon is investing in drone delivery services; Boston Dynamics builds robots that might be used by armies; and recent significant achievements in space exploration have been reached thanks to advances in unmanned exploration vehicles such as NASA’s Spirit and Opportunity rovers.
It is not possible to stop technological development because it is necessary for companies’ survival. The threat does not come from the use of the steam engine or of artificial intelligence, but from the purpose to be achieved with these technologies: the need to make more profit, without which companies cease to exist.
Companies need to generate profits to continue existing; to exist, they must be more competitive; and, to be more competitive, they must also stimulate increased consumption of their products. The higher the consumption, the higher the profits; the greater the competition, the greater the need to invest in innovation, and the greater the need to stimulate consumption once again. It is a vicious circle that might result in the collapse of society as we know it.
The threat to jobs is related to the world’s hegemonic economic system, capitalism. Paradoxically, although the capitalist system, with competition and the free market, stimulates the creation of better products and services, spurs economic growth, and increases not only productivity but also prosperity, it also promotes greater social inequality, greater consumption of finite resources, unemployment, and economic instability.
The discussion about what threatens jobs in the future is much more complex and should not be limited to the use of artificial intelligence alone.
So how is it possible to use AI in a responsible and controlled way?
When the Wright brothers and Santos Dumont invented the airplane, they were not aiming at the production of fighter planes. When Enrico Fermi discovered that a massive amount of energy could be released by bombarding atomic nuclei with neutrons, he was not aiming at the creation of the atomic bomb.
In my opinion, the use of any technology or any knowledge for a particular purpose must be related to ethics, which is the set of values and principles that guide the conduct of people in society.
In November 2017, the Global Network of Internet and Society Research Centers, of which CEST is a member, organized an event in Rio de Janeiro to discuss how to use AI responsibly to promote social inclusion. Several researchers from major universities such as Harvard, MIT, and the Universidade de São Paulo, as well as representatives from companies such as Microsoft, IBM, and Google, attended the event. Many researchers and practitioners around the world are thus concerned with using AI to promote social justice, increase access to health systems, improve education, and so on. These same researchers and practitioners are also concerned about the risks of adopting AI, such as the future of work, the emergence of new power structures, and the increase of social inequalities affecting rural communities, women, youth, and ethnic or racial groups, among others.
I believe that the path to using AI responsibly is one that will be created not only by engineers, computer scientists, and mathematicians. The responsible use of AI will arise from the joint work of interdisciplinary teams that consider ethical and sociological aspects alongside technical, economic, and financial issues.
What are the biggest challenges for the scholars in this field?
I do not belong to the group of scholars who are exclusively dedicated to the field of artificial intelligence, but, talking to professionals in the field, I have noticed two important challenges ahead. The first concerns removing bias from the algorithms used in artificial intelligence systems, and the second concerns the autonomous communication and decision-making capabilities of these systems as they begin to interact with each other more frequently.
An algorithm is a sequence of instructions used by a machine to perform specific tasks for a particular purpose. Artificial intelligence algorithms can be created by engineers, computer scientists, mathematicians, physicists, sociologists, or any other professionals who know how to create one. It turns out that, during the creation of an algorithm, part of its creators’ personality can be incorporated into that set of instructions, even unintentionally. A standing concern, therefore, is to understand how to create algorithms that do not carry their programmers’ biases; this is particularly important when an artificial intelligence algorithm is deciding, for example, how to allocate ethnic groups in a given region, whether to grant credit to a person, or how to properly select a candidate for a particular job. What we do not want is for algorithms to make arbitrary decisions, but this is very difficult to tackle because researchers cannot tell precisely how an autonomous machine makes certain choices. That is, the challenge is to prevent artificial intelligence systems from being black boxes whose inner workings, and the reasons for their decisions, we do not precisely understand.
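As a toy sketch of how such bias can slip in (all feature names, thresholds, and penalties here are invented for illustration, not taken from any real system), consider a simple credit-scoring rule whose author, perhaps unintentionally, penalizes certain neighborhoods:

```python
# Hypothetical credit-scoring algorithm illustrating hidden bias.
# Every rule and number below is invented for this example.

def score_applicant(income, years_employed, zip_code):
    """Return a credit score between 0 and 100."""
    score = 0.0
    score += min(income / 1000, 50)       # up to 50 points for income
    score += min(years_employed * 5, 30)  # up to 30 points for job stability
    # Hidden bias: penalizing a postal-code prefix looks like a neutral
    # rule, but the location can act as a proxy for ethnicity or class.
    if zip_code.startswith("9"):
        score -= 20
    return max(0.0, min(100.0, score))

# Two applicants with identical finances receive different scores
# purely because of where they live.
print(score_applicant(40000, 5, "10001"))  # 65.0
print(score_applicant(40000, 5, "90001"))  # 45.0
```

The point of the sketch is that the biased rule is indistinguishable, from the outside, from any other line of the program; in a learned model with millions of parameters the equivalent effect exists but cannot be pointed to at all, which is exactly the black-box problem described above.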
In 2016, the case of a bot developed to chat on social networks, but which ended up losing control, became famous on the internet. After some time interacting autonomously with social network users, the system began to utter racist, homophobic, and conservative speeches. It is difficult to predict how some “smart” systems may behave, so it is still necessary to supervise and follow the development and evolution of such systems.
Regarding the second challenge, imagine a situation in which different, entirely autonomous artificial intelligence systems have to make a joint decision. If it is not known exactly how these systems make their choices and, for example, they belong to the armed forces, we could run the risk of triggering a war because of a wrong decision made by those systems; this is a real concern, not science fiction. Should autonomous cars be programmed to kill or to save lives? It is interesting to note that autonomous cars need to be programmed to handle ethical decisions that even human beings sometimes do not know how to resolve. In the event of an accident, should the car protect its passengers at all costs, or minimize the number of casualties even if it has to sacrifice its occupants? Now imagine tens or hundreds of artificial intelligence systems making joint decisions in a completely autonomous way, controlling the financial market, air traffic, missiles, satellites, and so on, and having to deal with hundreds of ethical issues at the same time; not to mention that ethical standards vary from country to country. Imagine the complexity required to control this autonomous system of systems.
What threats do you believe AI can bring to human life?
Artificial intelligence is just a tool. The choice to make it a tool that brings benefits or hazards is in the hands of those who intend to use it. What I hope is that the new generation of professionals will be committed to making good ethical choices, so that an excellent tool is used only for good and not the other way around.