Should we be scared of artificial intelligence? For me, this is a simple question with an even simpler, two-letter answer: no.
But not everyone agrees. Many people, including the late physicist Stephen Hawking, have raised concerns that the rise of powerful AI systems could spell the end of humanity. Clearly, your view on whether AI will take over the world depends on whether you believe it can develop intelligent behavior surpassing that of humans, something referred to as “superintelligence.” Let’s take a look at how likely this is, and why there is so much concern about the future of AI.
People tend to be scared of what they don’t understand. Racism, homophobia and other sources of discrimination are often rooted in fear. So it’s no surprise that the same applies to new technologies, which often come shrouded in a certain mystery. Some technological achievements seem almost unrealistic, clearly exceeding expectations and, in some cases, human performance. But let’s demystify the most popular AI techniques, known collectively as “machine learning.”
These techniques allow a machine to learn a task without being programmed with explicit instructions. This may sound spooky, but the truth is that it all comes down to some rather mundane statistics. The machine, which is a program or, rather, an algorithm, is designed to discover relationships within the data it is given. There are many different methods that allow us to achieve this.
For example, we can present images of handwritten letters (a-z) to the machine, one by one, and ask it to tell us which letter we are showing each time.
We have already provided the possible answers: it can only be one of (a-z). At the beginning the machine says a letter at random, and we correct it by providing the right answer. We have also programmed the machine to reconfigure itself so that, next time it is presented with the same letter, it is more likely to give the correct answer. As a consequence, the machine improves its performance over time and “learns” to recognize the alphabet.
In essence, we have programmed the machine to exploit common relationships in the data in order to achieve the specific task. For instance, all versions of “a” look structurally similar to each other but different from “b,” and the algorithm can exploit this. Interestingly, after the training phase, the machine can apply the knowledge it has obtained to new letter samples, for example ones written by a person whose handwriting it has never seen before. Throughout, though, it is we who give the AI the answers.
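To make this concrete, here is a minimal sketch of that train-and-correct loop. As an assumption on my part, it uses scikit-learn’s built-in handwritten digits dataset (0-9) as a stand-in for letters (a-z), and logistic regression as one of the many possible methods; neither choice is prescribed by the example above.

```python
# A minimal sketch of the "show an image, correct the guess" loop described
# above, using scikit-learn's handwritten digits (0-9) as a stand-in for
# letters a-z. The classifier is an illustrative choice, not the only option.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images with labels 0-9

# Hold out some samples to play the role of "handwriting never seen before".
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Training: the model repeatedly adjusts itself so that its answers better
# match the correct labels we supply -- the "correction" step in the text.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# After training, it can label samples it has never encountered.
print("accuracy on unseen samples:", model.score(X_test, y_test))
```

The final line checks exactly the point made above: after training, the model can label handwriting it never saw during the training phase.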
Humans, however, are already good at reading. Perhaps a more interesting example is Google DeepMind’s artificial Go player, which has surpassed every human player of the game. It clearly learns in a way different from humans: it plays against itself a number of games that no human could play in a lifetime. It has been specifically instructed to win, and told that the actions it takes determine whether it wins or not.
It has also been told the rules of the game. By playing again and again, it can discover the best action in every situation, inventing moves that no human has ever played before.
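As a rough illustration of what “learning by playing itself” can look like, here is a toy self-play sketch: simple tabular reinforcement learning applied to the game of Nim (players alternately take 1-3 stones; whoever takes the last stone wins). The game, the reward scheme and all parameters are my own illustrative choices; AlphaGo itself combines deep neural networks with tree search at a vastly greater scale.

```python
# Toy "learning by playing itself": tabular reinforcement learning on Nim.
# The agent is told only the rules (legal moves) and what counts as winning.
import random
from collections import defaultdict

Q = defaultdict(float)   # Q[(stones_left, take)] -> estimated value of a move
ALPHA, EPSILON = 0.1, 0.2  # learning rate and exploration rate (arbitrary)

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:                    # sometimes explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])  # otherwise exploit

for _ in range(50_000):                              # many self-play games
    stones, history = random.randint(4, 21), []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone won: credit alternates +1 / -1
    # backwards through the game, since the two "players" share one table.
    reward = 1.0
    for state_action in reversed(history):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward

# With enough games, the policy tends toward the known strategy for Nim:
# leave your opponent a multiple of four stones (where possible).
for stones in range(1, 12):
    best = max((m for m in (1, 2, 3) if m <= stones),
               key=lambda m: Q[(stones, m)])
    print(f"{stones} stones -> take {best}")
```

As in the Go example, nobody tells the program which moves are good; over many games against itself it tends to rediscover the classic winning strategy on its own.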
Toddlers versus robots

Now, does this make the Go AI more intelligent than a human player? Certainly not. AI is highly specialized in specific tasks and does not display the versatility that humans do. Over the years of their lives, humans develop an understanding of the world that no AI has achieved or seems likely to achieve any time soon. The fact that AI is dubbed “intelligent” ultimately comes down to the fact that it can learn.
But even when it comes to learning, it is no match for humans. In fact, toddlers can learn by just watching somebody solve a problem once.
An AI, on the other hand, needs tons of data and loads of attempts to succeed at very specific problems, and it struggles to generalize its knowledge to tasks very different from those it was trained on. So while humans develop amazing intelligence rapidly in the first few years of life, the key concepts behind machine learning are not so different from what they were a decade or two ago.
The success of modern AI is due less to breakthroughs in new techniques and more to the vast amount of data and computational power now available. Importantly, though, even an infinite amount of data won’t give AI human-like intelligence: we first need to make significant progress on developing artificial “general intelligence” techniques.
Some approaches to doing this involve building a computer model of the human brain, which we are not even close to achieving. Ultimately, just because an AI can learn, it doesn’t follow that it can suddenly learn all aspects of human intelligence and outsmart us. There is no simple definition of what human intelligence even is, and we certainly have little idea of how exactly intelligence emerges in the brain. But even if we could work it out and then create an AI that could learn to become more intelligent, that doesn’t necessarily mean it would be more successful. Personally, I am more concerned by how humans use AI.
Machine-learning algorithms are often thought of as black boxes, and little effort is made to pinpoint the specifics of the solution our algorithms have found. This is an important and frequently neglected aspect, as we are often obsessed with performance and less with understanding. Understanding the solutions that these systems have discovered is important, because only then can we evaluate whether they are correct or desirable solutions.
If, for instance, we train our system in the wrong way, we can end up with a machine that has learned relationships that do not hold in general. Say, for example, we want to design a machine to evaluate the potential of prospective engineering students. Probably a terrible idea, but let’s follow the thought for the sake of the argument.
Traditionally, this is a male-dominated discipline, which means that training samples are likely to come from previous students who were male. If we don’t make sure, for instance, that the training data are balanced, the machine might conclude that engineering students are male and wrongly apply that conclusion to future decisions.
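This pitfall is easy to reproduce on synthetic data. The sketch below invents a historical admissions dataset in which past decisions favored men, then trains a classifier on it; the scenario, numbers and features are fabricated purely for illustration.

```python
# A toy illustration of the biased-training-data pitfall described above.
# Everything here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 1: an aptitude score that *should* be the only thing that matters.
score = rng.normal(0, 1, n)
# Feature 2: gender (1 = male). The historical intake is 90% male.
male = (rng.random(n) < 0.9).astype(float)

# Historical "admitted" labels: driven by score, but past gatekeeping also
# favored men -- so gender leaks into the label itself.
admitted = (score + 1.5 * male + rng.normal(0, 0.5, n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([score, male]), admitted)

# Two candidates with identical aptitude who differ only in gender:
woman, man = [[1.0, 0.0]], [[1.0, 1.0]]
print("P(admit | woman):", model.predict_proba(woman)[0, 1])
print("P(admit | man):  ", model.predict_proba(man)[0, 1])
```

The two candidates are identical in every respect except gender, yet the model scores them differently: it has faithfully learned the historical bias rather than anything about true aptitude.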
Machine learning and artificial intelligence are tools. Like everything else, they can be used in a right or a wrong way. It is the way they are used that should concern us, not the methods themselves. Human greed and human unintelligence scare me far more than artificial intelligence.