Artificial intelligence (AI) is an advanced form of data analytics.
We typically associate AI with science fiction, like I, Robot, Blade Runner, or HAL 9000 in 2001: A Space Odyssey. In science fiction, artificial intelligence exists independently of humanity and begins to “think for itself” in ways that typically end badly for people. This kind of fiction is useful for envisioning future scenarios, but it can actually inhibit our ability to understand what artificial intelligence is today.
A good place to start when trying to understand AI is machine learning. Machine learning is a subset of AI in which a computer learns and improves without additional human programming. As humans, we constantly update our perspective when we get new information; with better information, we can make better decisions. A traditional computer program, in contrast, does not learn. We tend to think of computers as machines that do exactly what humans tell them to do, and that describes most of the history of computer programming. Machine learning is an advance beyond this paradigm: with machine learning, a program can learn from the data it receives and adjust its algorithm to improve its outcomes.
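To make that contrast concrete, here is a minimal sketch in Python. The spam-filtering task, the exclamation-mark feature, and the averaging rule are all invented for illustration; the point is only that a hand-written rule stays fixed forever, while a learned rule sets its own threshold from labeled examples and shifts as the data changes.

```python
# A fixed rule written by a programmer: it never changes, no matter
# what messages it sees.
def fixed_rule(num_exclamation_marks):
    return num_exclamation_marks > 3  # flag as spam

# A "learning" rule: it picks its threshold from labeled examples,
# so new data can shift its behavior without anyone rewriting the code.
def learn_threshold(examples):
    # examples: list of (num_exclamation_marks, is_spam) pairs
    spam = [n for n, is_spam in examples if is_spam]
    ham = [n for n, is_spam in examples if not is_spam]
    # Place the threshold halfway between the average spam and non-spam counts.
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

# Made-up training data: (exclamation marks in message, was it spam?)
training_data = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]
threshold = learn_threshold(training_data)

def learned_rule(num_exclamation_marks):
    return num_exclamation_marks > threshold

print(fixed_rule(4))    # True, always, regardless of any data
print(learned_rule(4))  # depends on the examples the program has seen
```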
Note that machine learning requires data. Data is to a machine what experience is to a human child. As children experience more of the world, they begin to understand how it works and develop skills for interacting with it. Machines learn from data because data is information produced by the world; if a machine can begin to understand the processes that produced that data, it can learn about the world. This is why big data is essential to machine learning: with more data, machines learn more and become better at fine-tuning their view of the world.
Let’s come back to artificial intelligence. Artificial intelligence uses machine learning and other methods to produce a computer that simulates human-like intelligence. AI is not just learning but also doing: it makes decisions in ways we associate more with humans than with computers. We tend to think of computers as good at simple rule-based operations like “if greater than x, then y” and “if less than x, then z.” Once we read the code driving the decision, there are no surprises about the outcomes. Artificial intelligence is different because it is not as predictable: an algorithm that processes massive amounts of data and develops its own way of making comparisons can produce outcomes its programmers never spelled out.
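One way to picture that difference is with scikit-learn’s DecisionTreeClassifier, sketched below on an invented equipment-failure dataset (the feature names and numbers are made up for illustration). The “if greater than x” comparisons it prints are chosen by the learning algorithm from the data, not typed in by a programmer.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data (invented): [hours_of_use, error_count] for devices,
# labeled 1 if the device later failed and 0 if it did not.
X = [[10, 0], [15, 1], [200, 8], [250, 12], [30, 2], [300, 15]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The comparison thresholds below were derived from the data by the
# algorithm; no human wrote these particular if/then rules.
print(export_text(model, feature_names=["hours_of_use", "error_count"]))
print(model.predict([[120, 5]]))  # a prediction no one hand-coded a rule for
```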
We tend to associate artificial intelligence with robots walking around on two legs, again because that is the common representation in science fiction. But artificial intelligence is being used in many business applications today that look very different. Self-driving cars are one example: the car learns from a complex environment and then drives in ways that simulate human behavior. Financial trading firms are trying to build artificial intelligence into their trading algorithms so that the computer can learn from the market’s past behavior and develop trading strategies that might not have occurred to a human programmer.
In these ways, artificial intelligence is designed to go beyond human intelligence. If we can explicitly program a computer to do something, we don’t need artificial intelligence. But if we want the computer to operate independently of a human operator and make decisions that could not have been foreseen by a human programmer, then we need artificial intelligence. This is why science fiction extrapolates from the trend: as humans create ever-smarter computers, the potential for computers to extend their intelligence beyond our own is certainly there. Will artificial intelligence ever become self-aware? Will there be a soul in the machine? These questions point to an unknown future.