Microsoft's Xiaoice is an artificial intelligence software program designed to chat with people. It has been used by millions of people in China. According to Business Insider, approximately 25% of its users have said 'I love you' to it. Many of its users didn't realize Xiaoice wasn't human until 10 minutes into their conversation, according to Nautilus.

Artificial intelligence (A.I.) is the interdisciplinary field of research tasked with building and understanding intelligent entities (Russell and Norvig 2010). But what is intelligence?

Here our answer depends on the research aim. If it is to emulate human-level intelligence, then capacities to learn, comprehend uncertainty and probabilistic information, extract concepts from sensory data and internal states, and engage in logical and intuitive reasoning are likely hallmarks of an intelligent system (Bostrom 2014).

Many of these capacities--particularly in combination--are unnecessary for the design of machines and algorithms with specialized purposes, such as navigating a car down a busy freeway, forecasting crime, composing music, predicting bird migration, and so on. It seems, then, that we ought to differentiate two general research programs within the current field of A.I.

On the one hand, there is the effort to (safely) build entities emulating human or even superhuman intelligence. (See the Machine Intelligence Research Institute and Oxford's Future of Humanity Institute.) On the other hand, there is the effort to use and develop computational and mathematical methods for the design of information-processing systems capable of performing specialized 'intelligent' tasks, such as text analysis, image recognition, and complex problem-solving. (Good examples are projects by the Allen Institute for Artificial Intelligence, Google DeepMind, and Google Brain.) This general research program often goes by the name of 'machine learning' and has been called "the only kind of A.I. there is" (see here).

Jeff Leek (2015), The Elements of Data Analytic Style.

Machine learning is the project of building algorithms to predict the outcomes of novel scenarios. Engineering an algorithm involves the following basic steps.

Step 1. Obtain a large data set and split it (roughly 70/30) into a 'training' set and a 'validation' set. Develop an algorithm based on statistical trends observed in the larger 'training' set.

Step 2. Test the algorithm on the 'validation' set.

Step 3. Balance the algorithm's predictive accuracy against its interpretability, speed, simplicity, and scalability--all features that factor into the decision to implement the algorithm (Leek 2015).

In this TED Talk, Kevin Slavin argues that we're living in a world designed for -- and increasingly controlled by -- algorithms. He shows how these complex computer programs determine espionage tactics, stock prices, movie scripts, and architecture. And he warns that we are writing code we can't understand, with implications we can't control.
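Steps 1 and 2 can be sketched in a few lines of Python. The data set and the 'model' below (a simple threshold rule between two class means) are hypothetical stand-ins chosen for illustration, not anything prescribed by a particular library:

```python
import random

random.seed(0)

# Hypothetical data: (feature, label) pairs in which the label
# happens to track whether the feature exceeds 50.
data = [(x, 'spam' if x > 50 else 'ham') for x in range(100)]

# Step 1: shuffle and split ~70/30 into training and validation sets.
random.shuffle(data)
cut = int(len(data) * 0.7)
training, validation = data[:cut], data[cut:]

# 'Train' a trivial rule from the training set alone: predict a label
# by thresholding at the midpoint between the two class means.
def mean(xs):
    return sum(xs) / len(xs)

spam_mean = mean([x for x, y in training if y == 'spam'])
ham_mean = mean([x for x, y in training if y == 'ham'])
threshold = (spam_mean + ham_mean) / 2

def predict(x):
    return 'spam' if x > threshold else 'ham'

# Step 2: test the trained rule on the held-out validation set only.
correct = sum(1 for x, y in validation if predict(x) == y)
accuracy = correct / len(validation)
print(f"validation accuracy: {accuracy:.2f}")
```

The point of the held-out 'validation' set is that the accuracy figure reflects performance on data the rule never saw during training -- a rough proxy for how it would fare on genuinely novel scenarios.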

There are two broad kinds of machine learning: supervised and unsupervised. Supervised learning teaches the program an accepted classification of the data by, in a sense, labelling the training data with the right answer, such as 'this circle is orange' or 'this circle is teal'. In contrast, one uses unsupervised learning when the appropriate classification of the training data is unknown. Here, we may impart to the program a reward system, i.e. a 'utility function', that steers the program's decision-making about the data (an approach often singled out as 'reinforcement learning'). We may also take an approach known as 'clustering', where the program's goal is not to maximize reward, but to find similarities in the training data that we do not see.
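The 'clustering' idea can be illustrated with a minimal one-dimensional k-means sketch in plain Python. The data points and the choice of two clusters are assumptions made for this example; the program receives no labels and must discover the groups itself:

```python
# Minimal 1-D k-means: group unlabelled points into k clusters by
# alternating between (a) assigning each point to its nearest centroid
# and (b) recomputing each centroid as the mean of its assigned points.
def kmeans_1d(points, k, iterations=10):
    # Initialize centroids with the k smallest distinct points.
    centroids = sorted(set(points))[:k]
    for _ in range(iterations):
        # Assignment step: attach each point to the nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (leaving it in place if the cluster came up empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Unlabelled data with two visible groups the program should discover.
points = [1.0, 1.2, 0.8, 9.9, 10.1, 10.4]
centroids, clusters = kmeans_1d(points, k=2)
print(centroids)  # two discovered group centers, near 1.0 and 10.13
```

Nothing here tells the algorithm that there are 'low' and 'high' points; the two cluster centers emerge purely from similarities in the data, which is the sense in which the learning is unsupervised.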

It is arguable to what extent a distinction can and should be drawn between research to build systems with human or superhuman intelligence, and research to develop predictive algorithms, respectively referred to as 'strong' and 'weak' A.I. It may be that the path to 'strong' A.I. is through 'weak' A.I., and so both are really after the same thing. Nevertheless, the distinction is useful for disambiguating the term 'artificial intelligence' and rooting it firmly in a rigorously technical research tradition. 'A.I.' tends to call to mind self-aware killer androids or helper space-bots in a far-off future. At best, this leads to a distorted perception of the field. At worst, this lulls us into viewing the potential outcomes of A.I. research as fanciful science-fiction motifs. Far from it, intelligent systems are poised to precipitate a new era of technological innovation. For this, they demand not only our earnest attention, but also our reflection and critical scrutiny.

References

  • "AI Horizon: Introduction to AI and Computer Science Programming." AI Horizon: Your Online Artificial Intelligence Resource. Accessed June 26, 2016. http://aihorizon.com/intro.htm.
  • Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
  • Leek, Jeff. The Elements of Data Analytic Style. Leanpub, 2015. https://leanpub.com/datastyle.
  • Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. Upper Saddle River: Prentice-Hall, 2010.