In this blog I share my thoughts about artificial intelligence as a field and its methods, as well as usability and design issues. More articles are available in the German version.
What is AI?
Part 2: Strong, weak and no AI
13.01.2019
Following up on the question of what distinguishes AI from non-AI, we look at the more fine-grained distinction between strong and weak AI.
This blog post is part of a series. Read the previous parts here:
Part 1: The original definition

We have seen how AI started with an intuitive notion of reproducing or simulating intelligence in the form of a computer program. The open question was how general the methods have to be to deserve the label AI as opposed to standard engineering.

The idea of intelligent machines has been taken up by philosophers to discuss whether this goal is possible or desirable. John Searle considered AI to be impossible because a computer can never be "conscious", illustrating the idea in his famous Chinese Room thought experiment. He argued that outward intelligent behavior (which he wrongly assumed could be implemented with simple rules) is distinct from consciousness. I have always wondered why anyone bothered to respond to such a foolish argument. But they did, and the result was a split of AI into imaginary subfields:
The claim that machines can be conscious is called the strong AI claim; the weak AI position makes no such claim. [1]

Psychologists and neuroscientists avoid the term consciousness, since it contradicts the generally accepted view that human behavior stems from physical, albeit very complicated, processes. But if we just follow fixed mechanical rules, is there any such thing as free will? And we all feel something like consciousness; where does it come from? Current scientific methods cannot answer these questions, and they are therefore usually considered to be outside the scope of science. This also implies that strong AI is not a worthwhile scientific endeavour, since it involves the notion of consciousness.

Coming back to the original endeavour of understanding and reproducing intelligent behavior with computers: to make philosophers happy, we call it weak AI and ignore the consciousness debate. But this is not the end, unfortunately. Some people seem to equate intelligence with consciousness, and consequently no consciousness implies no (general) intelligence (in exact contradiction of Searle's argument). In this way, the original idea of AI got thrown into the strong AI box, which had already been labeled as unscientific.

What remains is the weaker version of weak AI, also called narrow AI: solving small, well-defined problems. This field has been celebrated for tremendous achievements in the last decades. Apart from the little detail that these achievements had nothing to do with the understanding of intelligence (they are mostly due to the increase in computing power and infrastructure), how are they different from any other type of computational engineering? They are not, and they should not be called AI.

The narrowing of AI down to standard engineering is tied to the dilemma of funding. Specific applications with well-defined goals are more justifiable, graspable, and achievable within the short time horizons of third-party funded research projects, and are therefore the easier path for most scientists. AI in the sense of general mechanisms applicable to a wide range of problems is so hard that we have not even found the right research questions. In an ideal world this would be seen as a challenge. But in the world we live in, researchers are forced to tackle small, simple problems in order to publish at the prescribed pace. Instead of producing, one would have to sit back and think, very likely in the wrong direction most of the time.

In my opinion, the distinction between strong and weak AI has damaged the field and contributed to the confusion about what we want to call AI. The celebrated successes all fall into the domain of narrow AI (the weakened version of weak AI), while the scenarios of thinking robots that take over the world (for good or bad) rest on very optimistic (not to say unrealistic) expectations of general AI (the general version of weak AI).

  1. Stuart Russell, Peter Norvig. Artificial Intelligence: A Modern Approach. 1st edition. Prentice-Hall, 1995. p. 29.
What is AI?
Part 1: The original definition
04.01.2019
A fundamental problem in the public discussion is the missing definition of Artificial Intelligence. In this blog series I provide possible distinctions between AI and non-AI. Whatever anybody believes AI to be, one should make the definition clear before starting any discussion of the impacts of AI.

I usually try to avoid any definition of Artificial Intelligence. There is no well-agreed definition of human intelligence, so it seems pretentious to look for one for the artificial variant. I have changed my mind, however, since AI is discussed in so many different variants, often in the same text. The usual story goes: "We already have AI or are very close to having it, therefore AI development will accelerate and destroy/save humanity." The problem here is that the definitions of AI change from phrase to phrase, but since the same word is used, they are mingled into one.

Let's go back to the time when the term "Artificial Intelligence" was coined: 1955. Computers were still a rarity, the treasure of any university that could afford one. People might just have been happy to have a tool for doing fast computations. But the idea of thinking machines is much older than computers, and with the remarkable arithmetic power now available, old dreams were rekindled. In 1955 John McCarthy and colleagues proposed a workshop titled "Dartmouth Summer Research Project on Artificial Intelligence", thereby coining the term. The proposal [1] starts as follows:

We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
This paragraph shows the definitional problem of AI from its very beginnings: the generality of the methods. If a machine is expected to learn, did they mean that it can learn anything in any circumstance, or does learning refer to one specific problem (in which case learning could simply mean storing new values in a database)? As for solving problems reserved for humans: do we have to find one method that solves them all, or are we happy with a tailored solution for each problem?
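
To make the narrow reading concrete, here is a deliberately trivial sketch (my own toy illustration, not something from the proposal): a "learner" that does nothing but store observed values in a table. It satisfies the letter of "a machine that learns" while generalizing to nothing it has not seen before.

    # A hypothetical, deliberately trivial "learner": learning as nothing more
    # than storing new values in a lookup table.
    class TableLearner:
        def __init__(self):
            self.memory = {}                    # the "database" of learned facts

        def learn(self, situation, outcome):
            self.memory[situation] = outcome    # learning = storing a new value

        def predict(self, situation):
            return self.memory.get(situation)   # no generalization to unseen cases

    learner = TableLearner()
    learner.learn("sky is grey", "rain")
    print(learner.predict("sky is grey"))       # 'rain'
    print(learner.predict("sky is blue"))       # None -- nothing was "learned" here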

Basic idea of AI: the ability to expand a simple specification of a task to a complicated program

The zigzagged area in this picture represents a tough problem that requires human intelligence. One interpretation of AI could be to write a computer program that solves the complicated task as well as people do. This would be a typical engineering task, so why invent a new name for it? I think what McCarthy and his colleagues had in mind is the process shown in the middle. Instead of hand-coding the complete task, we would like to specify some aspects of the task (we somehow have to tell the machine what we want from it), and this simple specification would be expanded by AI mechanisms into the more complicated program. There is still a lot of room for interpretation: how big or small is the central specification? How much is added by "intelligent" mechanisms? How much is so specific to the task that it has to be implemented manually?
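
To illustrate what such an expansion could look like, here is a minimal sketch (my own toy example with assumed names such as generic_solver; it is not taken from the Dartmouth proposal). A generic search procedure plays the role of the "AI mechanism", and the task is handed to it only as a small specification: an initial state, a successor function, and a goal test.

    from collections import deque

    def generic_solver(initial, successors, is_goal):
        """A general mechanism: breadth-first search over any problem
        given only as a small specification."""
        frontier = deque([(initial, [])])        # (state, actions taken so far)
        seen = {initial}
        while frontier:
            state, path = frontier.popleft()
            if is_goal(state):
                return path                      # sequence of actions reaching the goal
            for action, next_state in successors(state):
                if next_state not in seen:
                    seen.add(next_state)
                    frontier.append((next_state, path + [action]))
        return None                              # no solution exists

    # The task-specific part is only this small specification:
    # reach 17 starting from 1, using "double" and "add one" as actions.
    solution = generic_solver(
        initial=1,
        successors=lambda n: [("double", 2 * n), ("add one", n + 1)],
        is_goal=lambda n: n == 17,
    )
    print(solution)  # ['double', 'double', 'double', 'double', 'add one']

The point of the sketch is only the division of labor: the few lines of specification stay simple, while the general mechanism supplies the actual problem-solving behavior. How much such a mechanism can take over, and how much must remain hand-coded, is exactly the room for interpretation mentioned above.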

The picture also illustrates two possible ways to develop AI: from the inside out or from the outside in. Allen Newell and Herbert Simon followed the first approach. They started early with the Logic Theorist and later the General Problem Solver to develop general mechanisms that solve a wide range of problems (not necessarily any problem). Even at the Dartmouth workshop it became clear that the outside-in method would simplify funding [2]. Working on specific problems, one might gradually identify commonalities and thereby extract more general mechanisms. I think both approaches are important and should interact.

But do we now know what differentiates AI from non-AI? Not really: the AI arrows represent what any software library does, namely extending a simple specification of a frequently occurring problem class into a more complex solution.

I will not be able to give you any real definition of AI, but I will provide several perspectives from which you can create your own understanding (or perplexity) of what AI is.

  1. John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude E. Shannon. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. 1955.
A journalist understanding AI
02.12.2018
Is the AI hype coming to an end? At least some journalists, such as Esther Paniagua, are starting to realize that AI is not radically changing the world.

Esther Paniagua recently published this wonderful article (in Spanish), in which she gives a realistic picture of the state of AI. She took the trouble to interview several AI experts, including myself, to really understand where we are and how slow progress still is. I am delighted to read such realistic accounts of AI as a counterbalance to the apocalyptic or utopian futures that are often predicted.