Forty Years of Software Evolution

The Illusion of Artificial Intelligence

Since Aristotle, and even before, many philosophers have held the idea that human logic could be reduced to a formal, mathematical system.

A great deal of work on this topic was done at the beginning of the 20th century: Boole introduced symbolic logic; Frege developed formal logic; Russell and Whitehead tried to reduce even mathematics to logic; and, finally, Gödel showed that some true statements cannot be proven within a formal system. Nevertheless, the idea that human thought is basically formal logic remained very strong among researchers.

With the development of computers, many scientists began to believe that human intelligence could be reproduced, and surpassed, by the artificial intelligence of computers.

This idea, in a mixture of hope and fear, is still alive, giving life to a great deal of science fiction: films, novels, books, and academic studies.

Artificial intelligence is an illusion: the ways humans and computers think are inherently different. Computers use a logic based on exact comparisons and mathematics; humans proceed by loose similarities, influenced by many different pieces of information whose correlation is often ill-defined. This is the basis of intuition, and it can't be formalized; logic is only a part of human intelligence, used in situations where intuition fails, most often for lack of practical experience.

Moreover, even a single neuron is alive, able to adapt to changes and different situations; it is really quite different from a programmable electronic circuit.

Fuzzy logic and neural networks, as attempts to simulate neuron interactions, are very interesting, showing us that the basis of thinking can be something mechanical; but brains and computers remain two completely different things.

Because of this difference, humans are better at learning, adapting, and evolving, while computers are fast at data management and calculation; each should be used for what it does best.

With the increasing power of computers, more and more complex tasks can be done today by computer-based equipment, tasks once reserved for humans; but a computer is a computer and a brain is a brain, and computers must be programmed following their own logical schemes, not made to mimic humans, which is a pointless attempt. This is also true for neural networks: they are very effective tools for some specific problems in which an analytical treatment is unfeasible, but they are only an adaptive way to find a solution to a difficult mathematical problem.
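To make that last point concrete, here is a minimal sketch (a hypothetical, stdlib-only illustration, not any historical system) of a single "neuron" fitted by gradient descent. It approximates the line y = 2x + 1 from sample points, showing that such a network is just an adaptive numerical solver, nothing more:

```python
import random

random.seed(0)
# Training data: points sampled from the target function y = 2x + 1
samples = [(x, 2 * x + 1) for x in [i / 10 for i in range(-10, 11)]]

w, b = random.random(), random.random()  # the neuron's adjustable parameters
lr = 0.1                                 # learning rate

for _ in range(500):                     # repeated passes over the data
    for x, target in samples:
        y = w * x + b                    # forward pass: the neuron's output
        err = y - target                 # error drives the adjustment
        w -= lr * err * x                # nudge parameters down the gradient
        b -= lr * err

print(round(w, 2), round(b, 2))          # converges near w = 2, b = 1
```

The "learning" here is nothing but iterative numerical optimization: the parameters drift toward the values that minimize the error, with no understanding involved.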

But these were not the leading ideas among scientists, and Artificial Intelligence (AI) studies were fed by huge funding in the US and England in the sixties, and by the Japanese government in the eighties. Proud AI scientists promised everything, but there were not enough results, and the funding stopped after some years and many millions of dollars.

Mainly at US universities, such as Stanford and MIT, a great deal of work was done in the field of artificial intelligence, developing methods and ideas that drove the subsequent development of computer languages.

Computer languages were classified following an evolutionary-like scheme.

Prolog and Lisp were widely used for artificial intelligence studies. Lisp was developed at MIT by John McCarthy in 1958. The MIT AI laboratory was very active in the sixties and seventies, and in the seventies two MIT spinoffs produced the Lisp machines: personal workstations with hardware support for Lisp, programmed in Lisp, and with an operating system written in Lisp. Only some thousands of Lisp machines were produced; then their business was killed by the RISC workstations of the eighties, which were faster, cheaper, and came with a Lisp compiler.

The emphasis on artificial intelligence and logic, in spite of the overall failure, produced many ideas and methods that influenced computer science in the following years. Along this line were developed languages such as Smalltalk, Eiffel, Haskell, Scheme, Simula, and many, many others. It was a fundamental moment in the evolution of languages and computers; but none of this reached me, and I missed it all. In those years I produced only old-style FORTRAN programs for IBM 3090 and CDC mainframes. I also saw COBOL, Assembler, and Basic, but nothing else.

I always wonder how that could happen, but Italy was at the margins of computer technology: most universities had a very theoretical approach to computing, interested in algorithms and theory rather than in real programming, and the computer market was driven by the big vendors, often with privileged relationships with government institutions whose management was unable to understand what it was buying.

We have to remember that there was no internet in the seventies; information spread slowly, following rigid paths, and it was difficult even to find where to search for a given topic. Specific technical knowledge was the prerogative of limited circles. The internet changed everything, but that is another story.

This text is released under a Creative Commons license.