Control of Combinatorial Explosion

One of the things I appreciate deeply about impactful school communities is how we treat knowledge and knowledge structures. Roger C. Schank once wrote that “[k]nowing what particular knowledge structure we are in while processing can help us determine how much we want to know about a given event; that is, contexts help narrow the inference process” (AI Magazine 8.4). Continuing to focus on how we, as humans, engage in the intelligent process that we call inference, he says “[m]any possible ways exist to control the combinatorics of the inference process: deciding among them and implementing them is [serious]” (63).

As we continue to ponder the relevance of “outdated school structures” [a term bandied about with some frequency in pseudo-educational opining as well as genuine educational research] in the age of Artificial Intelligence (AI), we would do well to consider Schank’s message. The quotations in the previous paragraph were taken from his article “What is AI, Anyway?”, which he penned in…wait for it…1987. In this must-read work for anyone interested in contemplating the future of AI, Schank discusses what ‘real’ AI is, or would be. Later in the article, he states that “[a] program is not an AI program because it uses some form of logic or if-then rules. Expert systems are only AI programs if they attack some AI issue.” He goes on to identify ten enduring issues (also termed “problems” in the philosophical sense) that a [software] programme would have to attack in order for us to consider it, truly, to be AI. One of those ten is “control of combinatorial explosion,” and it bears directly on the human process of inference.

He proffers that, “[o]nce you allow a program to make assumptions beyond what it has been told about what may be true, the possibility that it could go on forever doing this assuming becomes quite real. At what point do you turn off your mind and decide that you have thought enough about a problem? Arbitrary limits are just that, arbitrary. […] Many possible ways exist to control the combinatorics of the inference process: deciding among them and implementing them is a serious AI problem if the combinatorial explosion is first started by an AI process.” (63)
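To feel the force of this concretely, consider a toy sketch (my own illustration, not anything from Schank’s article): let every known fact license two further assumptions. Without a cut-off, the set of inferences doubles at each step; with one, the limit we choose is, as Schank says, arbitrary.

```python
# Toy forward-chaining inference: each fact spawns two new "assumptions".
# All names here are invented for illustration.

def infer(fact):
    return [fact + "a", fact + "b"]

def forward_chain(seed, max_depth):
    """Breadth-first inference, cut off (arbitrarily) at max_depth."""
    frontier, known = [seed], {seed}
    for _ in range(max_depth):
        frontier = [new for fact in frontier for new in infer(fact)
                    if new not in known]
        known.update(frontier)
    return known

# The combinatorics: depth d yields 2**(d + 1) - 1 facts in total.
for depth in (4, 8, 16):
    print(depth, len(forward_chain("x", depth)))   # 31, 511, 131071
```

Nothing about the cut-off is principled; it merely stops the assuming. Deciding where and how to stop, in a way that preserves the useful inferences, is the hard part Schank is naming.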

Schank, whose article formed part of the dissemination of the findings from the Yale Artificial Intelligence Project at the time, and who continues to point out the failings of what we are calling AI in 2018 (‘this isn’t AI,’ he would say), comes to what many educationalists already know, despite all the hype about how AI is going to solve humanity’s greatest challenges: “The ability to wonder why, to generate a good question about what is going on, and the ability to invent an answer, to explain what has gone on to oneself, is at the heart of intelligence. We would accept no human who failed to wonder or explain as very intelligent. In the end, we will have to judge AI programs by the same criteria.”

We’re not there yet. Decades from now, we will look back at the present and recognise that we were subject to marketing and sales hype around machine learning rather than true AI. We are still in the earliest days (even 31 years after Schank’s cited article), and ‘intelligence’ is exactly why the question “why school?” remains relevant and a goal worth pursuing. We are seeking ‘the heart of intelligence.’

Remember this when you hear about the ‘next great thing’ that is ostensibly some form of AI. If this be-all and end-all of programmes is based on logic and if-then rules, it isn’t AI. It might perform a function (such as calculation) better than humans, but performing a function better than humans doesn’t enter into the definition of intelligence. After all, we require calculators in trigonometry classes; did the introduction of the calculator make students (or ourselves) any less intelligent?

"What is AI, Anyway?" Roger C. Schank. AI Magazine 8.4 (1987)

The article sets out the scientific and technological goals of artificial intelligence and proposes ten fundamental problems in AI research; it grew out of the Yale Artificial Intelligence Project.

“Because of the massive, often quite unintelligible publicity that it gets, artificial intelligence is almost completely misunderstood by individuals outside the field. Even AI’s practitioners are somewhat confused about what AI really is.” (1987)

Is AI mathematics?

Is AI software engineering?

Is AI linguistics?

Is AI psychology?

It [probably] doesn’t have just one answer. “What AI is depends heavily on the goals of the researchers involved, and any definition of AI is dependent upon the methods that are being employed in building AI models. Last, of course, it is a question of results. These issues about what AI is exist precisely because the development of AI has not yet been completed. They will disappear entirely when a machine really is the way writers of science fiction have imagined it could be.”

“Most practitioners would agree on two main goals in AI. The primary goal is to build an intelligent machine. The second goal is to find out about the nature of intelligence. Both goals have at their heart a need to define intelligence. One way to attack this problem is to attempt to list some features that we would expect an intelligent entity to have. None of these features would define intelligence; indeed, a being could lack any one of them and still be considered intelligent. Nevertheless, each attribute would be part of intelligence in its way. [They are] communication, internal knowledge, world knowledge, intentionality, and creativity.”

“If the communication lines are narrow with a person, we might consider this person unintelligent. No matter how smart your dog is, he can’t understand when you discuss physics, which does not mean that the dog doesn’t understand something about physics. […] In other words, the easier it is to communicate with an entity, the more intelligent it seems. Obviously, many exceptions exist to this general feature of intelligence, for example, people who are considered intelligent who are impossible to talk to. Nevertheless, this feature of intelligence is still significant, even if it is not absolutely essential.”

“We expect intelligent entities to have some knowledge about themselves. They should know when they need something, they should know what they think about something, and they should know that they know it. At present, probably only humans can do all this ‘knowing.’ We could program computers to seem like they know what they know, but it would be hard to tell if they really did.”

“Intelligence also involves being aware of the outside world and being able to find and utilise the information that one has about the outside world. […] Intelligent entities must have an ability to see new experiences in terms of old ones. This statement implies an ability to retrieve old experiences that would have to have been codified in such a way as to make them available in a variety of different circumstances. Entities that do not have this ability can be momentarily intelligent but not globally intelligent.”

“Goal-driven behaviour means knowing when one wants something and knowing a plan to get what one wants. […] Of course, sheer number of recorded plans would probably not be a terrific measure of intelligence. If it were, machines that met that criterion could easily be constructed. The real criterion with respect to plans has to do with the inter-relatedness of plans and their storage in a way that is abstract enough to allow a plan constructed for situation A to be adapted and used in situation B.”
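That last sentence is worth a concrete (entirely invented) sketch: a plan stored under an abstract goal, rather than as a one-off recipe, can be retrieved and re-bound for a situation it was never written for.

```python
# Hypothetical plan library: plans indexed by an abstract goal schema,
# so a plan built for situation A can be adapted to situation B.

PLANS = {
    "obtain(resource)": [
        "locate {resource}",
        "travel to the {resource}",
        "acquire the {resource}",
    ],
}

def adapt(goal_schema, **bindings):
    """Retrieve the abstract plan and bind it to the current situation."""
    return [step.format(**bindings) for step in PLANS[goal_schema]]

print(adapt("obtain(resource)", resource="food"))   # situation A
print(adapt("obtain(resource)", resource="fuel"))   # situation B
```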

“Finally, every intelligent entity is assumed to have some degree of creativity. […] It certainly means being able to adapt to changes in one’s environment and to be able to learn from experiences. Thus, an entity that doesn’t learn is probably not intelligent, except momentarily.”

“What is really the case is that it is not possible to clearly define which pieces of new software are AI and which are not. In actuality, AI must have an issue-related definition. In other words, people do arithmetic and so do computers. The fact is, however, that no one considers a program which calculates to be an AI program, nor would they, even if the program calculated in exactly the same way people do. The reason this is so is that calculation is not seen as a fundamental problem of intelligent behaviour and that computers are already better at calculation than people are. […] To put this argument another way, what AI is is defined not by the methodologies used in AI but by the problems attacked by these methodologies. […] A program is not an AI program because it uses some form of logic or if-then rules. Expert systems are only AI programs if they attack some AI issue. A rule-based system is not an AI program just because it uses rules or was written with an expert system shell. It is an AI program if it addresses an AI issue.” [These issues change over time, though.]
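To make the distinction concrete, here is a deliberately mundane rule-based program (the rules are invented for illustration). It uses if-then rules and could have been written with any expert-system shell, yet by Schank’s criterion it is not AI: it attacks none of the enduring issues listed below.

```python
# A trivial rule-based "diagnoser": if-then rules, but no AI issue
# attacked, i.e. no inference control, no indexing, no generalisation,
# no learning. Rules and symptoms are hypothetical.

RULES = [
    (lambda findings: {"fever", "cough"} <= findings, "suspect flu"),
    (lambda findings: {"fever", "rash"} <= findings, "suspect measles"),
]

def diagnose(findings):
    """Fire every rule whose condition holds for the given findings."""
    return [conclusion for test, conclusion in RULES if test(findings)]

print(diagnose({"fever", "cough"}))   # ['suspect flu']
```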

“Some problems will endure”:

  1. Representation

  2. Decoding

  3. Inference

  4. Control of Combinatorial Explosion

  5. Indexing

  6. Prediction and Recovery

  7. Dynamic Modification

  8. Generalisation

  9. Curiosity

  10. Creativity
