Some time ago I was thinking about writing a paper on how the AI community had thoroughly misunderstood what it meant to be intelligent, so their efforts were necessarily in vain. With characteristic restraint, I called it ‘AI is a lost cause’. I have yet to complete it, and I suspect that intelligent life on this planet will have evolved to a higher (and indisputably non-computable) plane before I do.
However, one of the striking results of my background research was discovering what Turing had actually said about what he thought he was doing. For example, he described his own model as dealing with ‘problems which can be solved by human clerical labour, working to fixed rules, and without understanding’ (quoted in Copeland 2002). He also described electronic computers as ‘intended to carry out any definite rule of thumb process which could have been done by a human operator working in a disciplined but unintelligent manner’ (in Copeland 2002, emphasis added; see Hodges 1992: 484 and passim for other examples).
This is so far from intelligence that it is almost a specification for Searle’s ‘Chinese room’ – the very antithesis of intelligent activity.
I leave the reader to ponder the question of whether Turing’s descriptions are even slightly compatible with pursuing human (or other kinds of natural) intelligence in computational terms. I think the answer is pretty clear. So, accordingly, is the value of Jerry Fodor’s interesting/ludicrous claim that ‘Pretty much everything about the cognitive mind that we have learned in the last fifty years or so was taught us either by Chomsky or Turing’ (in the TLS, 13 September 2002).
If you would like to see the existing draft of ‘AI is a lost cause’ – which is perhaps 90% ready – please click here.