Years back, Douglas Hofstadter (of Gödel, Escher, Bach fame) worked with his research assistants at Indiana University to train a program to solve jumble puzzles the way a human does. By jumble puzzles I mean the game where you're given a set of letters and asked to arrange them into all of the valid words you can find. This is trivially easy for even a primitive computer: the machine simply tries every possible ordering of the letters and compares each one against a dictionary database for matches. That is an extremely efficient way to go about it; indeed, it's so efficient as to be inhuman. Hofstadter's group instead trained their program to solve jumbles the way a human might, by trial and error. It was vastly slower than the brute-force approach and sometimes missed valid words. And those flaws made it more human than the conventional method, not less. The same challenge presents itself to the AI maximalists out there: the more you boast of immensely efficient and accurate results, the more you're describing an engineered solution, not one that resembles human thought. We know an immense amount about the silicon chips we produce in big fabs; we know precious little about our brains. That asymmetry should make us far more cautious about what we think we know about AI.
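
For concreteness, here's a minimal sketch (in Python) of the brute-force dictionary approach described above. The tiny in-memory word list and the example letters are purely illustrative assumptions, not anything from Hofstadter's project.

```python
from itertools import permutations

def solve_jumble(letters, dictionary):
    """Brute-force jumble solver: try every ordering of every subset
    of the letters and keep the ones that appear in the dictionary."""
    letters = letters.lower()
    found = set()
    # Try every word length from 2 letters up to the full set.
    for length in range(2, len(letters) + 1):
        for combo in permutations(letters, length):
            candidate = "".join(combo)
            if candidate in dictionary:
                found.add(candidate)
    return sorted(found)

# Illustrative usage with a tiny in-memory "dictionary"; a real solver
# would load a full word list (e.g. /usr/share/dict/words) into a set.
words = {"tea", "eat", "ate", "tear", "rate", "art", "rat", "tar", "era", "ear"}
print(solve_jumble("reta", words))  # ['art', 'ate', 'ear', ...]
```

The whole method reduces to exhaustive enumeration plus a set lookup, which is exactly what makes it fast, exhaustive, and nothing like how a person works a jumble.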