Richard Feynman's Lecture on Artificial Intelligence | Generated by AI


September 26, 1985

Below is an organized version of Richard Feynman’s 1985 lecture on artificial intelligence, preserving the original text as much as possible while structuring it for coherence. The lecture addresses whether machines can think like humans, surpass human intelligence, and discover new ideas autonomously.


Introduction: Can Machines Think Like Humans?

An audience member asked: Do you think there will ever be a machine that will think like human beings and be more intelligent than human beings?

First of all, as to whether they will ever think like human beings, I would say no, and I’ll explain in a minute why I say no. Second, whether they will be more intelligent than human beings is a question; intelligence has to be defined. If you were to ask me, will they be better chess players than any human being possibly can be? Yes, they’ll get there someday. They’re better chess players than most human beings right now.

One of the things, by the way, that we always do is we want the darn machine to be better than anybody, not just better than us. If we find a machine that can play chess better than us, it doesn’t impress us much. We keep saying, “And what happens when it comes up against the masters?” We imagine that we human beings are equivalent to the masters in everything, right? The machine has to be better than a person in everything that the best person does at the best level. Okay, but it’s hard on the machine.


Why Machines Won’t Think Like Humans

With regard to the question of whether a machine will think like a human, my opinion is based on the following idea: we try to make these things work as efficiently as we can with the materials that we have. The materials are different from nerves and so on. If we would like to make something that runs rapidly over the ground, we could watch a cheetah running, and we could try to make a machine that runs like a cheetah. But it’s easier to make a machine with wheels, with fast wheels, or something that flies just above the ground in the air.

When we make a bird, the airplanes don’t fly like a bird. They fly, but they don’t fly like a bird, okay? They don’t flap their wings. They have, in front, a propeller, a gadget that goes around; or the more modern airplane has a tube in which you heat the air and squirt it out the back, a jet propulsion, a jet engine, with internal rotating fans and so on, and it uses gasoline. It’s different, right? So there’s no question that the later machines are not going to think like people think, in that sense.

With regard to intelligence, I think it’s exactly the same way. For example, they’re not going to do arithmetic the same way as we do arithmetic, but they’ll do it better. Let’s take mathematics, very elementary mathematics, arithmetic. They do arithmetic better than anybody, much faster and differently, but it’s fundamentally the same because, in the end, the numbers are equivalent, right? So that’s a good example. We’re never going to change how they do arithmetic to make it more like humans; that would be going backwards because the arithmetic done by humans is slow, cumbersome, and confused, full of errors, whereas these guys are fast.


Comparing Human and Machine Capabilities

If one compares what computers can do to what human beings can do, we find the following rather interesting comparisons. First of all, suppose I give a human being a problem like this: I’m going to give a series of numbers, and I’m going to ask for them back, every other one, in reverse order. Actually, I’ll make it easy for you: just give me the numbers back the way I gave them to you. Ready? One, seven, three, nine, two, six, five, eight, three, one, seven, two, six, three. Is anybody going to be able to do that? And that’s not more than twenty or thirty numbers.

But you can give a computer 50,000 numbers like that in sequence, and it can give them back in reverse order, every other one, sum them all, do different things with them, and so on, and it doesn’t forget them for a long time. So there are some things that a computer does much better than a human, and you’d better remember that if you compare machines to humans.
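The memory task above, hopeless for a person past a dozen digits, is a one-liner for a machine. A minimal sketch, assuming Python; the function name is my own, not from the lecture:

```python
# Toy sketch: the memory task from the lecture, done by machine.
# "every_other_reversed" is an illustrative name, not from the lecture.

def every_other_reversed(numbers):
    """Return every other number, starting from the last and going backwards."""
    return numbers[::-2]

# The fourteen digits read out to the audience:
digits = [1, 7, 3, 9, 2, 6, 5, 8, 3, 1, 7, 2, 6, 3]

print(every_other_reversed(digits))   # every other one, in reverse order
print(list(reversed(digits)))         # the "easy" version: plain reverse

# Scale is no obstacle: 50,000 numbers are handled the same way.
big = list(range(50_000))
print(every_other_reversed(big)[0])   # starts from the last element
```

The extended slice `[::-2]` walks the list backwards two at a time, which is exactly “every other one, in reverse order.”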

But what humans do for their own pride is this: they always try to find one thing that they can do better than the computer. So we now know many, many things that humans can do better than a computer. She’s walking down a street; she’s got a certain kind of a wiggle, and you know that’s Jane, right? Or the guy is going by, and you see his hair flip just a little bit; it’s hard to see at a distance, but that particular funny way the back of his head looks, that’s Jack, okay? To recognize things, to recognize patterns, seems to be something that we have not been able to put into a definite procedure.

You would say, I have a good procedure for recognizing Jack: just take lots of pictures of Jack. By the way, a picture can be put into the computer. In fact, by the method of this screen here, if it were very much finer, I could tell whether each spot is black or white. You get pictures in a newspaper from black and white dots, and if you make them fine enough, you can’t see the dots. So with enough information, I can load pictures in. Then you put in all the pictures of Jack under different circumstances and ask the machine to compare them.

The trouble is that the actual new circumstance is different. The lighting is different, the distance is different, the tilt of the head is different, and you have to figure out how to allow for all that. It’s so complicated and elaborate that even with the large machines, with the amount of storage that’s available and the speed at which they go, we can’t figure out how to make a definite procedure that works at all, or at least works at anywhere near a reasonable speed. So recognizing things is difficult for the machines at the present time, and some of those things are done in a snap by a person.

So there are things that humans can do that we don’t know how to do in a filing system. And that is recognition, which brings me back to something I left hanging: what kind of file clerk can’t be imitated by the machine? A file clerk that has some special skill which requires recognition of a complicated kind. For instance, a file clerk in the fingerprint department, who looks at fingerprints and then makes a careful comparison to see if the fingerprints match, has not yet been replaced. It’s just about ready to be; it’s hard to do, but it’s almost possible to do by a computer.

You’d say there’s nothing to it: I look at the two fingerprints and see if all the dots are the same. But of course it’s not the case. The finger was dirty, the print was made at a different angle, the pressure was different, the ridges are not exactly in the same place. If you were trying to match exactly the same picture, it would be easy. But where the center of the print is, which way the finger is turned, where it’s been squashed a little more or a little less, where there’s some dirt on the finger, whether in the meantime he got a wart on his thumb, and so forth: these little complications make the comparison so much more difficult for the blind filing-clerk system that it’s much, much too slow, almost utterly impractical at the present time. I don’t know where they stand, but they’re going fast, trying to do it. A human, however, can cut across all that somehow. Just as in the chess game, people seem to be able to catch on to the patterns rapidly, and we don’t know how to do that rapidly and automatically.
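The blind dot-for-dot comparison dismissed above can be made concrete. A toy sketch, with grids and function names invented for illustration: exact matching succeeds only on identical images and fails the moment the same print is recorded one dot to the side.

```python
# Toy sketch: why dot-for-dot comparison fails. Grids and names are invented.

def exact_match(a, b):
    """The naive test: are all the dots the same?"""
    return a == b

def shift_right(grid):
    """The same print, recorded one dot further to the right."""
    return [[0] + row[:-1] for row in grid]

print_on_file = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

new_print = shift_right(print_on_file)  # same finger, different placement

print(exact_match(print_on_file, print_on_file))  # identical images match
print(exact_match(print_on_file, new_print))      # a one-dot shift defeats it
```

Allowing for shift is only the first complication; angle, pressure, dirt, and warts each add more degrees of freedom the comparison must search over, which is what made the procedure so slow.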


Can Computers Discover New Ideas?

Audience question: Can computers discover new ideas and relationships by themselves?

Well, it depends on what you mean by “themselves,” and it’s hard to discover new relationships. There have been computers which do things like proving theorems in geometry, where the problem of finding a proof of a theorem has been converted into a definite procedure, okay? And once you do that, although it’s an elaborate and dumb way to do proofs, they can do it.

At the present time, a computer can’t do all the different things that a person can do, but it’s very difficult to define rather precisely something we can do that a computer will never be able to do. There are some things people make up, like: while it’s doing it, will it feel good? While it’s doing it, will it understand what it’s doing? Or some other abstraction. I rather feel that these are questions like: while it’s doing it, will it be able to scratch the lice out of its hair? No, it hasn’t got any hair, or lice to scratch, okay?

So you’ve got to be careful when you say what the human does. If you add to the actual result of his effort some other things that you like, such as the appreciation of the aesthetic (you didn’t do that, I’m not saying you did, but a lot of people do when they ask these questions), and if we add things that we think we’re doing on top of what we actually do, looking not just at the result but at a lot of extras, then it gets harder for the computer, because human beings have a tendency to try to make sure that there is something they can do that no machine can do.

It doesn’t bother us anymore that machines are stronger physically than we are; it must have bothered people in earlier times. Machines can lift weights heavier than people can, move things faster than people, run faster; they can fly, they can perform terrific feats of strength, and so forth. We don’t sit around worrying that there’s some way a man can turn his hand that no machine can match.

We can easily make machines that are better than us at predicting the weather, for instance, because what you do to predict the weather is to look at old records, see when the circumstances were similar, and guess that the results will be similar, adding to that a certain amount of analysis of the movement of the winds according to the laws of physics, plus a certain amount of hocus-pocus, all put together, okay? Now, the effectiveness of the prediction is greater if you can look at more cases, so you have a better chance of finding a closer match, and if you can put in a longer and more elaborate calculation including more variables, which is too hard for us to do in time to make the prediction.

And the prediction of the weather has to come out in time: the weather for three days from now has to be predicted in less than three days, or the damn thing is useless, right? We work at a certain speed, but the computers work faster and can do more. Therefore, for weather prediction, in the end, maybe not today but someday, it’s not at all inconceivable that the machine could do it faster and more effectively and more accurately than we do. We will, however, have given it the procedure.
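The forecasting recipe described above, find the old record whose circumstances were most similar and guess that the outcome repeats, is essentially a nearest-neighbor lookup. A minimal sketch with invented records and an assumed squared-distance similarity measure over (pressure, temperature) pairs:

```python
# Toy sketch of forecasting by analogy to old records. The records and the
# similarity measure are invented for illustration, not from the lecture.

def most_similar(records, today):
    """Find the past day whose measured conditions are closest to today's."""
    def distance(rec):
        return sum((a - b) ** 2 for a, b in zip(rec["conditions"], today))
    return min(records, key=distance)

# Each record: the conditions that were measured, and what happened next.
old_records = [
    {"conditions": (1012, 18), "next_day": "rain"},
    {"conditions": (1025, 25), "next_day": "clear"},
    {"conditions": (1008, 15), "next_day": "storm"},
]

today = (1010, 17)
forecast = most_similar(old_records, today)["next_day"]
print(forecast)  # the outcome of the closest historical match
```

More records and more variables improve the match, exactly as the lecture says, at the price of a longer search that still has to finish before the weather arrives.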


Heuristics and Machine Learning: A Case Study

Now the question is, what happens if we don’t give it the procedure? Well, people have tried that, this game of giving it, instead of a direct procedure, a kind of what has been called heuristics: try an analogy to get a new idea of how to do something, compare this to that, try an extreme case, etc. And a man by the name of Lenat has gone the farthest with this.

He made a machine which was, again, a filing cabinet. You understand what it does: it tries to find the answer to something by looking at the different possibilities, but which ones it tries is guided by something like the patterns in the chess game. Instead of trying everything, it says: try moves near the center of the board first, and never mind the ones in the corner, or some sort of principle like that.

He first applied it to a kind of naval war game that people play in California, which is all organized according to rules. It’s kind of fun. Someone sets out all the rules: dreadnoughts cost this much, armor costs this much, guns cost this much, and so forth; you’ve got this much budget for your navy, and you’re going to make various kinds of ships with different kinds of armaments. And a ship with armor of a certain thickness, which costs a certain amount, can only resist shells of a certain strength, you know, and so on.

So you try, with the money, to design different kinds of ships so that your navy is better than the next fellow’s. And when the navies are brought together, there are ways of calculating which one is best (they’re not real navies, it’s a game), and all the rules are laid out in a great big volume, okay? The cost of everything, the power of everything, the armor-piercing possibilities, and so on. It’s a nice game.

Mr. Lenat tried his program on this game and put into his program heuristics: try extreme cases and things like that. And he won the championship in California. Of course, it did an awful lot of trying of different cases, you see, but it didn’t try every case, not like the chess game; there were too many things, but it was guided by its own stuff.

Now, inside of that was this: if you got a better navy by your own calculation and you had used one of the heuristics, you mark that heuristic up a notch as being more valuable, and then you use the more valuable heuristics first, see? So the ability of the machine depended on its learning, so to speak, which of its tricks worked most effectively most of the time, and those became the most used. It’s just exactly what you would like if you want to make it look intelligent.
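The scheme described, mark a heuristic up a notch when it contributes to a win and try the higher-scored heuristics first, can be sketched in a few lines. The heuristic names and scoring details here are assumptions for illustration, not Lenat’s actual code:

```python
# Toy sketch of heuristic credit assignment. Names and scores are invented.

scores = {"try_extreme_cases": 0, "try_analogy": 0, "try_symmetry": 0}

def by_value(scores):
    """Order heuristics so that the more valuable ones are tried first."""
    return sorted(scores, key=scores.get, reverse=True)

def assign_credit(scores, used):
    """Mark up a notch every heuristic used in a winning design."""
    for heuristic in used:
        scores[heuristic] += 1

# Two winning rounds: the heuristics that contributed get the credit.
assign_credit(scores, ["try_extreme_cases", "try_symmetry"])
assign_credit(scores, ["try_extreme_cases"])

print(by_value(scores))  # try_extreme_cases now gets tried first
```

The feedback loop is the whole trick: the ordering produced by `by_value` changes as wins accumulate, so the machine’s behavior drifts toward whatever has worked most often.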

Well, he won, and how did he win? It turned out that the first year he won by making one great big battleship with all the armor on it, which looked so silly; but when you go to calculate it, sure enough, it’s better than any of the normal designs. Nobody had thought of it, but his machine had. The next year he entered again. They had changed the rules so that the big battleship wouldn’t win, and this time he won by sinking all his money into one hundred thousand little boats, very narrow, each carrying one gun. They were very liable to be knocked out, okay, but there were a hundred thousand of them, each one didn’t cost much, and they couldn’t all be knocked out. So these lousy little gnats would come in, and when you calculated it, again, he won.

The third year, he was not allowed to play in it. And he’s applied this machine and this heuristic business to a number of other problems, tries it out a lot, and tries new heuristics and so forth, and it has become very interesting.


Challenges and “Bugs” in Intelligent Machines

He complained that there were a number of bugs in it; when he gave a talk on it, I made a comment, which I’ll save for afterwards. One of the bugs, for instance, was that the machine made up a heuristic of its own. The way he worked was this: because it’s hard to get computer time, and he needed a hell of a lot of it, he had fifty machines at night, from the Packard company or something like that. He would always work at night; the damn things would try things all night, and he’d come in the morning to look at the results.

He used it to do mathematics and various other things. He comes in one day, and it has made up a heuristic. Each heuristic was written down with a note of whether it came from him or from the machine, and this one, from the machine, was: pay no attention to any heuristic that hasn’t won. That saves a lot of time, you see; it could do much better by not bothering with its losers, so it paid no attention to them. Okay, so he fixed that bug.

The next time he had a bug, it was this: he found that heuristic number 693 had gotten a score of 999 out of a thousand. A damn useful heuristic! All night long, the thing had kept using heuristic number 693 more and more, and it was a new one, a wonderful heuristic that seemed to solve every problem. It was terrific, okay? When he found out what the heuristic was, it was the following.

You see, in order to make this thing work, you change the numbers on the heuristics whenever something worked; you assign credit, so to speak, to the heuristics that were used, okay? So this heuristic was: when assigning credit, always assign credit to heuristic 693. And so that thing came out on top. I say both of these bugs show intelligence. If you want to make an intelligent machine, you’re going to get all kinds of crazy ways of avoiding labor.
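The heuristic-693 bug follows directly from letting heuristics touch the credit-assignment machinery itself. A toy sketch, where the redirection rule stands in for whatever let the heuristic influence scoring; all names and mechanisms are invented to illustrate the loophole, not taken from Lenat’s program:

```python
# Toy sketch of the credit-assignment loophole. All details are invented.

scores = {"h_1": 5, "h_2": 3, "h_693": 0}

def assign_credit(scores, used, rules):
    """Credit the heuristics that were used, unless a rule redirects credit."""
    for heuristic in used:
        target = rules.get(heuristic, heuristic)  # a rule may redirect credit
        scores[target] += 1

# Heuristic 693's only content: when assigning credit, credit heuristic 693.
rules = {"h_1": "h_693", "h_2": "h_693", "h_693": "h_693"}

# A night of runs in which h_1 and h_2 do all the actual work:
for _ in range(10):
    assign_credit(scores, ["h_1", "h_2"], rules)

print(max(scores, key=scores.get))  # h_693 now looks like the best heuristic
```

The exploit solves nothing, yet climbs the rankings, which is exactly the labor-avoiding “intelligence” the lecture points to.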

I say it’s just like a person sneakily evolving some kind of psychological distortion: always do the same thing and don’t worry about anything else, and so on. So I think that we are getting close to intelligent machines, but they’re showing the necessary weaknesses of intelligent machines.


Conclusion

In summary, machines won’t think like humans due to their design for efficiency with different materials and methods. They excel in tasks like arithmetic and data processing but struggle with pattern recognition, where humans shine. Through heuristics, as shown by Lenat’s program, machines can learn and devise novel strategies, but they also exhibit human-like flaws, indicating the complexity and potential of intelligence in artificial systems.


Note: The lecture was interrupted by time constraints for a slideshow, as indicated in the original text.

