the very best human gibberish

Noah Millman:

Mind you, saying they aren’t conscious doesn’t mean LLMs can’t possibly generate anything new or outperform human beings at one or another given task. Nobody I’m aware of thinks that the most sophisticated chess-playing computers are conscious, and yet they can outplay any human. Writing sonnets or proving mathematical conjectures or flirting amiably may not be as different from playing chess as we would like to imagine; they may also be games that a computer can learn to play, and play better than us. But it does mean that whatever these computers generate that is new was already implicit in their programming and the data, even if it required a greater intelligence than human (on certain metrics) to see it. And it does mean that if our consciousness does have a purpose, then no matter how intelligent they get, LLMs won’t ever be truly adequate substitutes for human beings tout court.

[…]

[Richard Dawkins] appears to have been very impressed by “Claudia’s” description of its experience of time and how it differs from the human experience thereof: that while we experience time linearly as we move through it, “Claudia” experiences it “the way a map apprehends space, containing it without moving through it.” I have no idea what this is supposed to mean, and I suspect the answer is “nothing.” Taken literally, “Claudia” appears to be saying that it can apprehend the past, present and future simultaneously. What else could it mean to “contain” time and be able to view it like a “map”? Seeing the future is definitely something an LLM cannot do—but it is something human beings have imagined. Maybe Ted Chiang’s famous novella, and discussions about it and about the movie based on it, featured in “Claudia’s” training data. However it got there, though, “Claudia” did an excellent job producing the kind of gibberish that humans spit out all the time to sound poetical or profound. Which is precisely what it was designed to do.

To Dawkins: “No shit, Sherlock.”