Thanks to LLMs, more smart people than ever believe that 'thinking' is a linear, linguistic process. As far as I can tell, this stems from a deep conflation between logic and the nature of consciousness1, driven by a hyper-rationalist philosophy that aims to 'explain' reality... yet, each attempt at explanation generates more to explain. Consciousness has been slingshot into the spotlight in the tech industry and it’s clear most of us have not read the prerequisite material.
"Hey, can we just pretend that it's all words? Because that's easier to think about."
- Average LLM Advocate
When I look out of my eyes, I see the world in its myriad forms, not words. When I reach to pick up my coffee cup I do not do so using a series of well-formulated commands; I simply move my hand. Meditation practice makes it obvious that the 'words' are mostly the residue, the narration, of a more fundamental process.
Thinking is feedback, and our entire body is in feedback with our entire environment, at all times2. What we can capture in language is but a fraction of the overall space of human experience, and over-reliance on it creates a massive bottleneck in your thought process. Top performers across fields all say the same thing: “get out of your head.”
The story we tell ourselves about reality has surprisingly little to do with the real world.
Reality denial is common amongst AGI-heads3. Here's Ilya Sutskever, known AGI hype-monger:
So, I’d like to [...] give an analogy that will hopefully clarify why more accurate prediction of the next word leads to more understanding, real understanding. […] Say you read a detective novel. It’s like complicated plot, a storyline, different characters, lots of events, mysteries like clues, it’s unclear. Then, let’s say that at the last page of the book, the detective has gathered all the clues, gathered all the people and saying, “okay, I’m going to reveal the identity of whoever committed the crime and that person’s name is”. Predict that word. […] Now, there are many different words. But predicting those words better and better and better, the understanding of the text keeps on increasing. GPT-4 predicts the next word better.
I really could not disagree more, and I wonder if Ilya truly believes this4. In the task above, we are at best statistically modelling the structure of detective novels, but what do these tell us of real mysteries? If anything, our novels are guilty of portraying the same ‘tidy reality’ we prefer to hide in.
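To make "statistically modelling the structure" concrete, here's a toy sketch, entirely my own illustration and nothing like GPT-4's actual architecture: a bigram counter that "predicts the next word" from nothing but word-pair frequencies in whatever text you feed it.

```python
from collections import Counter, defaultdict

# Toy bigram 'language model': count which word follows which in the
# training text, then predict the most frequent successor.
def train_bigrams(text):
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    successors = counts.get(word.lower())
    # Most common successor seen in training, or None if the word never
    # appeared - the model cannot step outside the text it was given.
    return successors.most_common(1)[0][0] if successors else None

corpus = ("the detective gathered the clues and the detective gathered "
          "the people and the detective named the culprit")
model = train_bigrams(corpus)
print(predict_next(model, "detective"))  # -> 'gathered' (seen twice vs. 'named' once)
```

However much better those counts get, the predictor stays bounded by the regularities of the text it has already seen, which is the whole point.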
What if the solution has never occurred before? What if the detective had received incorrect clues in the story? What if they arrest the wrong person5?! Deduction and reasoning are useless without continuous reality-testing; the model would be doomed to confusion as a result.
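By contrast, here's the sort of loop I have in mind when I say reality-testing. Again, a toy sketch of the idea only (the 'world', 'belief' and 'learning_rate' here are all made up for illustration): a guess that is repeatedly checked against what actually happens and corrected when it's wrong.

```python
import random

# A sketch of reality-testing: predict, observe what actually happens,
# and update the belief when the prediction is wrong. The 'world' is a
# stand-in: a biased coin whose bias the agent does not know in advance.
def world():
    return 1 if random.random() < 0.8 else 0  # hidden regularity

belief = 0.5          # current estimate of how often the outcome is 1
learning_rate = 0.05

for _ in range(500):
    outcome = world()                 # confront reality
    error = outcome - belief          # how wrong the belief was
    belief += learning_rate * error   # nudge the belief toward what happened

print(f"belief after feedback: {belief:.2f}")  # drifts toward ~0.8
```

Take away the `outcome = world()` step and there is nothing to pull the belief back toward reality, no matter how elaborate the reasoning stacked on top.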
LLMs today have no recovery path from becoming confused, and so, a question: how do you, as a human, become un-confused? Further, how do you know you’re right?
✌️ Ben
Driven by a rejection of the imprecise nature of the body, mirroring a rejection of an imprecise model of reality
Importantly, that’s non-linear. We ‘think’ across multiple scales of space, time and abstraction at once - there is no single, centralised ‘token stream’ or ‘context window’ - don’t even get me started on salience
Those who fall into the ‘disembodied thinking’ rabbit-hole develop a contempt for diversity of experience; they attempt to normalize reality to make it understandable and would, if able, try to morph the real world into their fantasy
Look, I get that the world is hard to accept, but that’s the path to fascism.
I'm not sure they're working with a coherent definition of 'understanding'; they don't seem to give one in the interview
This is a particularly stupid example from Ilya because, in reality, we often fail to solve mysteries or solve them incorrectly, and cases can take decades for the truth to emerge - which is a cruel emotional arc for the people in the story - something conveniently missing from the picture Ilya paints.