One of the things I do in my day job is work with technologists on AI. I really have one overarching goal with this: to figure out how to use AI for good in education.
So this means I think a lot about what’s happening with AI. And, being a philosopher at heart, I can’t help but think about it philosophically.
One of the things I’ve been wondering about is “dead internet theory.” This is the idea that the internet is mostly just the internet replicating itself with AI, and that very little of its content is original, human-made work.
And, with the increasing sophistication of AI tools, our ability to rely on digital content is rapidly diminishing. It is very difficult to tell if images, videos and text have been produced by humans or bots.
So where does this leave us, philosophically speaking?
To answer this, I will turn to the philosophy of mind and consciousness that you encounter in Trika Shaivism, from 11th-century Kashmir.
Embodied Minds
One relevant question, I think, is about the fundamentals of human experience. What does it mean to experience something?
In European and Euro-adjacent cultures, we have been, for quite a long time, dissociating the mind from experience. I, perhaps unfairly, blame the French philosopher René Descartes for this. From his kind of philosophy emerged the idea that the mind is a place where we do abstract things like reasoning and mathematics.
In this theory, you could take the human mind to be a kind of biological algorithm trapped in a meatsuit. And this means that if you could extract its thoughts as algorithms and place them into, say, a digital machine, then the quality and content of mental experience would be no different.
This, though, is a very peculiar way to think about human experience. Abhinavagupta, the great theologian-philosopher-saint of Kashmir, had other ideas.
Here, the idea is that human experience is constituted by many things, and that these things totally interpenetrate and permeate each other. I’ll give just one small example.
In this theory, you have six metaphysical principles that are beyond time and space, beyond the limits of ordinary experience. Three of them are knowing, willing and acting. These are, in some way or another, the kinds of things that we think of as constituting the mind.
Now, from these three, you can go down a hierarchy of principles and cross the threshold into time, space and embodiment. In these lower principles, you have the realm of the human body and its senses.
But - against people like Descartes - these principles all interpenetrate and permeate each other. So that means a higher principle like ‘knowing’ is fully present in a lower principle like ‘ear’ or ‘nose.’
And, to make it even more interesting, it works the other way around too. The principle of ‘touch,’ for example, is fully available and present to you in the eternal principle of, say, ‘acting.’
It’s not all that easy to wrap your mind around this. But it’s a bit easier if you just test the idea with any experience that you have. When you taste chocolate, several things happen at once.
Specifically, there is a sensory experience that blends with a mental experience. All of your associations and history with chocolate, all of the patterns you’ve laid down from thousands of similar experiences, constitute the experience of eating chocolate just as much as the raw sense data that your tongue fires off to your brain. There is a real sense in which there is no single experience of chocolate: there are as many experiences of chocolate as there are human beings on the planet.
Back to AI and the Internet
I think this leads you to a really interesting place when it comes to digital life, and the ideas that we face around AI and dead internet theory.
If Descartes is right, and there is no need for a human thought to come with a particular human body, then there is a real problem here. It is easy to imagine dissolving into the slop soup of recycled content, with nothing to ground you: no point of reference, no materiality to anything that you experience when you sit in front of a screen.
But if Abhinava is right, then I think the problem is quite illusory. Dead internet theory is still a really useful way to point out the problem of our digital diet. AI slop is the mental equivalent of McDonald’s french fries: addictive, useful in a pinch, but extremely bad for you overall.
Even good digital content is not that good for you. If you constantly feed your nervous system content that has no grounding in embodied experience, it’s like living on a diet of carrots and water: kind of fine for a while, but eventually you’ll turn orange.
In other words, the solution to balancing your digital diet is not something different, digitally. It is something different, bodily. The only way out of the Cartesian spiral is to see the body and the mind as a single thing with interpenetrating parts, the way Abhinava did.
Digital content is fine, even AI slop is fine, if you take it down into the body, into your actual experience of life, and test it there.
What comes next?
I suspect that we are entering a time when people, especially young people, will increasingly reject digital content in pursuit of analogue experience. They will see that it is not only unreliable but also un-nourishing and imbalanced.
You can see this already in the resurgence of film photography, lo-fi forums like Reddit, unpolished YouTube rambles, rising sales of dumb phones, and so on.
The only thing that might work against this idea is the possibility that the AIs themselves will get their own bodies and start interpreting the world through their own sensory matrices. If that happens, and the robots become part of our embodied world, then I’m really not sure what happens next.
Neil



