4 Comments

I am more in the camp that physical embodiment is not necessary for intelligent systems. However, an agentic approach with world models and no-free-lunch virtual experiences seems like a plausible way to create fully intelligent agents that live in simulated environments. By that I mean a setup where the agent's actions in the environment interact in a pullback- or pushout-style framework, similar to deep reinforcement learning but a bit more abstract, in that it is the actions that interact and not necessarily the agent itself. This creates an abstract map that we can think of as representing the "intuition" that current models seem to lack. Of course, fully interactive real-world applications require robotics, but I do not see this as a prerequisite for creating artificial intelligence.
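To make that idea slightly more concrete, here is a minimal sketch in Python, assuming a toy integer-valued state; the names WorldModel, effect, interact, and step are hypothetical illustrations, not an established framework. The only point it demonstrates is that the joint effect of two actions need not be the sum of their individual effects.

```python
# A minimal sketch of "actions that interact", assuming a toy
# integer-valued state. All names (WorldModel, interact, step) are
# hypothetical illustrations, not an established API.
from itertools import product


class WorldModel:
    """Toy deterministic world model: the state is a single integer."""

    def __init__(self, state=0):
        self.state = state

    def effect(self, action):
        """Base effect of a single action taken in isolation."""
        return {"inc": +1, "dec": -1, "noop": 0}[action]

    def interact(self, a, b):
        """Joint effect of two simultaneous actions.

        Deliberately *not* the sum of the individual effects: it is
        the actions that interact, not the agents taking them.
        """
        joint = self.effect(a) + self.effect(b)
        if a == b == "inc":
            joint += 1  # hypothetical reinforcement between like actions
        return joint

    def step(self, a, b):
        self.state += self.interact(a, b)
        return self.state


# Enumerating the joint effects yields a small "abstract map" of how
# actions compose -- a crude stand-in for the intuition-like structure
# the comment points at.
model = WorldModel()
for a, b in product(["inc", "dec", "noop"], repeat=2):
    print(f"{a} + {b} -> {model.interact(a, b)}")
```

In a real system the interaction table would presumably be learned rather than hard-coded, and that learned table is what would play the role of the "abstract map" described above.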


I know little about this area but plan on exploring it more at Risk & Progress soon. Honestly, the embodiment hypothesis does not ring true to me. That is, I don't think you need a "body" so much as you need senses. Even then, human senses are very limited, and look how far we have gotten.


This is currently a hotly debated topic in the neurosciences. The debate revolves around whether the brain's fundamental role is to generate our conscious experience of the world or to facilitate our movement and interaction within it. Of course, the proposed dichotomy may be false, as these two views are not necessarily opposing or contradictory.


What I know about this comes from Yann LeCun. He seems to think AI definitely needs senses.

Perhaps an AI having a body would help it understand reality and the rules that govern it.
