Moravec’s Paradox
Over lunch, I watched Chelsea Finn discuss Moravec’s Paradox with people across age and skill groups, from kids to expert researchers. It is the observation that tasks that seem easy for humans tend to be hard for robots, while tasks that seem hard for humans tend to be easy for robots. For example, physical tasks like stacking plates and folding laundry are easier for humans than for robots, but multiplying large numbers is harder for us. (The funny thing is that I am writing this within Cursor, which automatically chucked in the sentence “Human intelligence is the ability to do things that are hard to do, and artificial intelligence is the ability to do things that are easy to do.” as a sidenote. I think it is a good way to think about the difference between human intelligence and artificial intelligence, but it is hard to repeatedly do easy things.) It is something I had noted when we saw Lee Sedol beaten by AlphaGo: where was the robot that actually placed the stones?
Overall, I thought the video was very engaging, but I found a couple of points to be noteworthy. (Hence, the note 😜.) Chelsea told Stanford researcher Jennifer Grannen that a new class of “learned simulators” should be built that uses only real-world data to recreate physics; this is, of course, something that caught my eye as a dynamicist working on simulations based on first-principles physics. I can see the point she is trying to make, as we have seen plenty of benefits from using data in developing large language models, but I am also generally a bit skeptical of the idea of using data for everything when sensors are inherently noisy. If all we care about is outputs, then this is fine, but separating process (or signal) from noise is a deeper thing that I think we will lose. Perhaps I am also old-fashioned, but the idea of forfeiting to computers our ability to develop abstractions (based on measurement-free human senses and observations) is also a bit scary. (Not in a way that makes me fear for the role of humans, but in that it will lead to a loss of taste and will eventually kill creativity.)
The other thing I loved was Michael Frank describing his work on computationally modeling babies’ cognition to answer the question “How do babies become human?” I also appreciated his notion that senses (e.g., vision and sound) are low-level capabilities compared to high-level tasks (e.g., deliberation), which are mediated by memory and language. As I write this alongside AI-suggested sentences, it strikes me that even if those sentences are human-like, they are genuinely far from being personalised to me, because there is a black box between my mind/brain and my hands and another one between my hands and the notebook or computer. On that note, I will end with a fun little segue that just came to my mind as I wrote that last sentence: here is Neal Stephenson talking to Lex Fridman about writing by hand being superior to typing on a computer, since its slower output keeps pace with his cognitive speed. And the thing I have noticed is that it is hard to turn off the editor within me when I should merely be focused on writing.