Why AI Can't Tie Its Shoes But Can Diagnose Cancer


Discover why robots that diagnose diseases can't fold laundry and what this reveals about intelligence itself

AI excels at complex analytical tasks like diagnosing cancer but struggles with 'simple' physical tasks like tying shoes.

This paradox exists because evolution spent millions of years perfecting our basic motor skills while abstract reasoning is evolutionarily new.

AI thrives on pattern recognition in data but lacks the embodied intelligence that comes from physical interaction with the world.

Babies learn physics through experimentation and sensory feedback that current AI systems simply cannot replicate.

Understanding this paradox reveals that intelligence isn't singular but exists in many forms with different strengths.

Your smartphone can recognize your face in milliseconds, yet the world's most advanced robots struggle to fold a towel. AI systems can detect cancer cells invisible to human eyes, but they'd lose a shoelace-tying contest to any five-year-old. This bizarre reality isn't a glitch—it's revealing something profound about intelligence itself.

Welcome to Moravec's Paradox, where the 'hard' problems are easy and the 'easy' problems are nearly impossible. Understanding this counterintuitive truth doesn't just explain AI's quirky limitations—it reshapes how we think about human intelligence, evolution, and what makes our everyday abilities secretly miraculous.

Evolution's Hidden Homework Assignment

Hans Moravec, the roboticist who spotted this paradox in the 1980s, realized something shocking: the skills we consider 'basic' took evolution billions of years to perfect, while our 'advanced' abilities like chess or calculus are evolutionary newborns, barely a few thousand years old. It's like discovering that walking is actually harder than quantum physics—at least from nature's perspective.

Think about it: single-celled organisms were navigating their environment 3.5 billion years ago. Fish were coordinating fins and avoiding obstacles 500 million years ago. Our ancestors were grasping branches and judging distances 60 million years ago. But abstract reasoning? Mathematics? Those showed up basically yesterday in evolutionary time. Your ability to catch a ball represents more accumulated 'research and development' than all of human civilization combined.

This is why a computer beat the world chess champion in 1997, but we're still waiting for a robot that can reliably pick up a sock from your bedroom floor. Chess has rules. Socks don't. They're floppy, unpredictable, and exist in a world of infinite variations. Evolution spent eons teaching us to handle floppy, unpredictable things—AI has had about 70 years to figure it out from scratch.

Takeaway

When judging AI capabilities, flip your assumptions: tasks that feel automatic to you are probably AI's biggest challenges, while tasks that make your brain hurt might be trivial for a computer.

The Pattern-Spotting Superpower

Here's where AI gets its revenge on evolution: pattern recognition in data. While we spent millions of years learning to spot tigers in tall grass (useful!), we never evolved to spot statistical patterns in spreadsheets with 10,000 rows. AI doesn't need evolution—it just needs examples. Lots of examples.

Show an AI system a million images of skin cells—500,000 healthy, 500,000 cancerous—and it starts noticing differences humans can't even perceive. It's not 'smarter' than a dermatologist; it's more like giving someone superhuman vision that can see in 50 dimensions instead of three. The AI isn't diagnosing cancer the way doctors do—it's finding mathematical patterns in pixel arrangements that happen to correlate with disease.

This is why AI excels at any task that can be turned into pattern-matching: recognizing faces, transcribing speech, predicting customer behavior, or detecting fraud. These aren't 'intelligent' in the way we usually think about intelligence. The AI doesn't 'understand' cancer any more than your calculator 'understands' multiplication. But when the pattern is consistent and you have enough examples? AI becomes almost magical, finding needles in haystacks that humans didn't even know existed.
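To make that concrete, here is a deliberately tiny sketch of what 'learning from labeled examples' means in practice. It is not how a real medical imaging system is built: the data below is fabricated random numbers, the 20 'features' merely stand in for measurements extracted from images, and the model is a plain logistic regression from scikit-learn. The point is only the shape of the process, where the system fits a statistical boundary between examples labeled 0 and 1, and nothing in it knows what 'healthy' means.

```python
# Toy illustration of supervised pattern-matching: the model never
# "understands" the data, it only finds a statistical boundary between
# examples labeled 0 ("healthy") and 1 ("abnormal").
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fabricated feature vectors standing in for image-derived measurements.
# The two classes differ only by a small shift in their averages.
healthy = rng.normal(loc=0.0, scale=1.0, size=(5000, 20))
abnormal = rng.normal(loc=0.4, scale=1.0, size=(5000, 20))
X = np.vstack([healthy, abnormal])
y = np.array([0] * 5000 + [1] * 5000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The "learning" is nothing more than fitting a decision boundary
# to the labeled examples it was shown.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Swap the fabricated numbers for a million labeled cell images and a far bigger model, and the logic is the same: define success with data, and the machine finds the pattern.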

Takeaway

AI thrives when problems can be converted into pattern recognition with clear examples—if you can't define success with data, current AI will struggle more than a toddler.

The Babies Know Something We Don't

Watch a one-year-old for five minutes and you'll witness intelligence that makes GPT-4 look primitive. They're constantly experimenting: 'What happens if I drop this? What if I squeeze it? Can I fit it in my mouth?' They're not following a training dataset—they're actively building a physics engine in their heads through pure, chaotic experimentation.

This is embodied intelligence, and it's AI's Achilles' heel. A baby learns that water is wet, cups can spill, and towers fall down not through millions of labeled examples but through maybe a dozen messy breakfast experiences. They understand cause and effect, object permanence, and basic physics not as abstract concepts but as lived experiences. Their entire body is a sensor array feeding back information about texture, weight, resistance, and consequence.

Current AI systems are like brilliant minds trapped in jars—they can think about the world but can't touch it, can't break it, can't get feedback from poking it with a stick. Even our best robots are working with sensors and actuators that are primitive compared to a human hand, which has roughly 17,000 touch receptors working in concert with instantaneous visual feedback and a lifetime of muscle memory. Until AI can learn by doing, not just by observing, tying shoes will remain harder than diagnosing cancer.
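For contrast, here is an equally tiny sketch of learning by doing rather than by labeled example. The 'world' below is a made-up dictionary of success probabilities and the action names are invented; no real robot works this way. The structure is the point: the agent starts with no dataset at all and improves only from the feedback its own actions produce, a crude echo of the baby's drop-it-and-see experiments.

```python
# Toy contrast to the classifier above: nothing here is labeled in advance.
# The agent learns only from the consequence of each action it tries,
# the way a baby learns by dropping, squeezing, and spilling things.
import random

random.seed(0)

ACTIONS = ["grip_softly", "grip_firmly", "throw"]
# Hidden "physics" of this made-up world: probability that each action
# picks up the object without dropping or breaking it.
TRUE_SUCCESS = {"grip_softly": 0.3, "grip_firmly": 0.8, "throw": 0.05}

value = {a: 0.0 for a in ACTIONS}   # the agent's running estimates
counts = {a: 0 for a in ACTIONS}

for trial in range(2000):
    # Mostly repeat the best-known action, occasionally try something new.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value[a])

    reward = 1.0 if random.random() < TRUE_SUCCESS[action] else 0.0

    # Update the estimate from feedback alone: no labels, no dataset.
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print("learned preferences:", {a: round(v, 2) for a, v in value.items()})
```

Even this cartoon version needs thousands of trials to sort three actions; a real body moving real objects faces a far messier feedback loop, which is part of why the physical world remains so hard.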

Takeaway

The next breakthrough in AI won't come from bigger models or more data—it will come from giving AI bodies and letting them learn like babies do, through beautiful, instructive failure.

The paradox isn't really a paradox once you shift perspective. AI hasn't failed at being human—it's succeeded at being something genuinely alien, with strengths and weaknesses that mirror our own in reverse. It's the universe's way of showing us that intelligence isn't a ladder with humans at the top, but a vast space of different possible minds.

Next time you effortlessly tie your shoes, take a moment to appreciate the computational miracle happening in your fingers. You're performing a feat that would stump a supercomputer, using evolutionary software refined over millions of years. And next time AI does something amazing, remember: it's not becoming more human—it's teaching us that there are other ways to be intelligent.

