
New MIT Study Challenges Long-Held Beliefs About How the Brain Sees the World


A groundbreaking study from MIT is shaking up decades of neuroscience wisdom, revealing the brain’s “object recognition” pathway may also play a significant role in understanding spatial information—an insight that could revolutionize our approach to learning, artificial intelligence, and brain health around the world, including here in Thailand.

For years, scientists have believed the ventral visual stream, a key pathway in the human brain, is dedicated to recognizing objects, whether a Starbucks cup on a Bangkok Skytrain or a rambutan vendor at Chatuchak Market. This idea shaped not only neuroscience textbooks but also the computer vision systems now used in everything from smartphones to smart cars. Yet new research led by MIT graduate student Yudi Xie suggests the story is far more nuanced. The team's findings, presented at the International Conference on Learning Representations, show that deep learning models trained to estimate spatial features such as an object's location, rotation, and size, rather than to identify what the object is, mirror neural activity in the ventral stream just as accurately as traditional object-recognition models. In other words, the ventral stream may be wired for much more than recognizing faces or products: it could be a multifaceted toolkit for seeing and interacting with the world.

Why does this matter for Thai readers? The division of labor between brain pathways has shaped how we diagnose and treat brain injuries, how we approach childhood learning disabilities, and even how we design technologies for daily life. For example, Thai students struggling with visual-spatial reasoning might be helped by interventions historically reserved for object recognition, and vice versa. “This leaves wide open the question about what the ventral stream is being optimized for,” Xie told MIT News, suggesting the need for a more holistic view of visual processing in both research and classrooms worldwide (source: MIT News).

To appreciate the significance, it helps to revisit the brain’s visual system. Since the early 1980s, scientists have described two main “roads” for visual information: the ventral stream (“what is it?”) and the dorsal stream (“where is it?”). Thai medical students still learn this “two-stream hypothesis” in neurology and psychology classes. Over the past decade, powerful computational models called convolutional neural networks (CNNs) have been trained to replicate the ventral stream’s achievements, fueling advances in computer vision and artificial intelligence—from automatic quality checks in Thai rice mills to facial recognition at Suvarnabhumi Airport.

Until now, these CNNs were almost always optimized for categorizing objects. Their design reflected the deeply held assumption that the ventral stream’s sole purpose was to recognize what things are. Yet the new MIT experiment took a different approach. The research team created a massive dataset of synthetic images—think tea kettles, calculators, and other everyday items—superimposed on diverse backgrounds and labeled with precise information about their orientation, location, and size. They then trained CNNs on these “spatial tasks.” Astonishingly, these models predicted brain activity in the ventral stream as well as, or better than, models trained purely for object recognition.
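To make the idea of a "spatial task" concrete: instead of predicting a category label, the model predicts continuous values such as an object's position, rotation, and size. The sketch below is a deliberately simplified stand-in, not the study's pipeline; the synthetic features, the linear readout, and all names are illustrative assumptions. It shows the core mechanic of training one model jointly on several spatial targets with a single mean-squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image features and spatial labels:
# four targets per image (x location, y location, rotation, size).
n_samples, n_features, n_targets = 500, 32, 4
X = rng.normal(size=(n_samples, n_features))
true_W = rng.normal(size=(n_features, n_targets))
Y = X @ true_W + 0.01 * rng.normal(size=(n_samples, n_targets))

def loss(W):
    """Joint mean-squared error across all four spatial targets."""
    return np.mean((X @ W - Y) ** 2)

# A single linear "readout" trained by gradient descent on the joint loss.
W = np.zeros((n_features, n_targets))
lr = 0.01
before = loss(W)
for _ in range(200):
    grad = 2 * X.T @ (X @ W - Y) / n_samples
    W -= lr * grad
after = loss(W)
print(f"loss before: {before:.3f}, after: {after:.3f}")
```

A real version of the experiment would swap the linear readout for a deep CNN and the random features for rendered images, but the training objective has the same shape: one network, several spatial quantities regressed at once.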

What’s more, the researchers found that early and middle layers of spatial-task-trained CNNs are almost indistinguishable from those of classic object-recognition models. “It seems like they learn some very similar or unified representation in the early to middle layers,” Xie observed. Only at the final stages do these models conclusively diverge to focus on their respective tasks. This discovery suggests that the ventral stream may support a broader range of visual intelligence than previously understood—and that our brains, and perhaps our students’ brains, may be learning “more than one thing at a time” from the same neural hardware.
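Claims that two networks "learn similar representations" in a given layer rest on a quantitative similarity measure. The study's exact metric is not reproduced here; the snippet below sketches one standard choice for such comparisons, linear centered kernel alignment (CKA), which scores two activation matrices from 0 (unrelated) to 1 (identical up to rotation and scaling):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n x p1) and Y (n x p2).

    Rows are stimuli (images), columns are units. Returns a value in
    [0, 1]; 1 means the two representations match up to an orthogonal
    transform and isotropic scaling.
    """
    X = X - X.mean(axis=0)  # center each unit's activations
    Y = Y - Y.mean(axis=0)
    # Squared Frobenius norm of the cross-covariance, normalized by
    # the self-covariance norms of each representation.
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(1)
acts = rng.normal(size=(500, 10))                         # "layer A" activations
Q = np.linalg.qr(rng.normal(size=(10, 10)))[0]            # random orthogonal matrix
rotated = acts @ Q                                        # same geometry, new basis
unrelated = rng.normal(size=(500, 10))                    # independent "layer"

print(round(linear_cka(acts, acts), 3))       # -> 1.0
print(round(linear_cka(acts, rotated), 3))    # -> 1.0 (invariant to rotation)
print(round(linear_cka(acts, unrelated), 3))  # low, near 0
```

The rotation invariance is the point: two layers can look identical under CKA even if no individual unit matches, which is the sense in which differently trained models can share "a very similar or unified representation."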

Prominent neuroscientists James DiCarlo and Joshua Tenenbaum, leaders in the field and co-authors of the study, believe these insights will not only reshape our neuroscientific theories, but also have practical implications for artificial intelligence. Thai start-ups and university labs working on robotics, AI-powered translation, smart farming, and surveillance may need to rethink how they “teach” machines to see—not just for object recognition, but for tasks requiring a nuanced grasp of space and context, crucial in the bustling, dynamic environments of Bangkok or Chiang Mai.

The findings also challenge educators and clinicians to reconsider interventions for Thai children facing learning difficulties. Visual-spatial skills, such as understanding maps in social studies class or visualizing geometric relationships in math, could be improved with training regimes that draw on both object-recognition and spatial-processing strengths, rather than treating these skills as isolated. The same logic might apply to rehabilitation after stroke or traumatic brain injury. For instance, traditional assessments in Thai hospitals often test object naming and line orientation separately; this research suggests such boundaries may be less distinct at the brain level than assumed.

From a cultural perspective, Thailand's emphasis on rote memorization in education ("memorize, memorize, memorize") could be complemented with more hands-on spatial learning activities. Popular Thai crafts such as fruit carving, umbrella making, and shadow puppetry inherently blend object and spatial skills, perhaps unconsciously harnessing the brain's dual talents identified in the MIT study.

Looking forward, this research could contribute to early detection of learning disabilities and dementia in Thailand’s aging society, as more comprehensive brain-based tests are developed. It could also pave the way for smarter computer-vision systems in traffic management, agriculture, and even healthcare imaging, where recognizing both what and where an object is can be equally critical. The research team plans to refine their models further to distinguish subtle differences and hopes such work will sharpen our understanding of how the brain optimizes its visual toolkit.

For Thai readers, this new evidence highlights the importance of nurturing both object and spatial perception abilities, whether in the classroom, workplace, or everyday life. Teachers and parents can experiment with games and activities that challenge students not only to name things but also to manipulate, rotate, and locate them in space: think jigsaw puzzles, origami, or virtual-reality learning modules.

As Thailand’s society grows more technology-dependent and its workforce increasingly interfaces with AI systems, understanding and leveraging the brain’s full visual potential—and not just the ability to “see what something is”—could give Thais an intellectual and economic edge.

Source: MIT News

