A groundbreaking study from MIT is shaking up decades of neuroscience wisdom, revealing that the brain’s “object recognition” pathway may also play a significant role in processing spatial information, an insight that could revolutionize our approach to learning, artificial intelligence, and brain health around the world, including here in Thailand.
For years, scientists have believed that the ventral visual stream, a key pathway in the human brain, is dedicated to recognizing objects, like a Starbucks cup on a Bangkok Skytrain or a rambutan vendor at Chatuchak Market. This idea shaped neuroscience textbooks and inspired computer vision systems now used in everything from smartphones to smart cars.

Yet new research led by MIT graduate student Yudi Xie suggests the story is far more nuanced. The team’s findings, presented at the prestigious International Conference on Learning Representations, show that when deep learning models are trained not only to identify objects but also to estimate spatial features such as location, rotation, and size, they match neural activity in the ventral stream just as accurately as traditional object recognition models. In other words, the ventral stream might be wired for much more than recognizing faces or products: it could be a multifaceted toolkit for seeing and interacting with the world.
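For readers curious about the machine-learning side, the sketch below shows in broad strokes what such multi-task training can look like: a shared vision backbone with one output head for object category and additional heads for spatial properties. The backbone, head sizes, and equal loss weighting here are illustrative assumptions for this sketch, not the architecture or training setup the MIT team actually used.

```python
# Minimal sketch of multi-task training on object identity plus spatial features.
# Illustrative only: the backbone, head sizes, and loss weights are assumptions,
# not the model reported in the MIT study.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskVisionModel(nn.Module):
    def __init__(self, num_classes: int = 100):
        super().__init__()
        # Small convolutional backbone shared by all tasks (assumed architecture).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One head per task: object category plus spatial properties.
        self.category_head = nn.Linear(64, num_classes)  # what the object is
        self.location_head = nn.Linear(64, 2)            # x, y position
        self.rotation_head = nn.Linear(64, 1)            # in-plane rotation
        self.size_head = nn.Linear(64, 1)                # apparent size / scale

    def forward(self, images: torch.Tensor) -> dict:
        features = self.backbone(images)
        return {
            "category": self.category_head(features),
            "location": self.location_head(features),
            "rotation": self.rotation_head(features),
            "size": self.size_head(features),
        }

def multitask_loss(outputs: dict, targets: dict) -> torch.Tensor:
    # Classification loss for identity, regression losses for spatial features.
    # Equal weighting is an arbitrary choice made for this sketch.
    return (
        F.cross_entropy(outputs["category"], targets["category"])
        + F.mse_loss(outputs["location"], targets["location"])
        + F.mse_loss(outputs["rotation"], targets["rotation"])
        + F.mse_loss(outputs["size"], targets["size"])
    )
```

The design point the study highlights is simply that spatial objectives can sit alongside recognition in one network; whether the shared features then resemble ventral-stream activity is an empirical question the researchers tested against brain recordings.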