
Child Development

Little Minds, Big Learning: 15-Month-Old Infants Learn New Words for Objects from Conversations Alone

A new study by developmental scientists offers the first evidence that infants as young as 15 months can identify an object they have learned about from listening to language — even if the object remains hidden.


Researchers at Northwestern University and Harvard University have made a groundbreaking discovery about the way infants learn new words for objects. A study led by Sandra Waxman, senior author and Louis W. Menk Chair in Psychology, found that 15-month-old babies can identify an object they’ve learned about from listening to language, even when it is not present in front of them.

Imagine a baby playing with blocks on the floor while listening to parents talk about kumquats, a novel fruit. Can the baby form an initial representation or “gist” about what kumquat means? The researchers sought to answer this question and more.

The study involved 134 infants, divided into two age groups: 12 months and 15 months. In a three-part task, babies were first presented with words paired with images of familiar objects (e.g., apple, banana). Next, they heard a new word while an image of a novel object (e.g., kumquat) remained hidden from view. Finally, two novel objects — a fruit and an artifact — were revealed, and researchers measured which one infants looked at when they heard the new word again.

The results showed that 15-month-olds looked longer at the novel fruit than the novel artifact, indicating that they had used context clues to identify which object was most likely the one referred to by the new word. This is significant because it suggests that even babies who are just beginning to say their first words can learn from language and form mental representations of objects and events never witnessed directly.

Waxman explained, “We’re asking whether infants, too, can use the conversational contexts in which a word occurs to begin to learn its meaning.” The study provides new insight into the developmental origins of the human capacity to learn about things that are not perceptually present.

The researchers’ findings highlight the power of language in infants’ daily lives. Babies often hear words that they don’t yet understand, and that they cannot “map” immediately to an object or event. However, this study shows that by 15 months, infants can spontaneously use linguistic context to build a gist of a new word’s meaning that will support subsequent learning.

As Waxman noted, “When we hear new words, like ‘kumquat’ in conversation when there are no kumquats around, we don’t waste the opportunity to home in on its meaning. We now know this is also true about tiny babies.”

Autism

The Thalamic Feedback Loop: Unveiling the Brain’s Secret Pathway to Sensory Perception

Sometimes a gentle touch feels sharp and distinct, other times it fades into the background. This inconsistency isn’t just mood—it’s biology. Scientists found that the thalamus doesn’t just relay sensory signals—it fine-tunes how the brain responds to them, effectively changing what we feel. A hidden receptor in the cortex seems to prime neurons, making them more sensitive to touch.


Have you ever wondered why a single sensory stimulus doesn’t always elicit the same sensation? Why touching an object outside our field of vision might be enough to identify it… or not? For decades, neuroscientists have been trying to understand this phenomenon. Recently, researchers from the University of Geneva (UNIGE) made a groundbreaking discovery that could explain why we perceive sensory information in varying degrees.

The study, published in Nature Communications, revealed a previously unknown form of communication between two regions of the brain: the thalamus and the somatosensory cortex. This new pathway is called the thalamic feedback loop, and it plays a crucial role in modulating the excitability of cortical neurons.

When we touch something, sensory signals from receptors in the skin are interpreted in a specialized brain region, the somatosensory cortex. On their way there, these signals pass through a complex network of neurons, including the thalamus, a relay station that serves as a crucial structure in the brain.

However, what’s remarkable is that the thalamus also receives feedback from the cortex, forming a loop of reciprocal communication. This feedback loop is essential for adjusting our perception of sensory information. The researchers discovered that this loop can modulate the excitability of cortical neurons by making them more sensitive to stimuli.

The team used cutting-edge techniques such as imaging, optogenetics, pharmacology, and electrophysiology to record the electrical activity of tiny structures like dendrites. They found that glutamate released from thalamic projections binds to an alternative receptor located in a specific region of the cortical pyramidal neuron. This interaction alters its state of responsiveness, effectively priming it for future sensory input.

The implications of this discovery are profound. By demonstrating that a specific feedback loop between the somatosensory cortex and the thalamus can modulate the excitability of cortical neurons, the study suggests that thalamic pathways do not simply transmit sensory signals but also act as selective amplifiers of cortical activity.

This mechanism could contribute to understanding the perceptual flexibility observed in states of sleep or wakefulness when sensory thresholds vary. Its alteration might also play a role in certain pathologies such as autism spectrum disorders.

The discovery of this thalamic feedback loop opens new avenues for research and sheds light on one of the brain’s most complex secrets: how we perceive sensory information.


Child Development

Pain Relief Without Pills? VR Nature Scenes Activate Brain’s Healing Switch

Stepping into a virtual forest or waterfall scene through VR could be the future of pain management. A new study shows that immersive virtual nature dramatically reduces pain sensitivity almost as effectively as medication. Researchers at the University of Exeter found that the more present participants felt in these 360-degree nature experiences, the stronger the pain-relieving effects. Brain scans confirmed that immersive VR scenes activated pain-modulating pathways, revealing that our brains can be coaxed into suppressing pain by simply feeling like we’re in nature.


The use of virtual reality (VR) nature scenes has been found to relieve symptoms commonly experienced by individuals living with long-term pain, with those who felt more present during the experience showing the strongest effects. A recent study led by the University of Exeter discovered that immersive 360-degree nature films delivered via VR were almost twice as effective in reducing pain compared to 2D video images.

Long-term pain, typically defined as pain lasting more than three months, is notoriously difficult to treat. Researchers simulated this type of pain in healthy participants and found that nature VR had an effect similar to that of painkillers, one that endured for at least five minutes after the VR experience had ended.

Dr. Sam Hughes, Senior Lecturer in Pain Neuroscience at the University of Exeter, stated, “We’ve seen a growing body of evidence show that exposure to nature can help reduce short-term, everyday pain, but there has been less research into how this might work for people living with chronic or longer-term pain.” The study aimed to investigate the effect of prolonged exposure to a virtual reality nature scene on symptoms experienced during long-term pain sensitivity.

The study involved 29 healthy participants who were shown two types of nature scenes after experiencing electric shocks on their forearm, which simulated pain. On the first visit, researchers measured changes in pain over a 50-minute period following the shocks, showing how the healthy participants developed sensitivity to sharp pricking stimuli in the absence of any nature scenes.

On the second visit, researchers immersed the same participants in a 45-minute, 360-degree virtual reality experience of Oregon’s waterfalls, specifically chosen to maximize therapeutic effects. The scene was compared to a 2D screen experience. Participants completed questionnaires on their experience of pain after watching the scenes and on how present they felt in each experience.

On a separate visit, participants underwent MRI brain scans at the University of Exeter’s Mireille Gillings Neuroimaging Centre. Researchers administered a cold gel to elicit ongoing pain and then scanned participants to study how their brains responded.

The researchers found that the immersive VR experience significantly reduced the development and spread of feelings of pain sensitivity to pricking stimuli, and these pain-reducing effects were still present even at the end of the 45-minute experience. The more present the person felt during the VR experience, the stronger this pain-relieving effect was.

The fMRI brain scans also revealed that people with stronger connectivity in brain regions involved in modulating pain responses experienced less pain. The results suggest that nature scenes delivered using VR can help change how pain signals are transmitted in the brain and spinal cord during long-term pain conditions.

Dr. Sonia Medina, of the University of Exeter Medical School, stated, “We think VR has a particularly strong effect on reducing experience of pain because it’s so immersive. It really created that feeling of being present in nature – and we found the pain-reducing effect was greatest in people for whom that perception was strongest.” The team hopes the study will prompt further research into how exposure to nature affects our pain responses, so that nature scenes could one day be incorporated into pain-reduction strategies in settings like care homes or hospitals.


Alternative Medicine

Unlocking the Secrets of Cryorhodopsins: How Arctic Microbes Could Revolutionize Neuroscience

In the frozen reaches of the planet—glaciers, mountaintops, and icy groundwater—scientists have uncovered strange light-sensitive molecules in tiny microbes. These “cryorhodopsins” can respond to light in ways that might let researchers turn brain cells on and off like switches. Some even glow blue, a rare and useful trait for medical applications. These molecules may help the microbes sense dangerous UV light in extreme environments, and scientists believe they could one day power new brain tech, like light-based hearing aids or next-level neuroscience tools—all thanks to proteins that thrive in the cold and shimmer under light.


Imagine the breathtaking landscapes of Arctic regions, where glaciers shimmer like diamonds and snow-capped mountains touch the sky. For structural biologist Kirill Kovalev, these frozen wonders are not just a sight to behold but also home to unusual molecules that could control brain cells’ activity.

Kovalev, an EIPOD Postdoctoral Fellow at EMBL Hamburg’s Schneider Group and EMBL-EBI’s Bateman Group, is passionate about solving biological problems. He has been studying rhodopsins, a group of colorful proteins found in aquatic microorganisms that enable them to harness sunlight. However, Kovalev’s discovery of cryorhodopsin proteins in Arctic microbes has opened up new avenues for research.

These extraordinary molecules have a unique dual function – they can sense UV light and pass on the signal to other parts of the cell. This property is unheard of among other rhodopsins, making cryorhodopsins truly remarkable. Kovalev’s team used advanced spectroscopy to show that cryorhodopsins are sensitive to UV light and can act as photosensors, allowing microbes to “see” this radiation.

The discovery of cryorhodopsins has raised hopes for new treatments in neuroscience. These proteins could potentially be used to develop optogenetic tools, which manipulate brain cells using light. This technology has the potential to revolutionize the treatment of neurological disorders such as Parkinson’s disease and epilepsy.

Kovalev’s journey to uncover the secrets of cryorhodopsins was not without its challenges. He had to overcome technical difficulties in studying these molecules at a microscopic level, using advanced techniques like 4D structural biology and protein activation by light. His team also had to work in almost complete darkness to prevent damage to the sensitive proteins.

Despite these hurdles, Kovalev’s discovery has sparked excitement in the scientific community. His unique approach to understanding cryorhodopsins has revealed the fascinating biology of these extraordinary molecules and their potential applications in neuroscience. As researchers continue to study cryorhodopsins, they may uncover even more secrets about how these proteins adapt to cold environments and what benefits they could hold for human health.

The discovery of cryorhodopsins thus opens new avenues for research in neuroscience. As work on these proteins continues, researchers may uncover even more about their biology and their potential applications in treating neurological disorders.
