Computers & Math

The Eye-Brain Connection: How Our Thoughts Shape What We See

A new study by biomedical engineers and neuroscientists shows that the brain’s visual regions play an active role in making sense of information.

The way we see the world is not just about what our eyes take in; it’s also about how our brain processes that information. A new study has shed light on this complex relationship, revealing that the visual regions of the brain play an active role in making sense of what we see. This means that our thoughts and experiences can influence what we perceive, even before our prefrontal cortex (the part of the brain responsible for reasoning and decision-making) gets a chance to weigh in.

Imagine you’re at the grocery store, looking at a bag of carrots. Depending on your plans for the day – perhaps making a hearty winter stew or preparing for a Super Bowl party – your mind might immediately think of potatoes, parsnips, or buffalo wings. This is not just about categorizing an object; it’s about how our brain uses context and past experiences to shape what we see.

The study, led by Nuttida Rungratsameetaweemana, a biomedical engineer and neuroscientist at Columbia Engineering, has provided some of the clearest evidence yet that early sensory systems play a role in decision-making. This means that even before our brain’s prefrontal cortex kicks in, our visual system is already processing information and making connections based on what we’re thinking about.

The implications of this study are significant. It suggests that designing artificial intelligence (AI) systems that can adapt to new or unexpected situations might be more achievable than previously thought. By understanding how the brain’s visual regions interact with other parts of the brain, researchers may be able to develop AI systems that can learn and respond in a more human-like way.

In summary, this study highlights the complex relationship between our eyes, brain, and thoughts. It shows that what we see is not just about the physical world; it’s also about how our brain processes that information based on past experiences and current context.

Agriculture and Food

The Edible Aquatic Robot: Harnessing Nature’s Power to Monitor Waterways

An edible robot leverages a combination of biodegradable fuel and surface tension to zip around the water’s surface, creating a safe — and nutritious — alternative to environmental monitoring devices made from artificial polymers and electronics.

EPFL scientists have developed the Edible Aquatic Robot, a biodegradable, non-toxic device for monitoring waterways. The invention leverages the Marangoni effect, the same surface-tension phenomenon that lets certain aquatic insects propel themselves across the water, to create a safe and efficient alternative to traditional environmental monitoring devices made from artificial polymers and electronics.

The robot’s design takes advantage of a chemical reaction in a tiny detachable chamber that produces carbon dioxide gas. The gas enters a fuel channel and forces the fuel out; the expelled fuel lowers the surface tension of the water behind the robot, and the resulting tension imbalance propels it forward. The device can move freely around the surface of the water for several minutes, making it well suited to monitoring waterways.
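To get a feel for the forces at play, here is a rough back-of-the-envelope estimate of Marangoni propulsion in Python. Every number below is an illustrative assumption (typical textbook values), not a figure from the EPFL paper.

```python
# Back-of-the-envelope estimate of Marangoni propulsion (illustrative only).
# All values are assumptions, not measurements from the EPFL robot.

GAMMA_WATER = 0.072   # N/m, surface tension of clean water near room temperature
GAMMA_FUEL = 0.040    # N/m, assumed lowered tension where the expelled fuel spreads
CONTACT_WIDTH = 0.02  # m, assumed width of the robot's trailing edge
MASS = 0.005          # kg, assumed robot mass (a few grams)

# Net thrust scales with the surface-tension difference across the body
# times the length of contact line over which that difference acts.
delta_gamma = GAMMA_WATER - GAMMA_FUEL   # N/m
thrust = delta_gamma * CONTACT_WIDTH     # N
acceleration = thrust / MASS             # m/s^2

print(f"Tension difference:   {delta_gamma * 1000:.0f} mN/m")
print(f"Estimated thrust:     {thrust * 1000:.2f} mN")
print(f"Initial acceleration: {acceleration:.2f} m/s^2")
```

Even a modest tension difference yields millinewton-scale thrust, which is plenty to scoot a few-gram robot across still water.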

What makes this invention even more remarkable is its edible nature. The robot’s outer structure is made from fish food with a 30% higher protein content and 8% lower fat content than commercial pellets. This not only provides strength and rigidity to the device but also acts as nourishment for aquatic wildlife at the end of its lifetime.

The EPFL team envisions deploying these robots in large numbers, each equipped with biodegradable sensors to collect environmental data such as water pH, temperature, pollutants, and microorganisms. The researchers have even fabricated ‘left turning’ and ‘right turning’ variants by altering the asymmetry of the fuel channel, allowing them to disperse the robots across the water’s surface.

This work is part of a larger innovation in edible robotics, with the Laboratory of Intelligent Systems publishing several papers on edible devices, including edible soft actuators as food manipulators and pet food, fluidic circuits for edible computation, and edible conductive ink for monitoring crop growth. The potential applications of these devices are vast, from stimulating cognitive development in aquatic pets to delivering nutrients or medication to fish.

As EPFL PhD student Shuhang Zhang notes, “The replacement of electronic waste with biodegradable materials is the subject of intensive study, but edible materials with targeted nutritional profiles and function have barely been considered, and open up a world of opportunities for human and animal health.” This groundbreaking innovation in edible aquatic robots has the potential to revolutionize the way we monitor waterways and promote sustainable development.

Artificial Intelligence

Paws-itive Progress: Amphibious Robotic Dog Breaks Ground in Mobility and Efficiency

A team of researchers has unveiled a cutting-edge Amphibious Robotic Dog capable of roving across both land and water with remarkable efficiency.

The field of robotics has taken a significant leap forward with the development of an amphibious robotic dog, capable of efficiently navigating both land and water. This innovative creation was inspired by the remarkable mobility of mammals in aquatic environments.

Unlike existing amphibious robots that often draw inspiration from reptiles or insects, this robotic canine is based on the swimming style of dogs. This design choice has allowed it to overcome several limitations faced by insect-inspired designs, such as reduced agility and load capacity.

The key to the amphibious robot’s water mobility lies in its unique paddling mechanism, modeled after the natural swimming motion of dogs. By carefully balancing weight and buoyancy, the engineers have ensured stable and effective aquatic performance.

To test its capabilities, the researchers developed and experimented with three distinct paddling gaits:

* A doggy paddle method that prioritizes speed
* A trot-like style that focuses on stability
* A third gait that combines elements of both

In testing, the doggy paddle method proved fastest, reaching a maximum water speed of 0.576 kilometers per hour (kph). On land, the amphibious robotic dog reaches 1.26 kph, giving it versatile mobility across both environments.
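As a rough illustration of how such gaits can be encoded, the sketch below drives each leg with the same sinusoidal paddle cycle and distinguishes gaits purely by the phase offsets between legs. The leg labels, frequency, amplitude, and phase values are assumptions for illustration, not the authors’ actual controller.

```python
import math

# Toy paddling-gait planner (illustrative; not the study's controller).
# Each leg follows a sinusoidal stroke; gaits differ only in per-leg phase.
# Legs: FL/FR = front left/right, HL/HR = hind left/right.

GAITS = {
    "doggy_paddle": {"FL": 0.00, "FR": 0.25, "HL": 0.50, "HR": 0.75},  # legs cycle in sequence
    "trot_like":    {"FL": 0.00, "FR": 0.50, "HL": 0.50, "HR": 0.00},  # diagonal pairs move together
}

def stroke_angle(t, freq_hz, phase_frac, amplitude_deg=40.0):
    """Leg stroke angle in degrees at time t for a sinusoidal paddle cycle."""
    return amplitude_deg * math.sin(2 * math.pi * (freq_hz * t + phase_frac))

# Sample one 1 Hz cycle of the doggy paddle at quarter-period intervals:
for t in (0.0, 0.25, 0.5, 0.75):
    angles = {leg: round(stroke_angle(t, 1.0, ph), 1)
              for leg, ph in GAITS["doggy_paddle"].items()}
    print(f"t = {t:.2f} s -> {angles}")
```

A combined gait, like the third one the researchers tested, could then be built by interpolating between the two phase tables.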

“This innovation marks a big step forward in designing nature-inspired robots,” says Yunquan Li, corresponding author of the study. “Our robot dog’s ability to efficiently move through water and on land is due to its bioinspired trajectory planning, which mimics the natural paddling gait of real dogs.”

The implications of this technology are vast and exciting, with potential applications in environmental research, military vehicles, rescue missions, and more. As we continue to push the boundaries of what’s possible with robotics, it’s clear that the future holds much promise for innovation and discovery.

Computers & Math

A Breakthrough in AR Glasses: One Glass, Full Color

Augmented-reality (AR) technology is rapidly finding its way into everyday life, from education and healthcare to gaming and entertainment. However, the core AR device remains bulky and heavy, making prolonged wear uncomfortable. A breakthrough now promises to change that. A research team has slashed both thickness and weight using a single-layer waveguide.

POSTECH researchers have made a breakthrough in augmented-reality (AR) technology that could change how AR fits into everyday life. The core AR device, typically bulky and heavy, can now be designed thin and light enough for comfortable, prolonged wear.

One of the main hurdles to commercializing AR glasses was the waveguide, a crucial component that guides virtual images directly to the user’s eye. Conventional designs required separate layers for red, green, and blue light, leading to increased weight and thickness. POSTECH researchers have eliminated this need by developing an achromatic metagrating that handles all colors in a single glass layer.
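To see why stacked layers were needed, note that a grating’s first-order diffraction angle depends on wavelength, so one fixed-period grating steers red, green, and blue into different directions (and may fail to couple some colors at all). The sketch below works through the grating equation with assumed values for the glass index and grating period; these are not the actual design parameters.

```python
import math

# Why conventional AR waveguides stack one layer per color (illustrative).
# First-order grating equation inside the glass:
#     n_glass * sin(theta_out) = sin(theta_in) + wavelength / period
# The out-coupling angle depends on wavelength, so a single fixed-period
# grating cannot treat red, green, and blue identically. Values are assumed.

N_GLASS = 1.5       # assumed refractive index of the waveguide glass
PERIOD_NM = 380.0   # assumed grating period in nanometers
THETA_IN = 0.0      # normal incidence, radians

# Light stays trapped in the slab only beyond the total-internal-reflection angle.
tir_deg = math.degrees(math.asin(1.0 / N_GLASS))

for name, wl_nm in (("blue", 450.0), ("green", 532.0), ("red", 635.0)):
    s = (math.sin(THETA_IN) + wl_nm / PERIOD_NM) / N_GLASS
    if abs(s) <= 1.0:
        theta = math.degrees(math.asin(s))
        fate = "guided" if theta > tir_deg else "leaks out"
        print(f"{name}: diffracted to {theta:.1f} deg ({fate})")
    else:
        print(f"{name}: not diffracted at all (evanescent order)")
```

With this toy period, blue and green end up guided at very different angles and red is not coupled at all, which is exactly the mismatch an achromatic metagrating is designed to eliminate.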

The key to this innovation lies in an array of nanoscale silicon-nitride pillars whose geometry was finely tuned using a stochastic topology-optimization algorithm to steer light with maximum efficiency. In experiments, the researchers produced vivid full-color images using a single-layer waveguide just 500 µm (half a millimeter) thick.
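The paper’s optimization machinery is far more sophisticated, but the basic flavor of a stochastic topology search can be shown in a few lines: represent the grating cell as a binary pattern, propose random flips, and keep a flip only if a figure of merit improves. The pattern size, figure of merit, and acceptance rule below are toy assumptions; a real design loop would score candidates with a full electromagnetic solver averaged over red, green, and blue.

```python
import random

# Toy stochastic topology optimization (illustrative only; not the paper's
# algorithm or objective). The grating cell is a 1-D binary pillar pattern.

random.seed(0)
N = 64  # number of cells across one grating period

def figure_of_merit(pattern):
    """Stand-in score: favors ~50% fill with few tiny features. A real loop
    would compute diffraction efficiency with an electromagnetic solver."""
    fill = sum(pattern) / N
    transitions = sum(pattern[i] != pattern[i - 1] for i in range(N))
    return 1.0 - abs(fill - 0.5) - 0.01 * transitions

pattern = [random.randint(0, 1) for _ in range(N)]
best = figure_of_merit(pattern)

for _ in range(2000):
    i = random.randrange(N)
    pattern[i] ^= 1                    # propose flipping one pillar cell
    score = figure_of_merit(pattern)
    if score >= best:
        best = score                   # keep the improvement
    else:
        pattern[i] ^= 1                # revert the flip

print(f"final surrogate score: {best:.3f}")
```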

The new design erases color blur while outperforming multilayer optics in brightness and color uniformity. This breakthrough has significant implications for the commercialization of AR glasses, which could become as thin and light as ordinary eyewear. The era of truly everyday AR is now within reach.

“This work marks a key milestone for next-generation AR displays,” said Prof. Junsuk Rho. “Coupled with scalable, large-area fabrication, it brings commercialization within reach.”

The study was carried out by POSTECH’s Departments of Mechanical, Chemical and Electrical Engineering and the Graduate School of Interdisciplinary Bioscience & Bioengineering, in collaboration with the Visual Team at Samsung Research. It was published online on April 30, 2025, in Nature Nanotechnology.

This research was supported by POSCO Holdings N.EX.T Impact, Samsung Research, the Ministry of Trade, Industry and Energy’s Alchemist Project, the Ministry of Science and ICT’s Global Convergence Research Support Program, and the Mid-Career Researcher Program.
