
Artificial Intelligence

Empowering Robots with Human-Like Perception to Navigate Complex Terrain

Researchers have developed a novel framework named WildFusion that fuses vision, vibration, and touch to enable robots to ‘sense’ and navigate complex outdoor environments much like humans do.


The human body is incredibly adept at navigating its surroundings. Our senses work together in harmony to help us avoid obstacles, predict potential dangers, and make our way through even the most challenging environments. We can feel the roughness of tree bark beneath our fingers, smell the sweet scent of blooming flowers, hear the gentle rustle of leaves, and see the intricate details of the world around us.

Robots, on the other hand, have long relied solely on visual information to move through their environment. However, outside of Hollywood, multisensory navigation has remained a significant challenge for machines. The forest, with its dense undergrowth, fallen logs, and ever-changing terrain, is a maze of uncertainty for traditional robots.

Researchers at Duke University have developed a novel framework called WildFusion that fuses vision, vibration, and touch to enable robots to “sense” complex outdoor environments much like humans do. This groundbreaking work has been accepted to the IEEE International Conference on Robotics and Automation (ICRA 2025), which will be held May 19-23, 2025, in Atlanta, Georgia.

WildFusion is built on a quadruped robot that integrates multiple sensing modalities, including an RGB camera, lidar, inertial sensors, contact microphones, and tactile sensors. The camera and lidar capture the environment’s geometry, color, distance, and other visual details. However, what makes WildFusion special is its use of acoustic vibrations and touch.

As the robot walks, contact microphones record the unique vibrations generated by each step, capturing subtle differences such as the crunch of dry leaves versus the soft squish of mud. Meanwhile, tactile sensors measure how much force is applied to each foot, helping the robot sense stability or slipperiness in real time. These added senses are complemented by an inertial sensor that collects acceleration data to assess how much the robot wobbles, pitches, or rolls as it traverses uneven ground.
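
To make the added senses concrete, here is a minimal Python sketch of per-footstep feature extraction; the signal windows, feature choices, and numbers are illustrative assumptions on our part rather than the team’s actual pipeline:

```python
import numpy as np

def footstep_features(mic_window: np.ndarray,
                      foot_forces: np.ndarray,
                      imu_accel: np.ndarray) -> np.ndarray:
    """Summarize one footstep from raw sensor windows (illustrative only)."""
    vib_energy = np.sqrt(np.mean(mic_window ** 2))  # RMS vibration: crunchy vs. soft ground
    vib_peak = np.max(np.abs(mic_window))           # impulsiveness of the impact
    force_mean = foot_forces.mean()                 # overall loading on the feet
    force_spread = foot_forces.std()                # uneven loading hints at slip or instability
    wobble = np.linalg.norm(imu_accel.std(axis=0))  # body shake while traversing
    return np.array([vib_energy, vib_peak, force_mean, force_spread, wobble])

# Example: one noisy simulated step
rng = np.random.default_rng(0)
features = footstep_features(rng.normal(0.0, 0.3, 2048),          # contact-microphone samples
                             np.array([41.0, 38.5, 44.2, 39.9]),  # per-foot forces (N)
                             rng.normal(0.0, 0.05, (200, 3)))     # IMU acceleration samples
print(features)
```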

Each type of sensory data is then processed through specialized encoders and fused into a single, rich representation. At the heart of WildFusion is a deep learning model based on the idea of implicit neural representations. Unlike traditional methods that treat the environment as a collection of discrete points, this approach models complex surfaces and features continuously, allowing the robot to make smarter, more intuitive decisions about where to step, even when its vision is blocked or ambiguous.
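
As a rough illustration of that idea, the sketch below (ours, not the published model; the layer sizes, feature dimensions, and fusion-by-addition are all assumptions) fuses per-modality encodings into a single latent code and queries an implicit decoder at arbitrary 3-D points, so terrain is represented as a continuous function rather than a point cloud:

```python
import torch
import torch.nn as nn

class WildFusionSketch(nn.Module):
    """Toy multimodal implicit-representation model, not the authors' code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.vision_enc = nn.Linear(512, latent_dim)   # e.g. pooled camera/lidar features
        self.audio_enc = nn.Linear(5, latent_dim)      # footstep-vibration features
        self.tactile_enc = nn.Linear(4, latent_dim)    # per-foot force readings
        # Implicit decoder: queried at any 3-D point, conditioned on the fused latent.
        self.decoder = nn.Sequential(
            nn.Linear(3 + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),                         # occupancy / traversability logit
        )

    def forward(self, xyz, vision, audio, tactile):
        latent = self.vision_enc(vision) + self.audio_enc(audio) + self.tactile_enc(tactile)
        latent = latent.expand(xyz.shape[0], -1)       # same scene code for every query point
        return self.decoder(torch.cat([xyz, latent], dim=-1))

model = WildFusionSketch()
logits = model(torch.rand(1000, 3),                    # 1,000 query points in space
               torch.randn(1, 512), torch.randn(1, 5), torch.randn(1, 4))
print(logits.shape)  # torch.Size([1000, 1])
```

Because the decoder can be evaluated anywhere, the robot can estimate the ground even at locations its camera never directly saw, which is the practical payoff of the continuous representation.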

WildFusion was tested at Eno River State Park in North Carolina, near Duke’s campus, where it successfully helped a robot navigate dense forests, grasslands, and gravel paths.

One of the key challenges for robotics today is developing systems that not only perform well in the lab but also function reliably in real-world settings. WildFusion opens up a wide range of potential applications beyond forest trails, including disaster response across unpredictable terrain, inspection of remote infrastructure, and autonomous exploration.

WildFusion’s multimodal approach lets the robot “fill in the blanks” when sensor data is sparse or noisy, much as humans do. The team plans to expand the system by incorporating additional sensors, such as thermal or humidity detectors, to further enhance a robot’s ability to understand and adapt to complex environments.


Artificial Intelligence

The Real-Life Kryptonite Found in Serbia – A Game-Changer for Earth’s Energy Transition

Deep in Serbia’s Jadar Valley, scientists discovered a mineral with an uncanny resemblance to Superman’s kryptonite in both composition and name. Dubbed jadarite, this dull white crystal lacks the glowing green menace of its comic-book counterpart but packs a punch in the real world. Rich in lithium and boron, jadarite could help supercharge the global transition to green energy.


The discovery of jadarite, a rare and fascinating mineral, has been hailed as “Earth’s kryptonite twin” due to its similarities to the fictional substance from the comic books. Found in the Jadar Valley of Serbia by exploration geologists from Rio Tinto in 2004, this sodium lithium boron silicate hydroxide mineral has immense potential for Earth’s energy transition away from fossil fuels.

Jadarite did not match any known mineral at the time of its discovery; it was identified after analysis by the Natural History Museum in London and the National Research Council of Canada, and officially recognized as a new mineral in 2006. Its chemical formula, LiNaSiB₃O₇(OH), closely echoes the fictional description of kryptonite, but jadarite is a far less supernatural dull white mineral that fluoresces pinkish-orange under UV light.
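
As a quick back-of-the-envelope check (ours, not from the article), that formula pins down how much lithium the mineral can carry by mass:

```python
# Lithium content by mass of jadarite, LiNaSiB3O7(OH), from standard atomic weights.
atomic_weight = {"Li": 6.94, "Na": 22.99, "Si": 28.09,
                 "B": 10.81, "O": 16.00, "H": 1.008}
composition = {"Li": 1, "Na": 1, "Si": 1, "B": 3, "O": 8, "H": 1}  # O7 plus the OH group

molar_mass = sum(atomic_weight[el] * n for el, n in composition.items())
li_fraction = atomic_weight["Li"] / molar_mass
print(f"molar mass ≈ {molar_mass:.1f} g/mol, Li ≈ {100 * li_fraction:.1f} wt%")
# -> molar mass ≈ 219.5 g/mol, Li ≈ 3.2 wt%
```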

According to Michael Page, a scientist with Australia’s Nuclear Science and Technology Organisation (ANSTO), “the real jadarite has great potential as an important source of lithium and boron.” In fact, the Jadar deposit where it was first discovered is considered one of the largest lithium deposits in the world, making it a potential game-changer for the global green energy transition.

ANSTO’s work focuses on how critical minerals like jadarite can be used to support Australian industry commercially. The organization has produced battery-grade lithium chemicals from a variety of mineral deposits, including spodumene, lepidolite, and even jadarite, ensuring that Australian miners receive the support they need to meet the challenges of the energy transition.

As the world continues to transition towards renewable energy sources, jadarite’s potential as a key component in this process cannot be overstated. Its discovery is a testament to human ingenuity and our ability to find innovative solutions to complex problems.


Artificial Intelligence

Revolutionizing Quantum Computing with an Ultra-Thin Chip

Researchers at Harvard have created a groundbreaking metasurface that can replace bulky and complex optical components used in quantum computing with a single, ultra-thin, nanostructured layer. This innovation could make quantum networks far more scalable, stable, and compact. By harnessing the power of graph theory, the team simplified the design of these quantum metasurfaces, enabling them to generate entangled photons and perform sophisticated quantum operations — all on a chip thinner than a human hair. It’s a radical leap forward for room-temperature quantum technology and photonics.


In the quest for practical quantum computers and networks, photons have emerged as promising carriers of information at room temperature. However, controlling and coherently manipulating these particles within optical devices has proven notoriously difficult due to their inherently noisy nature. To overcome this hurdle, researchers from Harvard’s John A. Paulson School of Engineering and Applied Sciences have developed an innovative solution – a metasurface-based quantum photonics processor.

This groundbreaking device comes from Federico Capasso’s research group, where graduate student Kerolos M.A. Yousef led the work. By harnessing the power of specially designed metasurfaces, flat devices etched with nanoscale light-manipulating patterns, the team has created an ultra-thin upgrade for quantum-optical chips and setups.

One of the primary advantages of this design is its ability to miniaturize an entire optical setup into a single metasurface. This results in a robust and scalable system that offers numerous benefits, including cost-effectiveness, simplicity of fabrication, and low optical loss. The work has significant implications for quantum sensing, enabling “lab-on-a-chip” capabilities for fundamental science.

To tackle the complex mathematical challenges associated with this design, the researchers drew upon graph theory – a branch of mathematics that uses points and lines to represent connections and relationships. This allowed them to visually determine how photons interfere with each other and predict their effects in experiments.
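
One common way to picture this, and it is an assumption on our part that the team uses this exact formalism, is the graph representation of photonic experiments: detectors become vertices, photon-pair correlations become edges, and an N-fold coincidence event corresponds to a perfect matching of the graph. The toy sketch below enumerates the matchings of a four-detector graph:

```python
import itertools

def perfect_matchings(vertices, edges):
    """Enumerate sets of edges that cover every vertex exactly once."""
    k = len(vertices) // 2
    found = []
    for combo in itertools.combinations(edges, k):
        covered = [v for e in combo for v in e]
        if len(set(covered)) == len(vertices):
            found.append(combo)
    return found

# Four detectors; each edge is a possible photon-pair correlation between two of them.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
for m in perfect_matchings(vertices, edges):
    print(m)
# -> ((0, 1), (2, 3)) and ((0, 2), (1, 3)): the two ways all four detectors can fire at once
```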

The resulting paper was a collaboration with Marko Loncar’s lab, which provided expertise and equipment for the project. Neal Sinclair, a research scientist on the team, expressed excitement about the approach, noting that it could help optical quantum computers and networks scale efficiently – scalability being their biggest challenge compared with other platforms like superconductors or atoms.

This groundbreaking research received support from federal sources, including the Air Force Office of Scientific Research (AFOSR), under award No. FA9550-21-1-0312. The work was performed at the Harvard University Center for Nanoscale Systems.


Artificial Intelligence

Google’s Deepfake Hunter: Exposing Manipulated Videos with a Universal Detector

AI-generated videos are becoming dangerously convincing and UC Riverside researchers have teamed up with Google to fight back. Their new system, UNITE, can detect deepfakes even when faces aren’t visible, going beyond traditional methods by scanning backgrounds, motion, and subtle cues. As fake content becomes easier to generate and harder to detect, this universal tool might become essential for newsrooms and social media platforms trying to safeguard the truth.


In an era where manipulated videos can spread disinformation, bully people, and incite harm, researchers at the University of California, Riverside (UCR), have created a powerful new system to expose these fakes. Amit Roy-Chowdhury, a professor of electrical and computer engineering, and doctoral candidate Rohit Kundu, teamed up with Google scientists to develop an artificial intelligence model that detects video tampering – even when manipulations go far beyond face swaps and altered speech.

Their new system, called the Universal Network for Identifying Tampered and synthEtic videos (UNITE), detects forgeries by examining not just faces but full video frames, including backgrounds and motion patterns. This analysis makes it one of the first tools capable of identifying synthetic or doctored videos that do not rely on facial content.

“Deepfakes have evolved,” Kundu said. “They’re not just about face swaps anymore. People are now creating entirely fake videos – from faces to backgrounds – using powerful generative models. Our system is built to catch all of that.”

UNITE’s development comes as text-to-video and image-to-video generation have become widely available online. These AI platforms enable virtually anyone to fabricate highly convincing videos, posing serious risks to individuals, institutions, and democracy itself.

“It’s scary how accessible these tools have become,” Kundu said. “Anyone with moderate skills can bypass safety filters and generate realistic videos of public figures saying things they never said.”

Kundu explained that earlier deepfake detectors focused almost entirely on face cues. If there’s no face in the frame, many detectors simply don’t work. But disinformation can come in many forms. Altering a scene’s background can distort the truth just as easily.

To address this, UNITE uses a transformer-based deep learning model to analyze video clips. It detects subtle spatial and temporal inconsistencies – cues often missed by previous systems. The model draws on a foundational AI framework known as SigLIP, which extracts features not bound to a specific person or object. A novel training method, dubbed “attention-diversity loss,” prompts the system to monitor multiple visual regions in each frame, preventing it from focusing solely on faces.
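
The paper defines the exact formulation; purely as a hedged sketch of the concept, an attention-diversity penalty could look something like the following, discouraging attention heads from all piling onto the same patches:

```python
import torch

def attention_diversity_loss(attn: torch.Tensor) -> torch.Tensor:
    """attn: (heads, patches) attention distributions, each row summing to 1.

    Illustrative stand-in for UNITE's loss: penalize pairwise overlap between
    heads so that some watch faces while others watch backgrounds and motion.
    """
    sim = attn @ attn.T                           # pairwise overlap between heads
    off_diag = sim - torch.diag(torch.diag(sim))  # ignore each head's self-overlap
    return off_diag.mean()                        # low when heads cover different regions

heads, patches = 8, 196
attn = torch.softmax(torch.randn(heads, patches), dim=-1)
print(attention_diversity_loss(attn))             # added to the main detection loss during training
```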

The result is a universal detector capable of flagging a range of forgeries – from simple facial swaps to complex, fully synthetic videos generated without any real footage. “It’s one model that handles all these scenarios,” Kundu said. “That’s what makes it universal.”

The researchers presented their findings at the top-tier 2025 Conference on Computer Vision and Pattern Recognition (CVPR) in Nashville, Tenn. Their paper, led by Kundu, outlines UNITE’s architecture and training methodology.

While still in development, UNITE could soon play a vital role in defending against video disinformation. Potential users include social media platforms, fact-checkers, and newsrooms working to prevent manipulated videos from going viral.

“People deserve to know whether what they’re seeing is real,” Kundu said. “And as AI gets better at faking reality, we have to get better at revealing the truth.”

