
Communications

Hexagons for Data Protection: A New Method for Location Proofing without Personal Data Disclosure

Location data is considered particularly sensitive, and its misuse can have serious consequences. Researchers have now developed a method that allows individuals to cryptographically prove their location without revealing it. The method rests on zero-knowledge proofs computed over standardized floating-point arithmetic.


Hexagons for data protection is a novel method that protects individuals’ privacy while still providing verifiable location data. This innovative approach uses a hierarchical hexagonal grid system to divide the Earth’s surface into cells that can be represented at various resolutions, from broad regional levels down to individual street segments. The key feature of this method is its ability to combine precision and privacy in a practically usable way.
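To make the hierarchy concrete, here is a minimal sketch using Uber's open-source H3 grid, a well-known hierarchical hexagonal index that matches the description above. Whether the researchers used H3 specifically is an assumption, as is the h3-py v4 API used below.

```python
# Illustrative sketch of a hierarchical hexagonal grid using Uber's H3
# (h3-py v4 API assumed; the paper's exact grid is an assumption here).
# A precise location indexes to a fine cell; coarser parents reveal less.
import h3

lat, lng = 48.1374, 11.5755  # example coordinates (a point in Munich)

fine = h3.latlng_to_cell(lat, lng, 12)   # ~street-segment scale
coarse = h3.cell_to_parent(fine, 5)      # ~regional scale

print(fine)    # precise cell id -- would stay private
print(coarse)  # coarse parent cell -- safe to disclose
```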

The researchers behind this work used zero-knowledge proofs, a cryptographic technique that verifies the truth of a statement without revealing the underlying data. They combined this with standardized floating-point arithmetic, which pins down the result of every operation and so avoids unintended deviations during the complex geometric computations involved. A proof can be computed in less than a second, making the scheme efficient enough for practical use.
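For a sense of what those floating-point operations look like, here is the kind of geometric computation a location statement might involve, a great-circle distance in ordinary IEEE 754 doubles. This is purely illustrative; the paper's contribution is encoding such operations inside a zero-knowledge circuit, which this snippet does not do.

```python
# The kind of geometric computation a location proof must evaluate:
# great-circle (haversine) distance, here in plain IEEE 754 doubles.
import math

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

print(haversine_km(48.1374, 11.5755, 48.2082, 16.3738))  # Munich -> Vienna
```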

One example of an application is Peer-to-Peer Proximity Testing, where two people can determine whether they are in close physical proximity without revealing their exact position. In a prototype, a user can prove in just 0.26 seconds that they are near a specific region, with the desired level of precision adjustable to demonstrate being in a particular neighborhood or park.
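Setting the cryptography aside, the underlying grid test for proximity can be sketched as follows; in the real protocol each party would prove the outcome in zero knowledge rather than reveal a cell. The resolution and neighbor-radius parameters are illustrative assumptions (h3-py v4 API assumed).

```python
# Sketch of grid-based proximity testing, without the zero-knowledge layer:
# two points are "nearby" if they fall in the same hexagon or adjacent ones.
import h3

def nearby(lat1, lng1, lat2, lng2, res=9, k=1):
    """res controls precision (higher = finer cells); k is the neighbor radius."""
    a = h3.latlng_to_cell(lat1, lng1, res)
    b = h3.latlng_to_cell(lat2, lng2, res)
    return b in h3.grid_disk(a, k)

print(nearby(48.1374, 11.5755, 48.1380, 11.5760))  # close points -> True
```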

This research contributes not only to location proofing but also to the broader field of cryptography. The developed floating-point zero-knowledge circuits are reusable and could be applied in other areas, such as verifying physical measurement data or secure machine learning systems. This opens up new possibilities for trusted systems, including digital healthcare, mobility applications, or identity protection.

Overall, the hexagons-for-data-protection approach offers a promising way to preserve individuals’ privacy while still providing verifiable location data, making it a useful tool for a wide range of industries and applications.

Artificial Intelligence

Self-Powered Artificial Synapse Revolutionizes Machine Vision

Despite advances in machine vision, processing visual data requires substantial computing resources and energy, limiting deployment in edge devices. Now, researchers from Japan have developed a self-powered artificial synapse that distinguishes colors with high resolution across the visible spectrum, approaching the capabilities of the human eye. The device, which integrates dye-sensitized solar cells, generates its own electricity and can perform complex logic operations without additional circuitry, paving the way for capable computer vision systems integrated into everyday devices.


The human visual system has long been a source of inspiration for computer vision researchers, who aim to develop machines that can see and understand the world around them with the same level of efficiency and accuracy as humans. While machine vision systems have made significant progress in recent years, they still face major challenges when it comes to processing vast amounts of visual data while consuming minimal power.

One approach to overcoming these hurdles is through neuromorphic computing, which mimics the structure and function of biological neural systems. However, two major challenges persist: achieving color recognition comparable to human vision, and eliminating the need for external power sources to minimize energy consumption.

A research team led by Associate Professor Takashi Ikuno of Tokyo University of Science has now addressed both issues. Their self-powered artificial synapse distinguishes colors with remarkable precision, making it particularly suitable for edge-computing applications where energy efficiency is crucial.

The device integrates two different dye-sensitized solar cells that respond differently to different wavelengths of light, generating its own electricity through solar energy conversion. This self-powering capability makes it attractive for industries such as autonomous vehicles, healthcare, and consumer electronics, where visual recognition is essential but power budgets are tight.
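The color-discrimination principle can be illustrated with simple linear algebra: two photoelements with different spectral responses yield two current readings, which is enough to solve for the intensities of two color channels. The responsivity numbers below are invented for illustration, not measured values from the paper.

```python
# Two photoelements with different spectral responses give two equations
# in the unknown light intensities; solving them discriminates color.
import numpy as np

# rows: cell A, cell B; cols: response to "blue" and "red" light (assumed values)
R = np.array([[0.8, 0.2],
              [0.3, 0.7]])

true_intensity = np.array([1.0, 0.5])   # incident blue/red power (assumed)
currents = R @ true_intensity           # what the two cells would measure

recovered = np.linalg.solve(R, currents)
print(recovered)                        # -> [1.0, 0.5]: color recovered
```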

The researchers demonstrated the potential of their device in a physical reservoir computing framework, recognizing human movements recorded in red, green, and blue light with 82% accuracy. This achievement has significant implications for various industries, including autonomous vehicles, which could use such devices to efficiently recognize traffic lights, road signs, and obstacles.
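As an illustration of the reservoir-computing idea, here is a minimal echo-state-network sketch in software: a fixed random recurrent network transforms input sequences, and only a linear readout is trained. In the actual work the device itself serves as a physical reservoir, so every size, dynamic, and dataset below is an assumption for demonstration.

```python
# Minimal echo-state-network sketch of reservoir computing (generic
# illustration; the paper uses the device as a *physical* reservoir).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 3, 100            # e.g. R/G/B channels in, 100 reservoir units
W_in = rng.normal(0, 0.5, (n_res, n_in))
W = rng.normal(0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed random reservoir with an input sequence; return states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained (ridge regression on reservoir states).
X = run_reservoir(rng.random((200, n_in)))       # toy input sequence
Y = rng.integers(0, 2, (200, 1)).astype(float)   # toy labels
ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out
```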

In healthcare, self-powered artificial synapses could power wearable devices that monitor vital signs like blood oxygen levels with minimal battery drain. For consumer electronics, this technology could lead to smartphones and augmented/virtual reality headsets with dramatically improved battery life while maintaining sophisticated visual recognition capabilities.

The realization of low-power machine vision systems with color discrimination capabilities close to those of the human eye is within reach, thanks to this breakthrough research. The potential applications of self-powered artificial synapses are vast, and their impact will be felt across various industries in the years to come.


Artificial Intelligence

World’s First Petahertz-Speed Phototransistor Achieved in Ambient Conditions

Researchers demonstrated a way to manipulate electrons using pulses of light lasting less than a trillionth of a second, recording electrons bypassing a physical barrier almost instantaneously, a feat that redefines the potential limits of computer processing power.


Imagine a world where computers can process information at speeds a million times faster than today’s fastest processors. A team of scientists from the University of Arizona, in collaboration with international researchers, has made this vision a reality by creating the world’s first petahertz-speed phototransistor that operates in ambient conditions.

The breakthrough came from an experiment in which researchers used pulses of light to manipulate electrons in graphene, a material composed of a single layer of carbon atoms. By leveraging a quantum effect known as tunneling, they recorded electrons bypassing a physical barrier almost instantaneously, redefining the potential limits of computer processing power.

“This achievement represents a huge leap forward in the development of ultrafast computer technologies,” said Mohammed Hassan, an associate professor of physics and optical sciences at the University of Arizona. “By leaning on the discovery of quantum computers, we can develop hardware that matches the current revolution in information technology software. Ultrafast computers will greatly assist discoveries in space research, chemistry, health care, and more.”

The team was originally studying the electrical conductivity of modified graphene samples when it unexpectedly observed electrons slipping through the material almost instantly. This near-instant tunneling effect was captured and tracked in real time, paving the way for the development of a petahertz-speed transistor.

Using a commercially available graphene phototransistor that was modified to introduce a special silicon layer, the researchers used a laser that switches off and on at a rate of 638 attoseconds to create what Hassan called “the world’s fastest petahertz quantum transistor.”

A transistor is a device that acts as an electronic switch or amplifier, controlling the flow of electricity between two points, and it is fundamental to modern electronics. For reference, a single attosecond is one quintillionth of a second, so a 638-attosecond switching interval corresponds to a frequency on the order of a petahertz, far beyond today’s gigahertz processors.
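A quick back-of-the-envelope check connects the numbers, under the assumed reading that the 638-attosecond switching interval corresponds to one on/off cycle:

```python
# Back-of-the-envelope check of the "petahertz" claim (assumed reading:
# the 638-attosecond switching interval is one on/off cycle).
switch_interval_s = 638e-18            # 638 attoseconds in seconds
freq_hz = 1 / switch_interval_s        # ~1.57e15 Hz
print(f"{freq_hz / 1e15:.2f} PHz")     # -> 1.57 PHz, i.e. petahertz-scale

# For scale: compare with a 3 GHz CPU clock
print(freq_hz / 3e9)                   # ~5.2e5 times faster than 3 GHz
```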

While some scientific advancements occur under strict conditions, including temperature and pressure, this new transistor performed in ambient conditions – opening the way to commercialization and use in everyday electronics. Hassan is working with Tech Launch Arizona to patent and market innovations related to this invention, aiming to collaborate with industry partners to realize a petahertz-speed transistor on a microchip.

The University of Arizona is already known for developing the world’s fastest electron microscope, and they hope to also be recognized for creating the first petahertz-speed transistor. This achievement has the potential to revolutionize computing as we know it, enabling faster processing speeds, improved efficiency, and breakthroughs in various fields.


Artificial Intelligence

Empowering Robots with Human-Like Perception to Navigate Complex Terrain

Researchers have developed a novel framework named WildFusion that fuses vision, vibration and touch to enable robots to ‘sense’ and navigate complex outdoor environments much like humans do.


The human body is incredibly adept at navigating its surroundings. Our senses work together in harmony to help us avoid obstacles, predict potential dangers, and make our way through even the most challenging environments. We can feel the roughness of tree bark beneath our fingers, smell the sweet scent of blooming flowers, hear the gentle rustle of leaves, and see the intricate details of the world around us.

Robots, on the other hand, have long relied solely on visual information to move through their environment. However, outside of Hollywood, multisensory navigation has remained a significant challenge for machines. The forest, with its dense undergrowth, fallen logs, and ever-changing terrain, is a maze of uncertainty for traditional robots.

Researchers at Duke University have developed a novel framework called WildFusion that fuses vision, vibration, and touch to enable robots to “sense” complex outdoor environments much like humans do. This groundbreaking work has been accepted to the IEEE International Conference on Robotics and Automation (ICRA 2025), which will be held May 19-23, 2025, in Atlanta, Georgia.

WildFusion is built on a quadruped robot that integrates multiple sensing modalities, including an RGB camera, lidar, inertial sensors, contact microphones, and tactile sensors. The camera and lidar capture the environment’s geometry, color, distance, and other visual details. However, what makes WildFusion special is its use of acoustic vibrations and touch.

As the robot walks, contact microphones record the unique vibrations generated by each step, capturing subtle differences such as the crunch of dry leaves versus the soft squish of mud. Meanwhile, the tactile sensors measure how much force is applied to each foot, helping the robot sense stability or slipperiness in real-time. These added senses are complemented by the inertial sensor that collects acceleration data to assess how much the robot is wobbling, pitching, or rolling as it traverses uneven ground.

Each type of sensory data is then processed through specialized encoders and fused into a single, rich representation. At the heart of WildFusion is a deep learning model based on the idea of implicit neural representations. Unlike traditional methods that treat the environment as a collection of discrete points, this approach models complex surfaces and features continuously, allowing the robot to make smarter, more intuitive decisions about where to step, even when its vision is blocked or ambiguous.
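A minimal sketch of that fusion pattern might look like the following: one encoder per modality, outputs concatenated into a single representation, then queried at continuous 3D points in the spirit of implicit neural representations. All layer sizes and feature dimensions are assumptions for illustration, not WildFusion’s actual architecture.

```python
# Sketch of multimodal fusion with a continuous (implicit-style) decoder.
# Sizes and layers are illustrative assumptions, not WildFusion's design.
import torch
import torch.nn as nn

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))

class FusionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_vision = mlp(128, 32)   # e.g. image/lidar features
        self.enc_audio = mlp(64, 32)     # contact-microphone features
        self.enc_touch = mlp(16, 32)     # tactile/inertial features
        # Decoder maps (fused features, continuous xyz query) -> traversability
        self.decoder = mlp(96 + 3, 1)

    def forward(self, vision, audio, touch, xyz):
        fused = torch.cat([self.enc_vision(vision),
                           self.enc_audio(audio),
                           self.enc_touch(touch)], dim=-1)
        return torch.sigmoid(self.decoder(torch.cat([fused, xyz], dim=-1)))

model = FusionModel()
score = model(torch.randn(1, 128), torch.randn(1, 64),
              torch.randn(1, 16), torch.randn(1, 3))
print(score)  # traversability-style score for the queried point
```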

WildFusion was tested at the Eno River State Park in North Carolina near Duke’s campus, successfully helping a robot navigate dense forests, grasslands, and gravel paths. The team plans to expand the system by incorporating additional sensors, such as thermal or humidity detectors, to further enhance a robot’s ability to understand and adapt to complex environments.

One of the key challenges for robotics today is developing systems that not only perform well in the lab but reliably function in real-world settings. WildFusion provides vast potential applications beyond forest trails, including disaster response across unpredictable terrains, inspection of remote infrastructure, and autonomous exploration.

WildFusion’s multimodal approach lets the robot “fill in the blanks” when sensor data is sparse or noisy, much as humans do.


