Artificial Intelligence

Revolutionizing American Sign Language Translation with AI-Powered Ring

A research team has developed an artificial intelligence-powered ring equipped with micro-sonar technology that can track fingerspelling in American Sign Language (ASL) continuously and in real time.


A team of researchers from Cornell University has developed an innovative artificial intelligence (AI)-powered ring called SpellRing. The device is equipped with micro-sonar technology that lets it track fingerspelling in American Sign Language (ASL) continuously and in real time. The implications are vast: the technology could eventually enable seamless recognition of entire signed words and sentences, transforming how ASL is translated.

The primary purpose of SpellRing is to facilitate text entry into computers or smartphones via fingerspelling, which is a crucial aspect of ASL. Proper nouns, names, and technical terms often require this method, as there are no corresponding signs for them in the language. The researchers behind SpellRing aimed to create a device that would be practical and user-friendly for the deaf and hard-of-hearing community.

According to Hyunchul Lim, lead author of the research paper “SpellRing: Recognizing Continuous Fingerspelling in American Sign Language using a Ring,” which will be presented at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI), many existing technologies that recognize fingerspelling in ASL have not been adopted by the target community because of their bulky and impractical designs. The team sought to address this by developing a single ring that could capture all the subtle, complex finger movements involved in ASL.

SpellRing is worn on the thumb and features a microphone and speaker, which send and receive inaudible sound waves to track hand and finger movements. A mini gyroscope tracks the hand’s motion, while a proprietary deep-learning algorithm processes the sonar images to predict the fingerspelled ASL letters in real time, with accuracy comparable to that of many existing systems.
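
The article gives only the outline of the recognition pipeline: sonar echo images plus gyroscope readings go into a deep-learning model that emits letters. A minimal sketch of how such a pipeline might be wired up in PyTorch follows; the class name SonarLetterNet, the layer sizes, and the 64x64 echo-image shape are illustrative assumptions, not details of the team’s actual system.

```python
# Illustrative sketch only: the article describes SpellRing's pipeline as
# "sonar images + gyroscope -> deep model -> letters". Everything concrete
# below (shapes, layers, names) is an assumption for illustration.
import torch
import torch.nn as nn

NUM_LETTERS = 26  # A-Z fingerspelled letters

class SonarLetterNet(nn.Module):
    """Hypothetical model: fuse a sonar 'echo image' with gyroscope features."""
    def __init__(self):
        super().__init__()
        # Encode a single-channel sonar echo frame (assumed 64x64) with a small CNN.
        self.sonar_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Encode 3-axis gyroscope readings from the same time window.
        self.gyro_encoder = nn.Sequential(nn.Linear(3, 16), nn.ReLU())
        # Classify the fused features into one of the 26 letters.
        self.head = nn.Linear(32 + 16, NUM_LETTERS)

    def forward(self, sonar_img, gyro):
        feats = torch.cat(
            [self.sonar_encoder(sonar_img), self.gyro_encoder(gyro)], dim=-1
        )
        return self.head(feats)  # per-letter logits for this time step

# Usage with dummy data: one 64x64 echo frame plus one gyroscope sample.
model = SonarLetterNet()
logits = model(torch.randn(1, 1, 64, 64), torch.randn(1, 3))
predicted_letter = chr(ord("A") + logits.argmax(dim=-1).item())
print(predicted_letter)
```

A real continuous-fingerspelling system would also need a sequence model over successive frames to segment letter boundaries; the single-frame classifier above is only the simplest possible starting point.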

The researchers evaluated SpellRing with 20 experienced and novice ASL signers, who naturally and continuously fingerspelled more than 20,000 words of varying lengths. SpellRing’s accuracy ranged from 82% to 92%, depending on the difficulty of the words.

“We’ve bridged some of the gap between the technical community who develop tools and the target community who use them,” said Cheng Zhang, assistant professor of information science and a paper co-author. “We designed SpellRing for target users who evaluated it.”

In the future, Lim plans to integrate the micro-sonar system into eyeglasses to capture upper body movements and facial expressions, aiming to create a more comprehensive ASL translation system. “Deaf and hard-of-hearing people use more than their hands for ASL. They use facial expressions, upper body movements and head gestures,” said Lim. “ASL is a very complicated, complex visual language.”

This research was funded by the National Science Foundation.

Artificial Intelligence

Riding the Tides: Scientists Develop Simple Algorithm for Underwater Robots to Harness Ocean Currents

Engineers have taught a simple submarine robot to take advantage of turbulent forces to propel itself through water.


Researchers at Caltech have made a breakthrough in developing a simple algorithm for underwater robots to harness the power of ocean currents. Led by John Dabiri, the Centennial Professor of Aeronautics and Mechanical Engineering, the team has successfully created a system that allows small autonomous underwater vehicles (AUVs) to ride on turbulent water currents rather than fighting against them.

The researchers began by studying how jellyfish navigate the ocean, drawing on the animals’ unique ability to traverse and plumb its depths. In earlier work, they outfitted the creatures with electronics and prosthetic “hats” that let them carry small payloads and report findings back to the surface. But jellyfish have no brain, and so cannot make decisions about how to navigate.

To address this limitation, Dabiri’s team used artificial intelligence (AI) to build what amounts to a brain for an AUV, allowing the robots to make decisions underwater and potentially take advantage of environmental flows. But they soon found that AI was not the most efficient solution to their problem.

Enter Peter Gunnarson, a former graduate student who returned to Dabiri’s lab with a simpler approach. He attached an accelerometer to CARL-Bot, an AUV he had developed years earlier as part of his work on AI-based navigation. While measuring how vortex rings (the underwater equivalent of smoke rings) pushed CARL-Bot around, Gunnarson noticed that the robot would occasionally get caught up in a ring and be propelled clear across the tank.

The team then developed simple commands to help CARL-Bot detect the relative location of a vortex ring and position itself to catch a ride. Alternatively, the bot can decide to get out of the way if it does not want to be pushed by a particular ring. The approach borrows from biomimicry: like many ocean animals, the robot exploits environmental flows to conserve energy.
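
The article describes the control scheme only at a high level: sense how hard a vortex ring is pushing the robot, then either position to catch a ride or move clear. A minimal sketch of that decision logic might look like the following; the thresholds, axis conventions, and command names are invented for illustration, not Caltech’s actual CARL-Bot code.

```python
# Illustrative sketch: decide whether to ride or dodge a vortex ring using
# only onboard accelerometer readings. All constants and names are assumptions.
from dataclasses import dataclass

@dataclass
class AccelSample:
    ax: float  # m/s^2 along the intended travel axis (assumed convention)
    ay: float
    az: float

RIDE_THRESHOLD = 0.5   # assumed: push strong enough to be worth riding
DODGE_THRESHOLD = 2.0  # assumed: push so strong the bot should move aside

def decide(sample: AccelSample, want_to_travel: bool) -> str:
    """Return a high-level command based on how hard a ring is pushing."""
    push = abs(sample.ax)  # magnitude of the push along the travel axis
    if push < RIDE_THRESHOLD:
        return "hold"               # no useful ring nearby
    if want_to_travel and push < DODGE_THRESHOLD:
        return "align_and_ride"     # position to be carried by the ring
    return "thrust_clear"           # get out of the ring's path

# Example: a moderate push while the bot wants to cross the tank.
print(decide(AccelSample(ax=1.2, ay=0.0, az=0.1), want_to_travel=True))
# -> align_and_ride
```

The appeal of this scheme over a learned AI policy is exactly what the article emphasizes: a single cheap sensor and a handful of rules, rather than an energy-hungry onboard model.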

Dabiri hopes to marry this work with his hybrid jellyfish project, which aims to demonstrate a similar capability to take advantage of environmental flows and move more efficiently through the water. With this breakthrough, underwater robots can now ride the tides, reducing energy expenditure and increasing their efficiency in navigating the ocean depths.


Artificial Intelligence

Shedding Light on Shadow Branches: Revolutionizing Computing Efficiency in Modern Data Centers

Researchers have developed a new technique called ‘Skia’ to help computer processors better predict future instructions and improve computing performance.


A collaboration between university engineers and industry researchers has produced Skia, a technique that may transform computing efficiency in modern data centers.

In data centers, large computers process massive amounts of data, but often struggle to keep up due to taxing workloads. This results in slower performance, causing search engines to generate answers more slowly or not at all. To address this issue, researchers at Texas A&M University have developed Skia in collaboration with Intel, AheadComputing, and Princeton.

The team includes Dr. Paul V. Gratz, a professor in the Department of Electrical and Computer Engineering, Dr. Daniel A. Jiménez, a professor in the Department of Computer Science and Engineering, and Chrysanthos Pepi, a graduate student in the Department of Electrical and Computer Engineering.

“Processing instructions has become a major bottleneck in modern processor design,” Gratz said. “We developed Skia to better predict what’s coming next and alleviate that bottleneck.” Skia not only helps the processor predict future instructions but also improves instruction throughput, leading to quicker performance and lower power consumption in the data center.

“Think of throughput in terms of being a server in a restaurant,” Gratz said. “You have lots and lots of jobs to do. How many tasks can you complete or how many instructions can you execute per unit time? You want high throughput, especially for computing.”

Making data centers even 10% more efficient has real consequences: a company that previously needed 100 data centers around the country would need only 90. These facilities cost millions of dollars each, and a single data center can consume roughly the entire output of a power plant.

Skia identifies and decodes “shadow branches”: branch instructions that sit in the unused bytes of cache lines the processor has already fetched but never decoded. It stores them in a memory area called the Shadow Branch Buffer, which can be accessed alongside the branch target buffer (BTB). “What makes this technique interesting is that most of the future instructions were already available, and we demonstrate that Skia, with a minimal hardware budget, can make data centers more efficient, with nearly twice the performance improvement of adding the same amount of storage to the existing hardware,” Pepi said.
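
In effect, the design described here adds a second lookup structure next to the BTB. The toy Python model below illustrates that idea under stated assumptions: dictionary-based buffers, LRU eviction, and made-up capacities stand in for what, in Skia, are fixed-size hardware tables, and the real design is considerably more involved.

```python
# Toy model of the idea above: keep a small "shadow branch buffer" (SBB) of
# branches decoded from otherwise-unused fetched bytes, and consult it when
# the branch target buffer (BTB) misses. Sizes and structure are assumptions.
from collections import OrderedDict

class BranchBuffer:
    """A tiny LRU-evicting map from branch PC -> predicted target PC."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # branch PC -> target PC

    def lookup(self, pc: int):
        target = self.entries.get(pc)
        if target is not None:
            self.entries.move_to_end(pc)  # refresh LRU position
        return target

    def insert(self, pc: int, target: int):
        self.entries[pc] = target
        self.entries.move_to_end(pc)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least-recently used

btb = BranchBuffer(capacity=4096)  # assumed sizes, for illustration only
sbb = BranchBuffer(capacity=1024)

def predict_target(pc: int):
    """BTB first; on a miss, fall back to shadow branches the BTB never saw."""
    target = btb.lookup(pc)
    return target if target is not None else sbb.lookup(pc)

# A branch decoded from unused bytes of a fetched cache line goes into the
# SBB, so a later fetch of that branch hits even though the BTB missed it.
sbb.insert(pc=0x4005D0, target=0x400700)
print(hex(predict_target(0x4005D0)))  # -> 0x400700
```

The payoff in the real design is that filling the SBB costs almost nothing, since the shadow branches travel in cache lines the front end has already fetched.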

Their findings, “Skia: Exposing Shadow Branches,” were published at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), one of the leading computer architecture conferences. The team also traveled to the Netherlands to present their work to colleagues from around the globe.

Funding for this research is administered by the Texas A&M Engineering Experiment Station (TEES), the official research agency for Texas A&M Engineering.


Artificial Intelligence

Estimating Biological Age with AI: A New Frontier in Cancer Care and Beyond

Researchers developed FaceAge, an AI tool that calculates a patient’s biological age from a photo of their face. In a new study, the researchers tied FaceAge results to health outcomes in people with cancer: when FaceAge estimated a younger age than a patient’s chronological age, the patient did significantly better after cancer treatment, whereas patients with older FaceAge estimates had worse survival outcomes.


Imagine having a tool that can accurately predict a person’s biological age based on their facial appearance. This might seem like science fiction, but a team of investigators at Mass General Brigham has made this concept a reality using artificial intelligence (AI) technology called FaceAge.

FaceAge uses deep learning algorithms to analyze photographs of individuals and estimate their biological age. The tool was trained on 58,851 photos of presumed healthy individuals from public datasets and tested in a cohort of 6,196 cancer patients from two centers, who had photographs taken at the start of radiotherapy treatment.
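
The study describes FaceAge simply as a deep-learning model that estimates biological age from a face photograph, with the gap between FaceAge and chronological age as the key quantity. A minimal, hypothetical PyTorch sketch follows; the backbone, input resolution, and variable names are assumptions for illustration, not the authors’ architecture.

```python
# Illustrative sketch only: a face-photo -> age regressor, standing in for
# whatever backbone FaceAge actually uses. All specifics are assumptions.
import torch
import torch.nn as nn

class FaceAgeNet(nn.Module):
    """Hypothetical regressor: RGB face crop -> one age estimate in years."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 64)
        )
        self.regressor = nn.Linear(64, 1)  # one scalar: estimated age in years

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(face)).squeeze(-1)

model = FaceAgeNet()
face_batch = torch.randn(2, 3, 224, 224)   # two dummy 224x224 face crops
estimated_ages = model(face_batch)          # FaceAge-style estimates

# The study's key quantity is the gap between FaceAge and chronological age;
# in the cancer cohort this gap averaged roughly +5 years.
chronological = torch.tensor([60.0, 71.0])
face_age_gap = estimated_ages - chronological  # positive gap -> "older" face
print(face_age_gap)
```

In the study itself, this gap (and the raw FaceAge estimate) is what was correlated with survival outcomes across cancer types, rather than anything computed by a toy model like the one above.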

The results were striking: patients with cancer appeared significantly older than those without cancer, with an average FaceAge that was about five years older than their chronological age. Moreover, older FaceAge predictions were associated with worse overall survival outcomes across multiple cancer types.

This technology has significant implications for cancer care. By using FaceAge to inform clinical decision-making, healthcare providers may be able to tailor treatment plans more effectively to individual patients based on their estimated biological age and health status.

The researchers also found that FaceAge outperformed clinicians in predicting short-term life expectancies of patients receiving palliative radiotherapy. This highlights the potential for AI-driven tools like FaceAge to provide valuable insights in high-pressure situations, where accurate predictions are crucial.

While further research is needed before this technology can be used in real-world clinical settings, the possibilities are vast and exciting. By expanding its capabilities to predict diseases, general health status, and lifespan, FaceAge has the potential to revolutionize our approach to healthcare.

The co-senior authors of the study, Hugo Aerts, PhD, and Ray Mak, MD, emphasize that this technology opens up new avenues for biomarker discovery from photographs, with implications far beyond cancer care or predicting age. As we increasingly consider chronic diseases as diseases of aging, accurately predicting an individual’s aging trajectory becomes crucial for developing effective treatment plans.

The door to a whole new realm of biomedical innovation has been opened, and it is essential that this technology be developed within a strong regulatory and ethical framework to ensure its safe use in various applications, ultimately helping to save lives.
