Artificial Intelligence

The RoboBee Lands Safely: A Breakthrough in Microbotics

The Harvard RoboBee is now outfitted with its most reliable landing gear to date, inspired by one of nature’s most graceful landers: the crane fly. The team has given their flying robot a set of long, jointed legs that ease its transition from air to ground, along with an updated controller that helps it decelerate on approach for a gentle plop-down.

The Harvard RoboBee has long been a marvel of microbotics, capable of flight, diving, and hovering like a real insect. But what good is the miracle of flight without a safe way to land? The RoboBee’s creators have now overcome this hurdle with their most reliable landing gear yet, inspired by nature’s own graceful landers: the crane fly.

Led by Robert Wood, the team has given their flying robot a set of long, jointed legs that help ease its transition from air to ground. This breakthrough protects the robot’s delicate piezoelectric actuators, the energy-dense “muscles” that power its flight, which are easily fractured by the external forces of rough landings and collisions.

The RoboBee’s previous iterations had suffered significant ground effect, or instability as a result of air vortices from its flapping wings. This problem was addressed by Christian Chan, a graduate student who led the mechanical redesign of the robot, and Nak-seung Patrick Hyun, a postdoctoral researcher who led controlled landing tests on a leaf and rigid surfaces.

Their paper describes an improved controller that adapts to ground effect as the robot approaches a surface, minimizing velocity before impact and dissipating energy quickly afterward. This innovation builds on nature-inspired mechanical upgrades that enable skillful flight and graceful landing on a variety of terrains.
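
The paper itself gives the full control formulation; as a rough illustration of the idea only, the minimal Python sketch below (not the team’s actual controller, and with made-up numbers) tapers the commanded descent speed as altitude shrinks, so most of the vehicle’s kinetic energy is shed before the legs absorb the final impact.

```python
# Minimal sketch (not the Harvard team's controller): a descent profile that
# tapers the commanded vertical velocity as the vehicle nears the ground,
# so most kinetic energy is shed before touchdown and the legs absorb the rest.

def commanded_descent_velocity(altitude_m: float,
                               v_max: float = 0.5,
                               v_touchdown: float = 0.05,
                               taper_height_m: float = 0.10) -> float:
    """Return a target downward speed (m/s) that shrinks near the surface.

    Above `taper_height_m` the vehicle descends at `v_max`; below it, the
    command blends linearly down to `v_touchdown`, a slow "plop-down" speed.
    All numbers are illustrative placeholders.
    """
    if altitude_m >= taper_height_m:
        return v_max
    blend = max(altitude_m, 0.0) / taper_height_m  # 1.0 at taper height, 0.0 at the ground
    return v_touchdown + blend * (v_max - v_touchdown)


if __name__ == "__main__":
    for h in (0.20, 0.10, 0.05, 0.01, 0.0):
        print(f"altitude {h:4.2f} m -> descend at {commanded_descent_velocity(h):.3f} m/s")
```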

The team chose the crane fly, a relatively slow-moving and harmless insect that emerges from spring to fall, as their inspiration, noting the long, jointed appendages that likely give the insect the ability to dampen its landings. They replicated this design in prototypes, testing different leg architectures before settling on ones similar to the crane fly’s.

The success of the RoboBee is a testament to the interface between biology and robotics. Alyssa Hernandez, a postdoctoral researcher with expertise in insect locomotion, notes that this platform can be used as a tool for biological research, producing studies that test biomechanical hypotheses.

Currently, the RoboBee stays tethered to off-board control systems, but the team will continue to focus on scaling up the vehicle and incorporating onboard electronics to give the robot autonomy in sensing, power, and control. That three-pronged holy grail would allow the RoboBee platform to truly take off, paving the way for future applications in environmental monitoring, disaster surveillance, and even artificial pollination.

Artificial Intelligence

Riding the Tides: Scientists Develop Simple Algorithm for Underwater Robots to Harness Ocean Currents

Engineers have taught a simple submarine robot to take advantage of turbulent forces to propel itself through water.

Researchers at Caltech have made a breakthrough in developing a simple algorithm for underwater robots to harness the power of ocean currents. Led by John Dabiri, the Centennial Professor of Aeronautics and Mechanical Engineering, the team has successfully created a system that allows small autonomous underwater vehicles (AUVs) to ride on turbulent water currents rather than fighting against them.

The researchers began by studying how jellyfish navigate the ocean, efficiently traversing and plumbing its depths. They outfitted these creatures with electronics and prosthetic “hats” that let them carry small payloads and report findings back to the surface. However, they faced an obvious limitation: jellyfish do not have a brain and therefore cannot make decisions about how to navigate.

To address this limitation, Dabiri’s team developed what would be considered the equivalent of a brain for an AUV using artificial intelligence (AI). This allowed the robots to make decisions underwater and potentially take advantage of environmental flows. However, they soon discovered that AI was not the most efficient solution for their problem.

Enter Peter Gunnarson, a former graduate student who returned to Dabiri’s lab with a simpler approach. He attached an accelerometer to CARL-Bot, an AUV he had developed years earlier as part of his work on incorporating artificial intelligence into its navigation. By measuring how CARL-Bot was being pushed around by vortex rings (the underwater equivalent of smoke rings), Gunnarson noticed that the robot would occasionally get caught up in a vortex ring and be propelled clear across the tank.

The team then developed simple commands to help CARL-Bot detect the relative location of a vortex ring and position itself to catch a ride. Alternatively, the bot can decide to get out of the way if it does not want to be pushed by a particular vortex ring. This process involves elements of biomimicry, mimicking nature’s ability to use environmental flows for energy conservation.
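
As a rough illustration of that decision logic (the actual commands and thresholds are not described here, so the function names and values below are hypothetical), a toy version might flag a strong accelerometer reading as a vortex-ring encounter and choose to ride it only when the push points toward the goal.

```python
# Illustrative sketch only: a toy decision rule in the spirit of CARL-Bot's
# accelerometer-based approach. It flags a strong push as a vortex-ring
# encounter and decides to ride it if the push points toward the goal.
import numpy as np

def detect_push(accel_xyz: np.ndarray, threshold: float = 0.3) -> bool:
    """Flag a vortex-ring encounter when acceleration magnitude is unusually high."""
    return float(np.linalg.norm(accel_xyz)) > threshold  # threshold is a made-up value

def ride_or_dodge(accel_xyz: np.ndarray, goal_direction: np.ndarray) -> str:
    """Ride the ring if it pushes us toward the goal, otherwise dodge."""
    if not detect_push(accel_xyz):
        return "hold course"
    push = accel_xyz / np.linalg.norm(accel_xyz)
    goal = goal_direction / np.linalg.norm(goal_direction)
    return "catch the ride" if float(push @ goal) > 0.5 else "dodge"

# Example: a push roughly aligned with the goal direction
print(ride_or_dodge(np.array([0.4, 0.1, 0.0]), np.array([1.0, 0.0, 0.0])))
```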

Dabiri hopes to marry this work with his hybrid jellyfish project, which aims to demonstrate a similar capability to take advantage of environmental flows and move more efficiently through the water. With this breakthrough, underwater robots can now ride the tides, reducing energy expenditure and increasing their efficiency in navigating the ocean depths.

Artificial Intelligence

Shedding Light on Shadow Branches: Revolutionizing Computing Efficiency in Modern Data Centers

Researchers have developed a new technique called ‘Skia’ to help computer processors better predict future instructions and improve computing performance.

The collaboration between trailblazing engineers and industry professionals has led to a groundbreaking technique called Skia, which may transform the future of computing efficiency for modern data centers.

In data centers, large computers process massive amounts of data, but often struggle to keep up due to taxing workloads. This results in slower performance, causing search engines to generate answers more slowly or not at all. To address this issue, researchers at Texas A&M University have developed Skia in collaboration with Intel, AheadComputing, and Princeton.

The team includes Dr. Paul V. Gratz, a professor in the Department of Electrical and Computer Engineering, Dr. Daniel A. Jiménez, a professor in the Department of Computer Science and Engineering, and Chrysanthos Pepi, a graduate student in the Department of Electrical and Computer Engineering.

“Processing instructions has become a major bottleneck in modern processor design,” Gratz said. “We developed Skia to better predict what’s coming next and alleviate that bottleneck.” Skia not only helps better predict future instructions but also improves the throughput of instructions on the system, leading to quicker performance and lower power consumption for the data center.

“Think of throughput in terms of being a server in a restaurant,” Gratz said. “You have lots and lots of jobs to do. How many tasks can you complete, or how many instructions can you execute, per unit time? You want high throughput, especially for computing.”

In practical terms, making data centers up to 10% more efficient means a company that previously needed to build 100 data centers around the country now only needs 90. That is significant: these data centers cost millions of dollars each and consume roughly the equivalent of the entire output of a power plant.

Modern processors rely on a branch target buffer (BTB) to predict which instructions will run next, but many branch instructions sit undecoded in bytes the processor has already fetched and never used. Skia identifies and decodes these shadow branches in the unused bytes, storing them in a memory area called the Shadow Branch Buffer, which can be accessed alongside the BTB. “What makes this technique interesting is that most of the future instructions were already available, and we demonstrate that Skia, with a minimal hardware budget, can make data centers more efficient, delivering nearly twice the performance improvement we observe from adding the same amount of storage to the existing hardware,” Pepi said.
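
To make the idea concrete, here is a deliberately simplified software sketch of the lookup scheme, not the actual hardware design: a front end first consults the BTB and, on a miss, falls back to a secondary buffer holding branches that were decoded from unused bytes of already-fetched cache lines.

```python
# Highly simplified sketch of the idea behind Skia (not the real hardware):
# the front end first consults the branch target buffer (BTB); on a miss it
# falls back to a "shadow branch buffer" filled with branches decoded from
# unused bytes of cache lines the processor had already fetched.

class BranchPredictorFrontEnd:
    def __init__(self):
        self.btb = {}            # pc -> predicted target (demand-decoded branches)
        self.shadow_buffer = {}  # pc -> predicted target (branches found in unused bytes)

    def learn_shadow_branch(self, pc: int, target: int) -> None:
        """Record a branch spotted while scanning already-fetched, unused bytes."""
        self.shadow_buffer[pc] = target

    def predict(self, pc: int):
        """Return (source, target), or (None, fall-through) on a complete miss."""
        if pc in self.btb:
            return "BTB", self.btb[pc]
        if pc in self.shadow_buffer:
            return "shadow", self.shadow_buffer[pc]
        return None, pc + 4      # no prediction: assume fall-through


fe = BranchPredictorFrontEnd()
fe.learn_shadow_branch(0x4000, 0x4800)
print(fe.predict(0x4000))   # ('shadow', 18432): the hit is served from the shadow buffer
```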

Their findings, “Skia: Exposing Shadow Branches,” were published in one of the leading computer architecture conferences, the ACM International Conference on Architectural Support for Programming Languages and Operating Systems. The team also traveled to the Netherlands to present their work to colleagues from around the globe.

Funding for this research is administered by the Texas A&M Engineering Experiment Station (TEES), the official research agency for Texas A&M Engineering.

Artificial Intelligence

Estimating Biological Age with AI: A New Frontier in Cancer Care and Beyond

Researchers developed FaceAge, an AI tool that calculates a patient’s biological age from a photo of their face. In a new study, the researchers tied FaceAge results to health outcomes in people with cancer: when FaceAge estimated a younger age than a cancer patient’s chronological age, the patient did significantly better after cancer treatment, whereas patients with older FaceAge estimates had worse survival outcomes.

Imagine having a tool that can accurately predict a person’s biological age based on their facial appearance. This might seem like science fiction, but a team of investigators at Mass General Brigham has made this concept a reality using artificial intelligence (AI) technology called FaceAge.

FaceAge uses deep learning algorithms to analyze photographs of individuals and estimate their biological age. The tool was trained on 58,851 photos of presumed healthy individuals from public datasets and tested in a cohort of 6,196 cancer patients from two centers, who had photographs taken at the start of radiotherapy treatment.
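
The published FaceAge architecture is not reproduced here, but the general recipe, a convolutional network trained to regress a continuous age from a face photo, can be sketched roughly as follows (illustrative PyTorch code with placeholder data, not the study’s actual model).

```python
# Minimal sketch of the general approach (not the published FaceAge model):
# a small convolutional network trained to regress age in years from a face
# photo; at inference, its output serves as the "biological age" estimate.
import torch
import torch.nn as nn

class AgeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)   # single continuous output: estimated age in years

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1)).squeeze(1)


model = AgeRegressor()
photo_batch = torch.randn(4, 3, 224, 224)           # placeholder for preprocessed face crops
ages = torch.tensor([62.0, 45.0, 71.0, 58.0])       # chronological ages (illustrative)
loss = nn.functional.mse_loss(model(photo_batch), ages)  # train by minimizing age error
loss.backward()
```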

The results were striking: patients with cancer appeared significantly older than those without cancer, with an average FaceAge that was about five years older than their chronological age. Moreover, older FaceAge predictions were associated with worse overall survival outcomes across multiple cancer types.
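
A standard way to test that kind of association, shown here purely as an illustration with hypothetical column names and made-up data rather than the study’s actual analysis, is a Cox proportional-hazards model with the gap between FaceAge and chronological age as a covariate.

```python
# Illustrative only: testing whether a larger FaceAge-minus-chronological-age
# gap is associated with worse survival, using a Cox proportional-hazards model.
# Column names and values are hypothetical, not data from the study.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "faceage_gap":     [ 6.2, -3.1,  4.8, -1.0,  9.5,  0.4, -5.2,  2.7],  # years
    "survival_months": [  14,   60,   40,   20,    9,   36,   72,   30],
    "death_observed":  [   1,    0,    0,    1,    1,    0,    0,    1],  # 1 = death, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="death_observed")
cph.print_summary()   # a hazard ratio above 1 for faceage_gap would indicate worse survival
```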

This technology has significant implications for cancer care. By using FaceAge to inform clinical decision-making, healthcare providers may be able to tailor treatment plans more effectively to individual patients based on their estimated biological age and health status.

The researchers also found that FaceAge outperformed clinicians in predicting short-term life expectancies of patients receiving palliative radiotherapy. This highlights the potential for AI-driven tools like FaceAge to provide valuable insights in high-pressure situations, where accurate predictions are crucial.

While further research is needed before this technology can be used in real-world clinical settings, the possibilities are vast and exciting. By expanding its capabilities to predict diseases, general health status, and lifespan, FaceAge has the potential to revolutionize our approach to healthcare.

The co-senior authors of the study, Hugo Aerts, PhD, and Ray Mak, MD, emphasize that this technology opens up new avenues for biomarker discovery from photographs, with implications far beyond cancer care or predicting age. As we increasingly consider chronic diseases as diseases of aging, accurately predicting an individual’s aging trajectory becomes crucial for developing effective treatment plans.

The door to a whole new realm of biomedical innovation has been opened, and it is essential that this technology be developed within a strong regulatory and ethical framework to ensure its safe use in various applications, ultimately helping to save lives.
