Hopping into Action: MIT Researchers Develop Tiny Robot That Can Leap Over Obstacles with Ease

A hopping, insect-sized robot can jump over gaps or obstacles, traverse rough, slippery, or slanted surfaces, and perform aerial acrobatic maneuvers, while using a fraction of the energy required for flying microbots.

Insect-scale robots can squeeze into places their larger counterparts can’t, like deep into a collapsed building to search for survivors after an earthquake. However, as they move through the rubble, tiny crawling robots might encounter tall obstacles they can’t climb over or slanted surfaces they will slide down.

To get the best of both locomotion methods, MIT researchers developed a hopping robot that can leap over tall obstacles and jump across slanted or uneven surfaces, while using far less energy than an aerial robot.

The hopping robot, which is smaller than a human thumb and weighs less than a paperclip, has a springy leg that propels it off the ground, and four flapping-wing modules that give it lift and control its orientation.

The robot can jump about 20 centimeters into the air, or four times its height, at a lateral speed of about 30 centimeters per second. It has no trouble hopping across ice, wet surfaces, and uneven soil, or even onto a hovering drone. All the while, the hopping robot consumes about 60 percent less energy than its flying cousin.

Due to its light weight and durability, and the energy efficiency of the hopping process, the robot could carry about 10 times more payload than a similar-sized aerial robot, opening the door to many new applications.

The researchers put the hopping robot and its control mechanism to the test on a variety of surfaces, including grass, ice, wet glass, and uneven soil, and it successfully traversed them all. The robot could even hop on a surface that was dynamically tilting.

“The robot doesn’t really care about the angle of the surface it is landing on. As long as it doesn’t slip when it strikes the ground, it will be fine,” said Yi-Hsuan (Nemo) Hsiao, an MIT graduate student and co-lead author of a paper on the hopping robot.

Since the controller can handle multiple terrains, the robot can transition from one surface to another without missing a beat. Hopping across grass, for instance, requires more thrust than hopping across glass, because blades of grass have a damping effect that reduces the robot’s jump height.
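The article does not describe the controller’s internals, but the behavior it reports maps onto a simple feedback loop: measure how high the previous hop went, then scale the flapping thrust to compensate for surface damping. Below is a minimal illustrative sketch of that idea; the names, gains, and proportional-correction scheme are hypothetical, not the MIT team’s design.

```python
# A minimal sketch of terrain-adaptive hop control (not the MIT team's code).
# If the last hop fell short (e.g., grass damping the takeoff), raise the
# flapping-thrust command for the next hop; if it overshot, lower it.

TARGET_APEX_M = 0.20   # desired hop apex, ~20 cm per the article
GAIN = 0.5             # proportional correction gain (hypothetical)

def next_thrust(thrust_cmd: float, measured_apex_m: float) -> float:
    """Scale the thrust command by the apex error of the previous hop."""
    error = TARGET_APEX_M - measured_apex_m   # positive if the hop fell short
    return thrust_cmd * (1.0 + GAIN * error / TARGET_APEX_M)

# Example: a hop on grass reaches only 16 cm, so thrust rises by about 10 percent.
thrust = next_thrust(1.0, measured_apex_m=0.16)
print(f"new thrust command: {thrust:.2f}")   # -> new thrust command: 1.10
```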

The researchers showcased the robot’s agility by demonstrating acrobatic flips. The featherweight robot could even hop onto an airborne drone without damaging either device, which could be useful in collaborative tasks.

In addition, while the team demonstrated a hopping robot that carried twice its weight, the maximum payload may be much higher. Adding more weight doesn’t hurt the robot’s efficiency. Rather, the efficiency of the spring is the most significant factor that limits how much the robot can carry.

Moving forward, the researchers plan to leverage the robot’s ability to carry heavy loads by installing batteries, sensors, and other circuits onboard, in the hopes of enabling it to hop autonomously outside the lab.

This research is funded, in part, by the U.S. National Science Foundation and the MIT MISTI program. Chirarattananon was supported by the Research Grants Council of the Hong Kong Special Administrative Region of China. Hsiao is supported by a MathWorks Fellowship, and Kim is supported by a Zakhartchenko Fellowship.

Robot See, Robot Do: A Revolutionary System That Learns from How-to Videos

Researchers have developed a new robotic framework powered by artificial intelligence — called RHyME (Retrieval for Hybrid Imitation under Mismatched Execution) — that allows robots to learn tasks by watching a single how-to video.

Robots have long been plagued by the need for precise, step-by-step directions, making them finicky learners. Researchers at Cornell University, however, have developed a new framework called RHyME (Retrieval for Hybrid Imitation under Mismatched Execution) that allows a robot to learn tasks by watching a single how-to video.

The AI-powered RHyME system can significantly reduce the time, energy, and money needed to train robots. As the researchers note, one of the frustrating parts of working with robots is collecting vast amounts of data on a robot performing different tasks. The new approach builds on a branch of machine learning called “imitation learning,” in which robots learn by watching demonstrations, much the way humans draw inspiration from watching one another.

Home robot assistants are still a long way off because they lack the wits to navigate the physical world and its countless contingencies. To get robots up to speed, researchers like Kushal Kedia and Sanjiban Choudhury are training them with what amounts to how-to videos – human demonstrations of various tasks in a lab setting. The hope with this approach is that robots will learn a sequence of tasks faster and be able to adapt to real-world environments.

“Our work is like translating French to English — we’re translating any given task from human to robot,” said senior author Sanjiban Choudhury, assistant professor of computer science. This translation task still faces a broader challenge: Humans move too fluidly for a robot to track and mimic, and training robots with video requires gobs of it.

RHyME is the team’s answer – a scalable approach that makes robots less finicky and more adaptive. It supercharges a robotic system to use its own memory and connect the dots when performing tasks it has viewed only once by drawing on videos it has seen. For example, a RHyME-equipped robot shown a video of a human fetching a mug from the counter and placing it in a nearby sink will comb its bank of videos and draw inspiration from similar actions – like grasping a cup and lowering a utensil.
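RHyME’s actual retrieval machinery is more sophisticated than this, but the core idea, matching a new human demonstration against a bank of previously seen robot clips by embedding similarity, can be sketched in a few lines. In this toy version, random 64-dimensional vectors stand in for learned video embeddings, and all names are placeholders rather than the paper’s API.

```python
import numpy as np

# Toy sketch of retrieval-style imitation (not the RHyME implementation).
# A human demo is embedded, then matched against a bank of robot clips by
# cosine similarity; the closest clips guide execution of each sub-task.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(demo_embedding: np.ndarray,
             clip_bank: dict[str, np.ndarray],
             k: int = 3) -> list[str]:
    """Return the k robot clips most similar to the human demonstration."""
    ranked = sorted(clip_bank,
                    key=lambda name: cosine_similarity(demo_embedding,
                                                       clip_bank[name]),
                    reverse=True)
    return ranked[:k]

# Random 64-d vectors stand in for learned video embeddings.
rng = np.random.default_rng(0)
bank = {name: rng.normal(size=64)
        for name in ["grasp_cup", "lower_utensil", "open_drawer"]}
demo = bank["grasp_cup"] + 0.1 * rng.normal(size=64)  # a "fetch the mug" demo
print(retrieve(demo, bank, k=2))   # -> ['grasp_cup', ...]
```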

RHyME paves the way for robots to learn multiple-step sequences while significantly lowering the amount of robot data needed for training. RHyME requires just 30 minutes of robot data; in a lab setting, robots trained using the system achieved a more than 50% increase in task success compared to previous methods, the researchers said.

With the development of RHyME, robots that can learn and adapt to real-world environments may be closer than they appear. By slashing the amount of training data required, the approach could make robots more practical and effective across many industries and aspects of daily life.

The RoboBee Lands Safely: A Breakthrough in Microbotics

The Harvard RoboBee is now outfitted with its most reliable landing gear to date, inspired by one of nature’s most graceful landers: the crane fly. The team has given its flying robot a set of long, jointed legs that help ease its transition from air to ground, along with an updated controller that helps it decelerate on approach, resulting in a gentle plop-down.

The Harvard RoboBee has long been a marvel of microbotics, capable of flight, diving, and hovering like a real insect. But what good is the miracle of flight without a safe way to land? The RoboBee’s creators have now overcome this hurdle with their most reliable landing gear yet, inspired by nature’s own graceful landers: the crane fly.

Led by Robert Wood, the team has given their flying robot a set of long, jointed legs that help ease its transition from air to ground. This breakthrough protects the delicate piezoelectric actuators – energy-dense “muscles” deployed for flight that are easily fractured by external forces from rough landings and collisions.

The RoboBee’s previous iterations suffered from significant ground effect: instability caused by the air vortices its flapping wings generate near a surface. The problem was addressed by Christian Chan, a graduate student who led the mechanical redesign of the robot, and Nak-seung Patrick Hyun, a postdoctoral researcher who led controlled landing tests on a leaf and on rigid surfaces.

Their paper describes improvements to the robot’s controller that let it adapt to ground effect as it approaches a surface, minimizing velocity before impact and dissipating energy quickly afterward. These innovations build on nature-inspired mechanical upgrades for skillful flight and graceful landing on various terrains.
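The article stays at this high level, but one common way to realize “minimize velocity before impact” is to command a descent speed that shrinks as altitude falls, so the vehicle arrives nearly stationary and the jointed legs absorb what little energy remains. The sketch below is a hypothetical profile of that kind, with made-up constants; it is not the RoboBee’s actual controller.

```python
import math

# Hypothetical soft-landing descent profile (not the RoboBee controller).
# Commanded downward speed decays toward a small touchdown value as the
# vehicle nears the surface, leaving the legs little energy to dissipate.

MAX_DESCENT_MPS = 0.5    # descent-speed ceiling far from the ground (made up)
TOUCHDOWN_MPS = 0.05     # residual speed the jointed legs must absorb
DECAY_PER_M = 8.0        # how aggressively speed decays with altitude

def descent_speed(altitude_m: float) -> float:
    """Commanded downward speed as a function of height above the surface."""
    return TOUCHDOWN_MPS + (MAX_DESCENT_MPS - TOUCHDOWN_MPS) * (
        1.0 - math.exp(-DECAY_PER_M * altitude_m))

for h in (0.50, 0.10, 0.01):
    print(f"{h:.2f} m -> {descent_speed(h):.3f} m/s")
# -> 0.50 m -> 0.492 m/s
#    0.10 m -> 0.298 m/s
#    0.01 m -> 0.085 m/s
```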

The team chose the crane fly, a relatively slow-moving and harmless insect that emerges from spring to fall, as its inspiration, noting the long, jointed appendages that likely give the insects the ability to dampen their landings. The researchers replicated this design in prototypes with several different leg architectures, eventually settling on designs similar to the crane fly’s.

The success of the RoboBee is a testament to the interface between biology and robotics. Alyssa Hernandez, a postdoctoral researcher with expertise in insect locomotion, notes that this platform can be used as a tool for biological research, producing studies that test biomechanical hypotheses.

Currently, the RoboBee stays tethered to off-board control systems, but the team will continue to focus on scaling up the vehicle and incorporating onboard electronics to give the robot sensing, power, and control autonomy. That three-pronged holy grail would allow the RoboBee platform to truly take off, paving the way for applications in environmental monitoring, disaster surveillance, and even artificial pollination.

Clear Navigation: Explainable AI for Ship Safety Raises Trust and Decreases Human Error

A team has developed an explainable AI model for automatic collision avoidance between ships.

The sinking of the Titanic 113 years ago was a tragic reminder of the importance of accurate navigation. The ship’s encounter with an iceberg led to one of the most infamous maritime disasters in history, and human error likely played a significant role. Today, autonomous systems powered by artificial intelligence (AI) are being developed to help ships avoid such accidents. However, for these systems to be widely adopted, it is crucial that they can provide transparent explanations for their actions.

Researchers from Osaka Metropolitan University’s Graduate School of Engineering have made a significant breakthrough in this area. They have created an explainable AI model specifically designed for ship navigation, which quantifies the collision risk for all vessels in a given area. This feature is particularly important as key sea-lanes have become increasingly congested, making it more challenging to ensure safe passage.

The researchers, Graduate Student Hitoshi Yoshioka and Professor Hirotada Hashimoto, aimed to develop an AI model that not only makes informed decisions but also provides clear explanations for its actions. By using numerical values to express the collision risk, the system can communicate its reasoning to the captain, enabling them to make more informed decisions.
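The article does not spell out the risk model itself, but the conventional numerical basis for ship collision risk is the closest point of approach (CPA): how near two vessels will pass if both hold course and speed. The sketch below shows only that standard calculation; it is an assumption-laden stand-in, not the Osaka Metropolitan University model.

```python
import math

# Standard closest-point-of-approach (CPA) calculation, the usual starting
# point for putting a number on collision risk (a sketch, not the paper's model).

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (seconds to CPA, distance at CPA in meters) for straight courses."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]   # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]   # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                       # no relative motion
        return 0.0, math.hypot(rx, ry)
    t = max(0.0, -(rx * vx + ry * vy) / v2)   # clamped: past CPA means diverging
    return t, math.hypot(rx + vx * t, ry + vy * t)

# Crossing traffic 2 km away: own ship heads east at 5 m/s, target south at 5 m/s.
t, d = cpa(own_pos=(0, 0), own_vel=(5, 0),
           tgt_pos=(2000, 2000), tgt_vel=(0, -5))
print(f"CPA in {t:.0f} s at {d:.0f} m")   # -> CPA in 400 s at 0 m: collision course
```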

According to Professor Hashimoto, “By being able to explain the basis for the judgments and behavioral intentions of AI-based autonomous ship navigation, I think we can earn the trust of maritime workers. I also believe that this research can contribute to the realization of unmanned ships.”

The findings of this study have been published in Applied Ocean Research, highlighting the potential for explainable AI to improve ship safety and reduce human error. As the maritime industry continues to evolve, the development of transparent and trustworthy autonomous systems will be essential for ensuring safe and efficient navigation.
