Artificial Intelligence

Unlocking Digital Carpentry for Everyone

Many products in the modern world are in some way fabricated using computer numerical control (CNC) machines, which use computers to automate machine operations in manufacturing. While simple in concept, instructing these machines is often complex in practice. A team of researchers has devised a system that demonstrates how to mitigate some of this complexity.

The world of digital carpentry has long been dominated by complex computer numerical control (CNC) machines, which use computers to automate manufacturing processes. However, a team of researchers from the University of Tokyo has developed a revolutionary system called Draw2Cut that makes it possible for anyone to create intricate designs and objects without prior knowledge of CNC machines or their typical workflows.

Draw2Cut allows users to draw desired designs directly onto material to be cut or milled using standard marker pens. The colors used in these drawings instruct the system on how to mill and cut the design into wood, making it a highly accessible mode of manufacture. This novel approach has been inspired by the way carpenters mark wood for cutting, making it possible for people without extensive experience to create complex designs.

The key to Draw2Cut lies in its unique drawing language, where colors and symbols are assigned specific meanings to produce unambiguous machine instructions. Purple lines mark the general shape of a path to mill, while red and green marks and lines provide instructions to cut straight down into the material or produce gradients. This intuitive workflow makes it possible for users to create complex designs without prior knowledge of CNC machines.
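The article describes the color language only at a high level, but the idea can be sketched as a simple lookup from detected pen colors to milling operations. The color names, operation kinds, and depth values below are illustrative assumptions, not the project's actual encoding:

```python
# Hypothetical sketch of a Draw2Cut-style color-to-instruction mapping.
# The operation kinds and depth parameters are illustrative assumptions,
# not the system's real encoding.

from dataclasses import dataclass

@dataclass
class MillingOp:
    kind: str        # "path", "plunge", or "gradient"
    depth_mm: float  # target cutting depth (assumed values)

# Colors detected on the wood surface map to machine instructions.
COLOR_LANGUAGE = {
    "purple": MillingOp(kind="path", depth_mm=0.0),     # marks the path to mill
    "red":    MillingOp(kind="plunge", depth_mm=5.0),   # cut straight down
    "green":  MillingOp(kind="gradient", depth_mm=3.0), # produce a gradient
}

def interpret_stroke(color: str) -> MillingOp:
    """Translate a detected pen color into a milling operation."""
    try:
        return COLOR_LANGUAGE[color]
    except KeyError:
        raise ValueError(f"unrecognized marker color: {color}")
```

Because the mapping is a plain table, it is easy to see how the open-source release could let users swap in their own colors and meanings.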

While Draw2Cut is not yet capable of producing items of professional quality, its main aim is to open up this mode of manufacture to more people, making it a valuable tool for hobbyists and professionals alike. The system has been tested with wood, but can also work on other materials such as metal, depending on the capabilities of the CNC machine.

The developers of Draw2Cut have made their source code open-source, allowing developers with different needs to customize it accordingly. This means that users can tailor the color language and stroke patterns to suit their specific requirements, making it an even more versatile tool for digital fabrication.

Overall, Draw2Cut represents a significant breakthrough in the field of digital carpentry, making it possible for anyone to create complex designs and objects without extensive experience or knowledge of CNC machines. Its potential impact on the world of manufacturing is vast, and its intuitive workflow and unique drawing language make it an invaluable tool for hobbyists and professionals alike.

Artificial Intelligence

Shedding Light on Shadow Branches: Revolutionizing Computing Efficiency in Modern Data Centers

Researchers have developed a new technique called ‘Skia’ to help computer processors better predict future instructions and improve computing performance.

The collaboration between trailblazing engineers and industry professionals has led to a groundbreaking technique called Skia, which may transform the future of computing efficiency for modern data centers.

In data centers, large computers process massive amounts of data, but often struggle to keep up due to taxing workloads. This results in slower performance, causing search engines to generate answers more slowly or not at all. To address this issue, researchers at Texas A&M University have developed Skia in collaboration with Intel, AheadComputing, and Princeton.

The team includes Dr. Paul V. Gratz, a professor in the Department of Electrical and Computer Engineering, Dr. Daniel A. Jiménez, a professor in the Department of Computer Science and Engineering, and Chrysanthos Pepi, a graduate student in the Department of Electrical and Computer Engineering.

“Processing instructions has become a major bottleneck in modern processor design,” Gratz said. “We developed Skia to better predict what’s coming next and alleviate that bottleneck.” Skia can not only help better predict future instructions but also improve the throughput of instructions on the system, leading to quicker performance and less power consumption for the data center.

“Think of throughput in terms of being a server in a restaurant,” Gratz said. “You have lots and lots of jobs to do. How many tasks can you complete or how many instructions can you execute per unit time? You want high throughput, especially for computing.”

Making a data center up to 10% more efficient means a company that previously needed to build 100 data centers around the country now only needs 90. That is significant: these data centers cost millions of dollars each, and they consume roughly the equivalent of the entire output of a power plant.

Shadow branches are branch instructions that are fetched into the cache but never decoded because they lie off the executed path, so the branch target buffer (BTB) never records them. Skia identifies and decodes these shadow branches in otherwise-unused bytes, storing them in a memory area called the Shadow Branch Buffer, which can be accessed alongside the BTB. “What makes this technique interesting is that most of the future instructions were already available, and we demonstrate that Skia, with a minimal hardware budget, can make data centers more efficient, with nearly twice the performance improvement versus adding the same amount of storage to the existing hardware,” Pepi said.
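The lookup structure described above can be illustrated with a toy model: the BTB holds branches seen on the executed path, while a second buffer holds shadow branches pre-decoded from unused cache-line bytes, and both are probed for a prediction. The sizes, addresses, and field names here are illustrative assumptions, not the paper's hardware design:

```python
# Toy illustration of the Skia idea: probe a "shadow branch buffer"
# alongside the branch target buffer (BTB). All addresses and structure
# details are illustrative assumptions, not the actual microarchitecture.

class BranchBuffer:
    def __init__(self):
        self.entries = {}  # branch PC -> predicted target address

    def insert(self, pc, target):
        self.entries[pc] = target

    def lookup(self, pc):
        return self.entries.get(pc)

btb = BranchBuffer()            # branches already seen on the executed path
shadow_buffer = BranchBuffer()  # shadow branches pre-decoded from unused bytes

def predict_target(pc):
    # In hardware both structures would be probed in parallel; a BTB hit
    # wins, otherwise a shadow entry can still steer instruction fetch.
    return btb.lookup(pc) or shadow_buffer.lookup(pc)

btb.insert(0x400A10, 0x400B00)
shadow_buffer.insert(0x400A20, 0x400C40)  # decoded but never yet executed
```

The point of the sketch is that the shadow buffer supplies predictions for branches the BTB has never had a chance to learn, which is where the reported efficiency gain comes from.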

Their findings, “Skia: Exposing Shadow Branches,” were published in one of the leading computer architecture conferences, the ACM International Conference on Architectural Support for Programming Languages and Operating Systems. The team also traveled to the Netherlands to present their work to colleagues from around the globe.

Funding for this research is administered by the Texas A&M Engineering Experiment Station (TEES), the official research agency for Texas A&M Engineering.

Artificial Intelligence

Estimating Biological Age with AI: A New Frontier in Cancer Care and Beyond

Researchers developed FaceAge, an AI tool that calculates a patient’s biological age from a photo of their face. In a new study, the researchers tied FaceAge results to health outcomes in people with cancer: When FaceAge estimated a younger age than a cancer patient’s chronological age, the patient did significantly better after cancer treatment, whereas patients with older FaceAge estimates had worse survival outcomes.

Imagine having a tool that can accurately predict a person’s biological age based on their facial appearance. This might seem like science fiction, but a team of investigators at Mass General Brigham has made this concept a reality using artificial intelligence (AI) technology called FaceAge.

FaceAge uses deep learning algorithms to analyze photographs of individuals and estimate their biological age. The tool was trained on 58,851 photos of presumed healthy individuals from public datasets and tested in a cohort of 6,196 cancer patients from two centers, who had photographs taken at the start of radiotherapy treatment.

The results were striking: patients with cancer appeared significantly older than those without cancer, with an average FaceAge that was about five years older than their chronological age. Moreover, older FaceAge predictions were associated with worse overall survival outcomes across multiple cancer types.
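The quantity driving these results is the gap between the model's estimate and the patient's calendar age. A minimal sketch of that calculation is below; the five-year threshold is taken from the average gap reported in the study and is used here only as an illustrative cutoff, not a clinically validated one:

```python
# Minimal sketch of the FaceAge "age gap" signal. The threshold is an
# illustrative assumption based on the reported average, not a validated
# clinical cutoff.

def age_gap(face_age: float, chronological_age: float) -> float:
    """Positive values mean the model judges the face older than calendar age."""
    return face_age - chronological_age

def flag_elevated_risk(face_age: float, chronological_age: float,
                       threshold: float = 5.0) -> bool:
    # The study reports cancer patients averaged a FaceAge about five years
    # above their chronological age, motivating this example threshold.
    return age_gap(face_age, chronological_age) >= threshold
```

In the study's framing, a positive gap correlated with worse overall survival, which is why a simple signed difference is the natural summary statistic.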

This technology has significant implications for cancer care. By using FaceAge to inform clinical decision-making, healthcare providers may be able to tailor treatment plans more effectively to individual patients based on their estimated biological age and health status.

The researchers also found that FaceAge outperformed clinicians in predicting short-term life expectancies of patients receiving palliative radiotherapy. This highlights the potential for AI-driven tools like FaceAge to provide valuable insights in high-pressure situations, where accurate predictions are crucial.

While further research is needed before this technology can be used in real-world clinical settings, the possibilities are vast and exciting. By expanding its capabilities to predict diseases, general health status, and lifespan, FaceAge has the potential to revolutionize our approach to healthcare.

The co-senior authors of the study, Hugo Aerts, PhD, and Ray Mak, MD, emphasize that this technology opens up new avenues for biomarker discovery from photographs, with implications far beyond cancer care or predicting age. As we increasingly consider chronic diseases as diseases of aging, accurately predicting an individual’s aging trajectory becomes crucial for developing effective treatment plans.

The door to a whole new realm of biomedical innovation has been opened, and it is essential that this technology be developed within a strong regulatory and ethical framework to ensure its safe use in various applications, ultimately helping to save lives.

Artificial Intelligence

Ping Pong Robot Aces High-Speed Precision Shots

Engineers developed a ping-pong-playing robot that quickly estimates the speed and trajectory of an incoming ball and precisely hits it to a desired location on the table.

The MIT engineers’ latest creation is a powerful and lightweight ping pong bot that returns shots with high-speed precision. This table tennis tech has come a long way since the 1980s, when researchers first started building robots to play ping pong. The problem requires a unique combination of technologies, including high-speed machine vision, fast and nimble motors and actuators, precise manipulator control, and accurate real-time prediction.

The team’s new design comprises a multijointed robotic arm that is fixed to one end of a standard ping pong table and wields a standard ping pong paddle. Aided by several high-speed cameras and a high-bandwidth predictive control system, the robot quickly estimates the speed and trajectory of an incoming ball and executes one of several swing types – loop, drive, or chop – to precisely hit the ball to a desired location on the table with various types of spin.
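The prediction step can be illustrated with a toy ballistic model: given the ball's position and velocity estimated by the cameras, extrapolate to where it crosses the robot's hitting plane. The real system must also account for drag, spin (the Magnus effect), and table bounces; this sketch assumes pure projectile motion, and all coordinates are illustrative:

```python
# Toy version of the trajectory-prediction step: extrapolate a ballistic
# trajectory to the plane where the paddle intercepts the ball. Drag,
# spin, and bounces are deliberately ignored in this simplified sketch.

G = 9.81  # gravitational acceleration, m/s^2

def time_to_plane(x0: float, vx: float, x_plane: float) -> float:
    """Time for the ball to reach the vertical plane x = x_plane."""
    return (x_plane - x0) / vx

def predict_intercept(pos, vel, x_plane):
    """Return (y, z) where the ball crosses the plane x = x_plane.

    pos = (x, y, z) in meters, vel = (vx, vy, vz) in m/s; z is height.
    """
    x0, y0, z0 = pos
    vx, vy, vz = vel
    t = time_to_plane(x0, vx, x_plane)
    y = y0 + vy * t
    z = z0 + vz * t - 0.5 * G * t * t  # gravity pulls the ball down
    return y, z
```

Because the whole exchange lasts a fraction of a second, this estimate must be produced and refined continuously from camera frames, which is why high-bandwidth prediction and fast actuators are both essential.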

In tests, the engineers threw 150 balls at the robot, one after the other, from across the ping pong table. The bot successfully returned the balls with a hit rate of about 88 percent across all three swing types. The robot’s strike speed approaches the top return speeds of human players and is faster than that of other robotic table tennis designs.

The researchers have since tuned the robot’s reaction time and found the arm hits balls faster than existing systems, at velocities of 20 meters per second. Advanced human players have been known to return balls at speeds between 21 and 25 meters per second.

“Some of the goal of this project is to say we can reach the same level of athleticism that people have,” Nguyen says. “And in terms of strike speed, we’re getting really, really close.”

The team’s design has several implications for robotics and AI research. It could be adapted to improve the speed and responsiveness of humanoid robots, particularly for search-and-rescue scenarios, or situations where a robot would need to quickly react or anticipate.

This technology also has potential applications in smart robotic training systems. A robot like this could mimic the maneuvers that an opponent would do in a game environment, in a way that helps humans play and improve.

The researchers plan to further develop their system, enabling it to cover more of the table and return a wider variety of shots. This research is supported, in part, by the Robotics and AI Institute.
