Computer Modeling

Unveiling New Disturbances in Hypersonic Flows

At hypersonic speeds, complex phenomena such as boundary layers and shock waves arise as gases interact with the surface of the vehicle. Running fully 3D simulations for the first time, researchers observed new disturbances in the flow.

As researchers delve into the complex world of hypersonic speeds, they encounter unexpected phenomena when gases interact with the surface of vehicles. A team from the University of Illinois Urbana-Champaign’s Department of Aerospace Engineering has made a groundbreaking discovery by conducting fully 3D simulations for the first time. This achievement was possible due to access to Frontera, the National Science Foundation-funded leadership-class computer system, and software developed by previous graduate students.

The researchers, Deborah Levin and Irmak Taylan Karpuzcu, observed new disturbances in the flow around a cone-shaped model at hypersonic speeds. Normally, one would expect concentric ribbons of gas around the cone, but they noticed breaks in the flow within the shock layers of both single and double cone shapes. These breaks were particularly noticeable near the tip of the cone, where air molecules are packed more closely together and viscous effects are stronger.

The team’s findings indicate that as the Mach number increases, the shock wave gets closer to the surface, promoting these instabilities. However, when they ran simulations at lower speeds (Mach 6), they did not see the break in the flow. This suggests that the cone geometry, which represents a simplified version of many hypersonic vehicles, plays a crucial role in understanding how the flow affects surface properties.
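One way to see why a higher Mach number pulls the shock closer to the body is the Mach angle relation for a weak wave, mu = arcsin(1/M). The real shock angle on a cone also depends on cone geometry and the gas model, so the sketch below is only illustrative, and the Mach values chosen (beyond the Mach 6 case mentioned above) are arbitrary:

```python
import math

def mach_angle_deg(mach):
    """Mach angle mu = arcsin(1/M): the angle a weak (Mach) wave makes
    with the flow direction. A smaller angle means the wave lies closer
    to the body surface."""
    return math.degrees(math.asin(1.0 / mach))

for m in (6.0, 10.0, 16.0):
    print(f"M = {m:4.1f}  ->  Mach angle = {mach_angle_deg(m):5.2f} deg")
```

At Mach 6 the angle is roughly 9.6 degrees; by Mach 16 it has shrunk to about 3.6 degrees, consistent with the shock layer thinning toward the surface as speed increases.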

The researchers’ software allowed them to run the simulation efficiently on parallel processors, making it much faster than previous methods. They were able to compare their results with data from experiments under high-speed conditions and found breaks that they didn’t expect to see. The most challenging part of the work was analyzing why these breaks in the flow were happening.

The team developed a code based on triple-deck theory to simulate the problem numerically a second time. Because running the 3D direct simulation Monte Carlo computation is demanding, they set up a second program to verify that everything worked and that the flow conditions stayed within the theory's limits; it reproduced the break, which appeared as two large segments with 180-degree periodicity around the cone.

The beauty of direct simulation Monte Carlo lies in its ability to track individual air molecules in the flow and capture shocks. The method uses randomness and repeated sampling to compute fluid behavior, making it more computationally expensive than classical computational fluid dynamics methods.
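As a rough illustration of the idea, and emphatically not the team's solver, the toy sketch below moves particles ballistically and then performs randomized pairwise collisions within spatial cells, which is the basic loop DSMC builds on. The 1D setup, the fixed collision probability, and the velocity-swap "collision" are all simplifications standing in for a real collision model:

```python
import random

def dsmc_step(particles, cell_size, dt, collision_prob):
    """One highly simplified DSMC-style step in 1D:
    (1) move every particle ballistically,
    (2) bin particles into spatial cells,
    (3) within each cell, let random pairs collide; equal-mass 1D
        elastic collisions simply swap the two velocities."""
    # 1. ballistic motion
    for p in particles:
        p["x"] += p["v"] * dt
    # 2. bin into cells
    cells = {}
    for p in particles:
        cells.setdefault(int(p["x"] // cell_size), []).append(p)
    # 3. stochastic binary collisions within each cell
    for members in cells.values():
        random.shuffle(members)
        for a, b in zip(members[::2], members[1::2]):
            if random.random() < collision_prob:
                a["v"], b["v"] = b["v"], a["v"]
    return particles

random.seed(0)
gas = [{"x": random.uniform(0, 10), "v": random.gauss(0, 1)} for _ in range(1000)]
for _ in range(50):
    dsmc_step(gas, cell_size=1.0, dt=0.01, collision_prob=0.2)
```

Even this toy version conserves momentum and energy exactly, since the swap "collision" is elastic; production DSMC codes replace it with physical collision models and weight each simulated particle to represent many real molecules.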

This research has significant implications for designing hypersonic vehicles, as understanding how the flow affects surface properties can lead to better design considerations. The team’s findings also demonstrate the importance of conducting fully 3D simulations in researching complex phenomena at high speeds.

Artificial Intelligence

Clear Navigation: Explainable AI for Ship Safety Raises Trust and Decreases Human Error

A team has developed an explainable AI model for automatic collision avoidance between ships.

The sinking of the Titanic 113 years ago was a tragic reminder of the importance of accurate navigation. The ship’s encounter with an iceberg led to one of the most infamous maritime disasters in history, and human error likely played a significant role. Today, autonomous systems powered by artificial intelligence (AI) are being developed to help ships avoid such accidents. However, for these systems to be widely adopted, it is crucial that they can provide transparent explanations for their actions.

Researchers from Osaka Metropolitan University’s Graduate School of Engineering have made a significant breakthrough in this area. They have created an explainable AI model specifically designed for ship navigation, which quantifies the collision risk for all vessels in a given area. This feature is particularly important as key sea-lanes have become increasingly congested, making it more challenging to ensure safe passage.

The researchers, Graduate Student Hitoshi Yoshioka and Professor Hirotada Hashimoto, aimed to develop an AI model that not only makes informed decisions but also provides clear explanations for its actions. By using numerical values to express the collision risk, the system can communicate its reasoning to the captain, enabling them to make more informed decisions.
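The article does not spell out the model's risk formula, but a common way to express collision risk numerically in maritime navigation is through the distance and time at the closest point of approach (DCPA and TCPA). Below is a minimal sketch under that assumption; the safety radius and the mapping from DCPA to a 0-1 risk score are illustrative choices, not the researchers' model:

```python
import math

def collision_risk(own_pos, own_vel, tgt_pos, tgt_vel, safe_dist=0.5):
    """Quantify collision risk between two vessels from the closest
    point of approach. Positions in nautical miles, velocities in
    knots, both as 2D tuples. Returns (DCPA, TCPA, risk in [0, 1])."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    # Time at which the separation is smallest (clamped to the future).
    tcpa = 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    # Illustrative score: 1 inside the safety radius, decaying
    # toward 0 as the predicted miss distance grows.
    risk = 1.0 if dcpa <= safe_dist else safe_dist / dcpa
    return dcpa, tcpa, risk

# Head-on encounter: two ships closing at a combined 20 knots.
print(collision_risk((0, 0), (10, 0), (5, 0), (-10, 0)))
```

Computing one such score per nearby vessel yields exactly the kind of per-ship numerical risk values the researchers describe communicating to the captain.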

According to Professor Hashimoto, “By being able to explain the basis for the judgments and behavioral intentions of AI-based autonomous ship navigation, I think we can earn the trust of maritime workers. I also believe that this research can contribute to the realization of unmanned ships.”

The findings of this study have been published in Applied Ocean Research, highlighting the potential for explainable AI to improve ship safety and reduce human error. As the maritime industry continues to evolve, the development of transparent and trustworthy autonomous systems will be essential for ensuring safe and efficient navigation.

Computer Graphics

3D Streaming Gets Leaner: Predicting Visible Content for Immersive Experiences

A new approach to streaming technology may significantly improve how users experience virtual reality and augmented reality environments, according to a new study. The research describes a method for directly predicting visible content in immersive 3D environments, potentially reducing bandwidth requirements by up to 7-fold while maintaining visual quality.

A groundbreaking new approach to 3D streaming has emerged, poised to revolutionize how users experience virtual reality (VR) and augmented reality (AR) environments. Researchers at NYU Tandon School of Engineering have developed a method for directly predicting visible content in immersive 3D environments, potentially reducing bandwidth requirements by up to 7-fold while maintaining visual quality.

This innovative technology addresses the fundamental challenge of streaming immersive content: the massive amount of data required to render high-quality 3D experiences. Traditional video streaming sends everything within a frame; the new approach instead processes only what the user is actually looking at.

The system works by dividing 3D space into “cells” and treating each cell as a node in a graph network. It uses transformer-based graph neural networks to capture spatial relationships between neighboring cells, and recurrent neural networks to analyze how visibility patterns evolve over time. This approach reduces error accumulation and improves prediction accuracy, allowing the system to predict what will be visible for a user 2-5 seconds ahead – a significant improvement over previous systems that could only accurately predict a user’s field of view (FoV) a fraction of a second ahead.
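The cells-as-graph-nodes idea can be sketched simply. In the stand-in below, the neighbor-expansion predictor is a naive placeholder for the transformer-based graph and recurrent networks described above, and the unit cell size and 6-neighbor connectivity are assumptions:

```python
def build_cell_graph(occupied_cells):
    """Divide 3D space into unit cells and connect each occupied cell
    to its face-adjacent neighbors, forming a graph whose nodes are
    cells, as in the cells-as-graph formulation."""
    cells = set(occupied_cells)
    graph = {c: [] for c in cells}
    for (x, y, z) in cells:
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if n in cells:
                graph[(x, y, z)].append(n)
    return graph

def predict_visible(graph, visible_now, hops=1):
    """Naive stand-in for the learned predictor: assume that over the
    next few seconds the user can reveal cells within `hops` graph
    steps of what is visible now. The real system learns this mapping
    from spatial structure and past visibility patterns."""
    frontier, seen = set(visible_now), set(visible_now)
    for _ in range(hops):
        frontier = {n for c in frontier for n in graph[c]} - seen
        seen |= frontier
    return seen
```

Only the cells in the predicted set need to be streamed at full quality, which is where the bandwidth savings come from.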

The research team’s approach has been applied in an ongoing project to bring point cloud video to dance education, making 3D dance instruction streamable on standard devices with lower bandwidth requirements. This technology has the potential to transform the way people experience immersive content, enabling more responsive AR/VR experiences with reduced data usage and allowing developers to create more complex environments without requiring ultra-fast internet connections.

“We’re seeing a transition where AR/VR is moving from specialized applications to consumer entertainment and everyday productivity tools,” said Yong Liu, professor in the Electrical and Computer Engineering Department at NYU Tandon. “Bandwidth has been a constraint. This research helps address that limitation.”

Brain Injury

Unlocking the Secrets of the Brain with Digital Twins

In a new study, researchers created an AI model of the mouse visual cortex that predicts neuronal responses to visual images.

A group of researchers has created a digital twin of the mouse brain, which can predict the response of tens of thousands of neurons to new videos and images. This AI model has been trained on large datasets of brain activity collected from real mice watching movie clips.

The digital twin is an example of a foundation model, capable of learning from large datasets and applying that knowledge to new tasks and new types of data. This technology has the potential to revolutionize the field of neuroscience, allowing scientists to perform experiments on a realistic simulation of the mouse brain and gaining insights into how the brain processes information.

The researchers used a combination of AI and neuroscience techniques to create the digital twin. They first recorded the brain activity of real mice as they watched movies made for people, which provided a realistic representation of what the mice might see in natural settings. The films were action-packed and had a lot of movement, which strongly activated the visual system of the mice.

The researchers then used this data to train a core model, which could be customized into a digital twin of any individual mouse with additional training. These digital twins were able to closely simulate the neural activity of their biological counterparts in response to a variety of new visual stimuli, including videos and static images.
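Conceptually, the workflow resembles fitting a small per-animal readout on top of a frozen shared model. The sketch below is a deliberately tiny stand-in: hand-picked linear features and gradient descent replace the real core network and neural recordings, and the synthetic "mouse" is invented for illustration:

```python
def core_features(stimulus):
    """Stand-in for the shared 'core' model trained on aggregated
    recordings from many mice: maps a stimulus (a single number here,
    a movie frame in the real work) to a feature vector."""
    return [stimulus, stimulus ** 2, 1.0]

def fit_digital_twin(stimuli, responses, lr=0.05, epochs=3000):
    """Customize the core into one mouse's digital twin by fitting a
    small per-mouse readout on top of the frozen core features, using
    plain stochastic gradient descent on squared error."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for s, r in zip(stimuli, responses):
            f = core_features(s)
            err = sum(wi * fi for wi, fi in zip(w, f)) - r
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return lambda s: sum(wi * fi for wi, fi in zip(w, core_features(s)))

# Fit a twin to a synthetic "mouse" whose response to stimulus s is 2*s + 1.
stimuli = [0.0, 0.5, 1.0, 1.5, 2.0]
responses = [2 * s + 1 for s in stimuli]
twin = fit_digital_twin(stimuli, responses)
```

The key property this sketch shares with the real system is that the expensive part (the core) is learned once from aggregated data, while each individual twin needs only a small amount of additional, animal-specific training.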

The large quantity of aggregated training data was key to the success of the digital twins, allowing them to make accurate predictions about the brain’s response to new situations. The researchers verified these predictions against high-resolution imaging of the mouse visual cortex, which provided unprecedented detail.

This technology has significant implications for the field of neuroscience. By creating a digital twin of the mouse brain, scientists can perform experiments on a realistic simulation of the brain, allowing them to gain insights into how the brain processes information and the principles of intelligence.

The researchers plan to extend their modeling into other brain areas and to animals, including primates, with more advanced cognitive capabilities. This could ultimately lead to the creation of digital twins of at least parts of the human brain, which would be a major breakthrough in the field of neuroscience.

Funding:

The study received funding from the Intelligence Advanced Research Projects Activity, a National Science Foundation NeuroNex grant, the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke (grant U19MH114830), the National Eye Institute (grant R01 EY026927 and Core Grant for Vision Research T32-EY-002520-37), the European Research Council and the Deutsche Forschungsgemeinschaft.
