
Revolutionizing Sleep Analysis: New AI Model Analyzes Full Night of Sleep with High Accuracy

Researchers have developed a powerful AI tool, built on the same transformer architecture used by large language models like ChatGPT, to process an entire night’s sleep. It is one of the largest studies of its kind to date, analyzing 1,011,192 hours of sleep. The model, called the patch foundational transformer for sleep (PFTSleep), analyzes brain waves, muscle activity, heart rate, and breathing patterns to classify sleep stages more effectively than traditional methods, streamlining sleep analysis, reducing variability, and supporting future clinical tools to detect sleep disorders and other health risks.


The world of sleep research has taken a significant leap forward with the development of a powerful new AI tool. Researchers at the Icahn School of Medicine at Mount Sinai have created the patch foundational transformer for sleep (PFTSleep), an innovative model that analyzes an entire night’s sleep with high accuracy.

Unlike traditional approaches, which rely on human experts manually scoring short segments of sleep data or on AI models that cannot analyze a patient’s full night of sleep, PFTSleep takes a more comprehensive view. Because it is trained on full-length recordings, the model can recognize sleep patterns throughout the night and across different populations and settings.

The breakthrough was made possible by the thousands of sleep recordings the investigators used to develop the tool. The researchers emphasize that the new approach streamlines sleep analysis, reduces variability, and supports future clinical tools for detecting sleep disorders and other health risks.

PFTSleep analyzes brain waves, muscle activity, heart rate, and breathing patterns to classify sleep stages more effectively than traditional methods. By recognizing these patterns, the model can provide a standardized and scalable method for sleep research and clinical use.
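
To make that concrete, here is a minimal sketch of how a patch-based transformer might turn a full night of multi-channel recordings into per-epoch sleep-stage predictions. Every name, dimension, and sampling rate below is an illustrative assumption, not the published PFTSleep architecture; the point is only that self-attention over patch tokens lets the model use context from the entire night when scoring any single 30-second epoch.

```python
# Illustrative sketch (NOT the published PFTSleep architecture): a patch-based
# transformer that maps multi-channel polysomnography signals to per-epoch
# sleep-stage predictions. All dimensions and names are assumptions.
import torch
import torch.nn as nn

class PatchSleepTransformer(nn.Module):
    def __init__(self, n_channels=4, patch_len=3000, d_model=128,
                 n_heads=8, n_layers=4, n_stages=5):
        super().__init__()
        # One patch per 30-second scoring epoch per channel (e.g., 3000
        # samples at an assumed 100 Hz for EEG, EMG, ECG, respiration).
        self.patch_embed = nn.Linear(n_channels * patch_len, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Five conventional stages: Wake, N1, N2, N3, REM.
        self.head = nn.Linear(d_model, n_stages)

    def forward(self, x):
        # x: (batch, n_epochs, n_channels, patch_len) -- the full night at
        # once, so attention can relate patterns across the whole recording.
        b, e, c, p = x.shape
        tokens = self.patch_embed(x.reshape(b, e, c * p))  # (b, e, d_model)
        encoded = self.encoder(tokens)                      # attend across night
        return self.head(encoded)                           # (b, e, n_stages)

# Example: one night of ~8 hours = 960 thirty-second epochs.
model = PatchSleepTransformer()
night = torch.randn(1, 960, 4, 3000)
stage_logits = model(night)  # (1, 960, 5)
```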

The first author of the study, Benjamin Fox, says, “This is a step forward in AI-assisted sleep analysis and interpretation.” He notes that by leveraging AI in this way, researchers can learn relevant clinical features directly from sleep study signal data and use them for sleep scoring and other clinical applications.

The potential impact of PFTSleep is considerable. By analyzing entire nights of sleep with greater consistency, the model could deepen our understanding of sleep health and its connection to overall well-being.

While the tool holds great promise, it is not meant to replace clinical expertise. Instead, it would serve as a powerful aid for sleep specialists, helping to speed up and standardize sleep analysis.

The researchers emphasize that their next goal is to refine the technology for clinical applications, such as identifying sleep-related health risks more efficiently. They also aim to expand PFTSleep’s capabilities beyond sleep-stage classification to detecting sleep disorders and predicting health outcomes.


“Dig Once” Approach to Upgrading Electrical and Broadband Infrastructure: A Cost-Effective Solution for Massachusetts Towns

When it comes to upgrading electrical and broadband infrastructure, new research shows that a ‘dig once’ approach is nearly 40% more cost-effective than replacing each system separately. The study also found that the greatest benefit comes from proactively undergrounding lines that are currently above ground, even before those lines reach the end of their useful life.


New research from the University of Massachusetts Amherst shows that upgrading electrical and broadband infrastructure with a “dig once” approach is nearly 40% more cost-effective than replacing each system separately. The study found that co-undergrounding (burying electric and broadband internet lines together) saves enough to make undergrounding upgrades feasible for smaller towns in Massachusetts.

Using computational modeling across various infrastructure upgrade scenarios, the researchers found that co-undergrounding is 39% more cost-effective than burying electrical and broadband wires separately. They also explored how aggressively towns should move lines underground, weighing the cost of converting lines from overhead to underground, the cost of outages, and the outage hours avoided once lines are buried.

A case study in Shrewsbury, Massachusetts, found that an aggressive co-undergrounding strategy over 40 years would cost $45.4 million but save $55.1 million from avoiding outages, considering factors like spoiled food, damaged home appliances, missed remote work hours, and increased use of backup power sources.

The researchers also took into account additional benefits such as increased property values from the aesthetic improvement of eliminating overhead lines, resulting in a net benefit of $11.3 million. They concluded that aggressively converting just electrical wires to underground was less expensive but had a significantly lower net benefit than co-undergrounding.
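
Those figures fit together in a simple way. Here is a back-of-the-envelope sketch of the comparison, with the property-value term inferred from the stated totals rather than taken directly from the study:

```python
# Back-of-the-envelope sketch of the Shrewsbury cost-benefit figures quoted
# above. The property-value term is inferred from the stated totals, not
# reported separately in the study.
conversion_cost = 45.4       # $M, aggressive co-undergrounding over 40 years
avoided_outage_costs = 55.1  # $M, spoiled food, missed remote work, etc.
net_benefit = 11.3           # $M, as reported, including property-value gains

outage_only_benefit = avoided_outage_costs - conversion_cost
implied_property_value_gain = net_benefit - outage_only_benefit

print(f"Benefit from avoided outages alone: ${outage_only_benefit:.1f}M")
print(f"Implied property-value (aesthetic) benefit: ${implied_property_value_gain:.1f}M")
# -> Benefit from avoided outages alone: $9.7M
# -> Implied property-value (aesthetic) benefit: $1.6M
```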

Future research directions include quantifying the impacts of co-undergrounding across different geographic locations and scenarios, investigating alternative underground routing options, and exploring other potential outage-mitigation strategies.

The findings are intended to help decision makers prioritize strategic planning for infrastructure upgrades, accounting for factors like soil composition, network type, and land use. The ultimate goal is to encourage utilities and towns to think strategically about upgrading electrical and broadband infrastructure using a “dig once” approach.


Unlocking the Code: AI-Powered Diagnosis for Drug-Resistant Infections

Scientists have developed an artificial intelligence-based method to more accurately detect antibiotic resistance in deadly bacteria such as tuberculosis and staph. The breakthrough could lead to faster and more effective treatments and help mitigate the rise of drug-resistant infections, a growing global health crisis.


The world is facing a growing health crisis – drug-resistant infections. These infections are not only harder to treat but also require more expensive and toxic medications, leading to longer hospital stays and higher mortality rates. In 2021 alone, 450,000 people developed multidrug-resistant tuberculosis (TB), with treatment success rates dropping to just 57%, according to the World Health Organization.

Tulane University scientists have developed a groundbreaking artificial intelligence-based method that more accurately detects genetic markers of antibiotic resistance in deadly bacteria like TB and staph. This innovative approach has the potential to lead to faster and more effective treatments.

The researchers introduced a new Group Association Model (GAM) that uses machine learning to identify genetic mutations tied to drug resistance. Unlike traditional tools, which can mistakenly link unrelated mutations to resistance, GAM doesn’t rely on prior knowledge of resistance mechanisms, making it more flexible and able to find previously unknown genetic changes.

Current methods of detecting resistance take too long or miss rare mutations. Tulane’s model addresses both problems by analyzing whole genome sequences and comparing groups of bacterial strains with different resistance patterns to find genetic changes that reliably indicate resistance to specific drugs. This is like using the bacteria’s entire genetic fingerprint to uncover what makes it immune to certain antibiotics.
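
As a conceptual illustration only (not Tulane’s actual implementation), the core group-comparison idea can be sketched as an association test between mutation presence and resistance phenotype across strain groups:

```python
# Conceptual sketch of the group-comparison idea behind a model like GAM:
# compare mutation frequencies between resistant and susceptible strain
# groups and flag mutations whose presence tracks the resistance phenotype.
# This illustrates the concept only; it is not Tulane's implementation.
from scipy.stats import fisher_exact

def associated_mutations(strains, alpha=1e-6):
    """strains: list of (set_of_mutations, is_resistant) pairs."""
    all_mutations = set().union(*(m for m, _ in strains))
    hits = []
    for mut in all_mutations:
        # 2x2 contingency table: mutation presence vs. resistance phenotype.
        with_mut_res = sum(1 for m, r in strains if mut in m and r)
        with_mut_sus = sum(1 for m, r in strains if mut in m and not r)
        no_mut_res = sum(1 for m, r in strains if mut not in m and r)
        no_mut_sus = sum(1 for m, r in strains if mut not in m and not r)
        table = [[with_mut_res, with_mut_sus], [no_mut_res, no_mut_sus]]
        _, p = fisher_exact(table, alternative="greater")
        if p < alpha:  # stringent threshold limits false-positive markers
            hits.append((mut, p))
    return sorted(hits, key=lambda x: x[1])

# Toy example (loose threshold for the tiny sample): the hypothetical
# mutation "rpoB_S450L" co-occurs with resistance, "syn_1" does not.
strains = [({"rpoB_S450L", "syn_1"}, True), ({"rpoB_S450L"}, True),
           ({"syn_1"}, False), (set(), False)]
print(associated_mutations(strains, alpha=0.5))  # flags only rpoB_S450L
```

A model like GAM layers machine learning on top of such associations, which is what lets it handle rare mutations and predict resistance from limited data, as described below.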

In the study, the researchers applied GAM to more than 7,000 strains of Mycobacterium tuberculosis (Mtb) and nearly 4,000 strains of Staphylococcus aureus, identifying key mutations linked to resistance. GAM not only matched or exceeded the accuracy of the WHO’s resistance database but also drastically reduced false positives: wrongly identified resistance markers that can lead to inappropriate treatment.

The model’s ability to detect resistance without needing expert-defined rules means it could potentially be applied to other bacteria or even in agriculture, where antibiotic resistance is also a concern in crops. This tool can help us stay ahead of ever-evolving drug-resistant infections and provide a clearer picture of which mutations actually cause resistance, reducing misdiagnoses and unnecessary changes to treatment.

When GAM was combined with machine learning, its ability to predict resistance from limited or incomplete data improved. In validation studies using clinical samples from China, the machine-learning-enhanced model outperformed WHO-based methods in predicting resistance to key front-line antibiotics. Catching resistance early can help doctors tailor the right treatment regimen before an infection spreads or worsens.

Staying ahead of ever-evolving drug-resistant infections is vital, and this AI-powered diagnostic tool has the potential to revolutionize the way we detect and treat deadly bacteria, leading to better patient outcomes and improved global health.


Riding the AI Wave toward Rapid, Precise Ocean Simulations

Scientists have developed an AI-powered fluid simulation model that significantly reduces computation time while maintaining accuracy. Their approach could aid offshore power generation, ship design and ocean monitoring.


Riding the crest of technological advancements, researchers at Osaka Metropolitan University have developed a groundbreaking machine learning-powered fluid simulation model that significantly reduces computation time without compromising accuracy. This innovative technique opens doors to potential applications in offshore power generation, ship design, and real-time ocean monitoring.

Predicting fluid behavior with precision is crucial for industries relying on wave and tidal energy, as well as for designing maritime structures and vessels. Traditional particle methods are commonly used but require extensive computational resources. The new AI-powered surrogate model uses graph neural networks to simplify and accelerate fluid simulations, making waves in fluid dynamics research.
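
As a rough sketch of how such a surrogate works (the details below are assumptions, not the published Osaka Metropolitan University model), a graph neural network can treat particles as nodes, connect nearby particles with edges, and learn to predict per-particle accelerations that an integrator then steps forward in time:

```python
# Conceptual sketch (NOT the published model): a graph-neural-network
# surrogate for a particle-based fluid. Particles become nodes, nearby
# pairs become edges, and learned message passing predicts per-particle
# accelerations. Dimensions and the cutoff radius are assumptions.
import torch
import torch.nn as nn

class FluidGNNSurrogate(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.node_enc = nn.Sequential(nn.Linear(6, hidden), nn.ReLU())  # pos+vel
        self.edge_mlp = nn.Sequential(nn.Linear(2 * hidden + 3, hidden), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 3))             # accel

    def forward(self, pos, vel, cutoff=0.1):
        # Connect particles closer than the cutoff radius (excluding self).
        dist = torch.cdist(pos, pos)
        src, dst = ((dist < cutoff) & (dist > 0)).nonzero(as_tuple=True)
        h = self.node_enc(torch.cat([pos, vel], dim=-1))
        # Messages encode both endpoint states and their relative offset.
        msg = self.edge_mlp(torch.cat([h[src], h[dst], pos[dst] - pos[src]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, msg)  # sum incoming messages
        return self.node_mlp(torch.cat([h, agg], dim=-1))

# One explicit integration step for 1,000 particles in a unit box.
model = FluidGNNSurrogate()
pos, vel, dt = torch.rand(1000, 3), torch.zeros(1000, 3), 1e-3
accel = model(pos, vel)
vel = vel + dt * accel
pos = pos + dt * vel
```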

However, researchers acknowledge that AI is not without its limitations. “AI can deliver exceptional results for specific problems but often struggles when applied to different conditions,” said Takefumi Higaki, an assistant professor at Osaka Metropolitan University’s Graduate School of Engineering and lead author of the study.

The team aimed to create a tool that is consistently fast and accurate. They first compared different training conditions to determine what factors were essential for high-precision fluid calculations. Then, they systematically evaluated how well their model adapted to different simulation speeds and various types of fluid movements.

The results demonstrated strong generalization capabilities across different fluid behaviors. “Our model maintains the same level of accuracy as traditional particle-based simulations across various fluid scenarios, while reducing computation time from approximately 45 minutes to just three minutes,” Higaki said.

This research marks a significant step forward in high-performance fluid simulation, offering a scalable and generalizable solution that balances accuracy with efficiency. Such improvements extend beyond the lab, enabling faster and more precise fluid simulations that can accelerate the design process for ships and offshore energy systems.

The study was published in Applied Ocean Research, highlighting the potential of AI to revolutionize ocean simulations and unlock new possibilities for industries reliant on wave and tidal energy.
