
Computational Biology

“Dig Once” Approach to Upgrading Electrical and Broadband Infrastructure: A Cost-Effective Solution for Massachusetts Towns

When it comes to upgrading electrical and broadband infrastructure, new research shows that a “dig once” approach is nearly 40% more cost-effective than replacing the two systems separately. The study also found that the greatest benefit comes from proactively undergrounding lines that are currently above ground, even if those lines haven’t reached the end of their useful life.

New research from the University of Massachusetts Amherst shows that upgrading electrical and broadband infrastructure with a “dig once” approach is nearly 40% more cost-effective than replacing the two systems separately. The study found that co-undergrounding, burying electric and broadband internet lines together, lowers costs enough to make undergrounding upgrades feasible for smaller towns in Massachusetts.

Using computational modeling across various infrastructure upgrade scenarios, the researchers found that co-undergrounding is 39% more cost-effective than separately burying electrical and broadband wires. They also explored how aggressively towns should pivot to putting lines underground, considering factors such as the cost of converting lines from above ground to underground, the cost of outages, and the hours of outages that can be avoided if lines are underground.

A case study in Shrewsbury, Massachusetts, found that an aggressive co-undergrounding strategy over 40 years would cost $45.4 million but save $55.1 million by avoiding outages, accounting for costs such as spoiled food, damaged home appliances, missed remote-work hours, and increased use of backup power sources.

The researchers also took into account additional benefits, such as increased property values from the aesthetic improvement of eliminating overhead lines, resulting in a net benefit of $11.3 million. They concluded that aggressively undergrounding only the electrical wires was less expensive but yielded a significantly lower net benefit than co-undergrounding.
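For readers who want to see the arithmetic, here is a minimal back-of-the-envelope sketch of the Shrewsbury figures reported above. The article gives only the headline totals, so the size of the additional benefits (property values and the like) is inferred here purely to reconcile those totals; it is not a figure from the study.

```python
# Back-of-the-envelope check of the Shrewsbury case-study totals reported above.
# Only the headline figures appear in the article; "other_benefits" is inferred
# so the totals reconcile with the reported $11.3 million net benefit.
undergrounding_cost = 45.4e6   # 40-year cost of aggressive co-undergrounding ($)
avoided_outage_cost = 55.1e6   # savings from avoided outages: spoiled food, lost
                               # remote-work hours, appliances, backup power ($)
other_benefits = 1.6e6         # inferred: property-value and other gains implied
                               # by the reported net benefit ($)

net_benefit = avoided_outage_cost + other_benefits - undergrounding_cost
print(f"Net benefit over 40 years: ${net_benefit / 1e6:.1f} million")  # $11.3 million
```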

Future research directions include quantifying the impacts of co-undergrounding across various geographic locations and scenarios, investigating alternative underground routing options, and exploring other potential outage-mitigation strategies.

The study’s findings aim to help decision makers prioritize strategic planning for infrastructure upgrades, considering factors like soil composition, network type, and land use variables. The ultimate goal is to encourage utilities and towns to think strategically about upgrading electrical and broadband infrastructure using a “dig once” approach.

Artificial Intelligence

Transistors Get a Boost: Scientists Develop New, More Efficient Material

Shrinking silicon transistors have reached their physical limits, but a team from the University of Tokyo is rewriting the rules. They’ve created a cutting-edge transistor using gallium-doped indium oxide with a novel “gate-all-around” structure. By precisely engineering the material’s atomic structure, the new device achieves remarkable electron mobility and stability. This breakthrough could fuel faster, more reliable electronics powering future technologies from AI to big data systems.

Scientists have long considered transistors to be one of the greatest inventions of the 20th century. These tiny components are the backbone of modern electronics, allowing us to amplify or switch electrical signals. However, as electronics continue to shrink, it’s become increasingly difficult to scale down silicon-based transistors. It seemed like we had hit a wall.

A team of researchers from The University of Tokyo has come up with an innovative solution. They’ve developed a new transistor made from gallium-doped indium oxide (InGaOx), a material that can be structured as a crystalline oxide. This orderly atomic structure supports high electron mobility, making the material a promising candidate to replace silicon in traditional transistors.

To enhance efficiency and scalability, the researchers designed their transistor with a “gate-all-around” structure: the gate, which switches the current on or off, wraps entirely around the channel where the current flows, improving efficiency and allowing further miniaturization.

To create this new transistor, the team used atomic-layer deposition to coat the channel region with a thin film of InGaOx, one atomic layer at a time. They then heated the film to transform it into the crystalline structure needed for high electron mobility.

The results are promising: their gate-all-around MOSFET achieves a high mobility of 44.5 cm²/(V·s) and operates stably under applied stress for nearly three hours. In fact, this new transistor outperforms similar devices that have previously been reported.
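As a rough illustration of what that mobility figure means (not a calculation from the paper), the low-field drift velocity follows from v = μE. The field strength below is an assumed, illustrative value, not one reported by the researchers.

```python
# Illustrative only: electron drift velocity implied by the reported mobility,
# using the low-field relation v = mu * E. The field strength is an assumed value.
mu = 44.5          # reported mobility, cm^2 / (V*s)
E = 1.0e4          # assumed electric field, V/cm (illustrative)

v_drift = mu * E   # drift velocity, cm/s
print(f"Drift velocity at {E:.0e} V/cm: {v_drift:.2e} cm/s")  # ~4.5e5 cm/s
```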

This breakthrough has the potential to revolutionize electronics by providing more reliable and efficient components suited for applications with high computational demand, such as big data and artificial intelligence. These tiny transistors promise to help next-gen technology run smoothly, making a significant difference in our everyday lives.

Biochemistry Research

“Unlocking Nature’s Math: Uncovering Gauge Freedoms in Biological Models”

Scientists have developed a unified theory for mathematical parameters known as gauge freedoms. Their new formulas will allow researchers to interpret research results much faster and with greater confidence. The development could prove fundamental for future efforts in agriculture, drug discovery, and beyond.

In the intricate language of mathematics lies a phenomenon known as gauge freedoms. The concept may seem abstract and far removed from everyday life, but its impact is felt deeply in the biological sciences. Researchers at Cold Spring Harbor Laboratory (CSHL) have made groundbreaking strides in understanding and harnessing it.

Gauge freedoms are essentially the mathematical equivalent of having multiple ways to describe a single truth. In science, when modeling complex systems like DNA or protein sequences, different parameters can result in identical predictions. This phenomenon is crucial in fields like electromagnetism and quantum mechanics. However, until now, computational biologists have had to employ various ad hoc methods to account for gauge freedoms, rather than tackling them directly.
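To make the idea concrete, here is a minimal sketch (not the CSHL team's code) of a gauge freedom in a toy additive sequence model: shifting every base parameter at a position by a constant and absorbing the total shift into the intercept leaves every prediction unchanged, so two distinct parameter sets describe the same model.

```python
# Toy example of a gauge freedom in an additive sequence model:
# f(seq) = theta0 + sum over positions p of theta[p, base at p].
import numpy as np

bases = "ACGT"
L = 4                                   # hypothetical sequence length
rng = np.random.default_rng(0)
theta0 = 0.5
theta = rng.normal(size=(L, 4))         # one parameter per (position, base)

def predict(seq, theta0, theta):
    return theta0 + sum(theta[p, bases.index(b)] for p, b in enumerate(seq))

# Gauge transformation: shift all base parameters at each position by c[p]
# and absorb the total shift into the intercept. Predictions never change.
c = rng.normal(size=L)
theta_shifted = theta + c[:, None]
theta0_shifted = theta0 - c.sum()

for seq in ["ACGT", "TTAA", "GCGC"]:
    assert np.isclose(predict(seq, theta0, theta),
                      predict(seq, theta0_shifted, theta_shifted))
print("Two different parameter sets give identical predictions for every sequence.")
```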

CSHL’s Associate Professor Justin Kinney, along with colleague David McCandlish, led a team that aimed to change this. They developed a unified theory for handling gauge freedoms in biological models. This breakthrough could revolutionize applications across multiple fields, from plant breeding to drug development.

“Gauge freedoms are ubiquitous in computational biology,” says Prof. Kinney. “Historically, they’ve been dealt with as annoying technicalities.” However, through their research, the team has shown that understanding and systematically addressing these freedoms can lead to more accurate and faster analysis of complex genetic datasets.

Their new mathematical theory provides efficient formulas for a wide range of biological applications. These formulas will empower scientists to interpret research results with greater confidence and speed. Furthermore, the researchers have published a companion paper revealing where gauge freedoms originate – in symmetries present within real biological sequences.

As Prof. McCandlish notes, “We prove that gauge freedoms are necessary to interpret the contributions of particular genetic sequences.” This finding underscores the significance of understanding gauge freedoms not just as a theoretical concept but also as a fundamental requirement for advancing future research in agriculture, drug discovery, and beyond.

Pancreatic Cancer

‘Fast-fail’ AI Blood Test Revolutionizes Pancreatic Cancer Treatment Monitoring

An artificial intelligence technique for detecting DNA fragments shed by tumors and circulating in a patient’s blood could help clinicians more quickly identify and determine if pancreatic cancer therapies are working.

The Johns Hopkins Kimmel Cancer Center has made groundbreaking advancements in developing an artificial intelligence (AI) technique for detecting DNA fragments shed by tumors and circulating in a patient’s blood, potentially leading to better treatment outcomes for patients with pancreatic cancer. The innovative method, called ARTEMIS-DELFI, can identify therapeutic responses and was found to be more accurate than existing clinical and molecular markers.

The study, published in Science Advances, involved testing the ARTEMIS-DELFI approach in blood samples from two large clinical trials of pancreatic cancer treatments. Researchers discovered that this AI-driven technique could predict patient outcomes better than imaging or other current methods just two months after treatment initiation. The team found that the simpler and potentially more broadly applicable ARTEMIS-DELFI was a superior test compared to another method, called WGMAF.

Victor E. Velculescu, M.D., Ph.D., senior study author and co-director of the cancer genetics and epigenetics program at the cancer center, emphasized that time is crucial when treating patients with pancreatic cancer. Many patients receive diagnoses at a late stage, when cancer may progress rapidly. The team wants to know as quickly as possible whether a therapy is working or not, enabling them to switch to another treatment if it’s not effective.

Currently, clinicians use imaging tools like CT scans and MRIs to monitor cancer treatment response and tumor progression. However, these methods can produce results that are less accurate for patients receiving immunotherapies. In the study, Velculescu and his colleagues tested two alternate approaches to monitoring treatment response in patients participating in the phase 2 CheckPAC trial of immunotherapy for pancreatic cancer.

One approach, called WGMAF, analyzed DNA from tumor biopsies along with cell-free DNA in blood samples to detect a treatment response. The other, ARTEMIS-DELFI, used machine learning to scan millions of cell-free DNA fragments in the patient’s blood samples alone. Both approaches were able to detect which patients were benefiting from the therapies.
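As a rough illustration of the machine-learning idea, here is a simplified, hypothetical sketch of a fragmentomics-style workflow: summarize each blood sample's cell-free DNA fragment lengths into a few features and train a classifier to score treatment response. This is not the actual ARTEMIS-DELFI pipeline; the feature choices and simulated data are assumptions for illustration only.

```python
# Hypothetical, simplified sketch of a fragmentomics classifier (NOT the actual
# ARTEMIS-DELFI method): per-sample fragment-length features + logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def fragment_features(fragment_lengths):
    """Reduce a sample's fragment-length distribution to a few summary features."""
    lengths = np.asarray(fragment_lengths)
    short_frac = np.mean(lengths < 150)        # fraction of short fragments
    return np.array([short_frac, lengths.mean(), lengths.std()])

# Simulated training data (illustrative assumption): responding patients carry
# less tumor-derived DNA, so their fragments skew longer on average.
X, y = [], []
for label, mean_len in [(1, 168), (0, 152)]:   # 1 = responding, 0 = not responding
    for _ in range(30):
        sample = rng.normal(loc=mean_len, scale=20, size=5000)
        X.append(fragment_features(sample))
        y.append(label)

model = LogisticRegression().fit(np.array(X), np.array(y))

# Score a new (simulated) blood sample.
new_sample = rng.normal(loc=165, scale=20, size=5000)
prob = model.predict_proba(fragment_features(new_sample)[None, :])[0, 1]
print(f"Predicted probability of treatment response: {prob:.2f}")
```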

However, not all patients had tumor samples, and many patients’ tumor samples contained a small fraction of cancer cells compared to normal pancreatic and other cells, confounding the WGMAF test. The team validated that ARTEMIS-DELFI was an effective treatment response monitoring tool in a second clinical trial called the PACTO trial.

The study confirmed that ARTEMIS-DELFI can identify therapeutic responses in patients with pancreatic cancer just two months after treatment initiation, providing valuable insights for clinicians to make informed decisions. The team hopes that this innovative AI-driven technique will revolutionize the way we monitor and treat pancreatic cancer, ultimately improving patient outcomes.
