We’re experimenting with AI-generated content to help deliver information faster and more efficiently.
While we try to keep things accurate, this content is part of an ongoing experiment and may not always be reliable.
Please double-check important details — we’re not responsible for how the information is used.

Artificial Intelligence

The Quantum Drumhead Revolution: A Breakthrough in Signal Transmission with Near-Perfect Efficiency

Researchers have developed an ultra-thin drumhead-like membrane that lets sound signals, or phonons, travel through it with astonishingly low loss, better than even electronic circuits. These near-lossless vibrations open the door to new ways of transferring information in systems like quantum computers or ultra-sensitive biological sensors.

The Niels Bohr Institute at the University of Copenhagen, working with the University of Konstanz and ETH Zurich, has made a discovery that could change the way we transmit information. The researchers successfully sent vibrations through an ultra-thin drumhead, only 10 mm wide, with astonishingly low loss: roughly one phonon out of a million. That is better than the signal handling of comparable electronic circuits.

The drumhead, perforated with many triangular holes, carries signals as phonons: sound waves that travel through a solid as its atoms vibrate and push on their neighbors. Sending a signal this way is akin to encoding a message and passing it through the material, where loss can occur through factors like heat or stray vibrations.
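The picture of atoms vibrating and pushing on their neighbors can be sketched in a few lines of code. The toy model below is our illustration, not the authors' setup, and all parameters are arbitrary: it launches a pulse down a one-dimensional chain of masses coupled by springs and checks that the vibration travels the chain while total energy stays essentially constant.

```python
import numpy as np

# Toy sketch of phonon transport (illustrative only): a pulse
# travelling down a 1D chain of atoms coupled by springs, each atom
# pushing on its neighbours. All units are arbitrary.
n, k, m, dt = 200, 1.0, 1.0, 0.05
x = np.zeros(n)   # displacement of each atom
v = np.zeros(n)   # velocity of each atom
x[0] = 1.0        # kick the first atom to launch a vibration pulse

def total_energy(x, v):
    # kinetic energy + spring potential (walls at both ends)
    pe = 0.5 * k * (x[0]**2 + x[-1]**2 + np.sum(np.diff(x)**2))
    return 0.5 * m * np.sum(v**2) + pe

e0 = total_energy(x, v)
for _ in range(3000):                # symplectic Euler integration
    f = np.empty(n)
    f[1:-1] = k * (x[:-2] - 2 * x[1:-1] + x[2:])
    f[0] = k * (x[1] - 2 * x[0])     # spring to the left wall
    f[-1] = k * (x[-2] - 2 * x[-1])  # spring to the right wall
    v += f / m * dt
    x += v * dt

# The pulse has propagated far down the chain while the total energy
# remains (numerically) almost constant: transport without loss.
print(abs(total_energy(x, v) - e0) / e0)
```

In a real material the pulse would also shed energy into heat; the near-lossless membrane is remarkable precisely because so little of that happens.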

The researchers’ key achievement is almost lossless transmission of signals through the membrane, which makes the platform remarkably reliable for carrying information and a promising candidate for future applications. To measure the loss, they directed the signal through the material, steering it around the holes, and observed that its amplitude decreased by only about one phonon out of a million.
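For a sense of scale, here is a back-of-envelope conversion of that figure into decibels. This is our arithmetic, not the paper's analysis, and it simply treats the reported loss as one phonon out of a signal of one million:

```python
import math

n_phonons = 1_000_000
lost = 1
loss_fraction = lost / n_phonons          # 1e-6 of the signal
transmission = 1 - loss_fraction          # 0.999999
loss_db = -10 * math.log10(transmission)  # attenuation in decibels

print(f"transmission: {transmission:.6f}")
print(f"loss: {loss_db:.2e} dB")
```

On this reading the attenuation is a few millionths of a decibel, which makes clear why the result compares so favorably with electronic circuits.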

This achievement has significant implications for quantum research. Building a quantum computer requires super-precise transfer of signals between its different parts. The development of sensors capable of measuring the smallest biological fluctuations in our own body also relies heavily on signal transfer. As Assistant Professor Xiang Xi and Professor Albert Schliesser explain, their current focus is on exploring further possibilities with this method.

“We want to experiment with more complex structures and see how phonons move around them or collide like cars at an intersection,” says Albert Schliesser. “This will give us a better understanding of what’s ultimately possible and what new applications there are.” The pursuit of basic research is about producing new knowledge, and this discovery is a testament to the power of scientific inquiry.

In conclusion, the quantum drumhead revolution has brought us one step closer to achieving near-perfect signal transmission. As researchers continue to explore the possibilities of this method, we can expect exciting breakthroughs in various fields, ultimately leading to innovative applications that will transform our understanding of the world.

Artificial Intelligence

Scientists Uncover the Secret to AI’s Language Understanding: A Phase Transition in Neural Networks

Neural networks first treat sentences like puzzles solved by word order, but once they read enough, a tipping point sends them diving into word meaning instead—an abrupt “phase transition” reminiscent of water flashing into steam. By revealing this hidden switch, researchers open a window into how transformer models such as ChatGPT grow smarter and hint at new ways to make them leaner, safer, and more predictable.

The ability of artificial intelligence systems to engage in natural conversations is a remarkable feat. However, despite this progress, the internal processes that lead to such results remain largely unknown. A recent study published in the Journal of Statistical Mechanics: Theory and Experiment (JSTAT) has shed light on this mystery. The research reveals that when small amounts of data are used for training, neural networks initially rely on the position of words in a sentence. However, as the system is exposed to enough data, it transitions to a new strategy based on the meaning of the words.

This transition occurs abruptly, once a critical data threshold is crossed – much like a phase transition in physical systems. The findings offer valuable insights into understanding the workings of these models. Just as a child learning to read starts by understanding sentences based on the positions of words, a neural network begins its journey by relying on word positions. However, as it continues to learn and train, the network “keeps going to school” and develops a deeper understanding of word meanings.

This shift is a critical discovery in the field of artificial intelligence. The researchers used a simplified model of the self-attention mechanism, a core building block of transformer language models. These models are designed to process sequences of data, such as text, and form the backbone of many modern language systems.

The study’s lead author, Hugo Cui, explains that the network can use two strategies: one based on word positions and another on word meanings. Initially, the network relies on word positions, but once a certain threshold is crossed, it abruptly shifts to relying on meaning-based strategies. This transition is likened to a phase transition in physical systems, where the system undergoes a sudden, drastic change.
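To make the two strategies concrete, here is a minimal dot-product attention sketch. It is our toy illustration, not the solvable model from the paper: the same softmax attention machinery produces purely position-based weights when queries and keys are built from positional encodings, and meaning-based weights when they are built from token embeddings.

```python
import numpy as np

# Toy single-head dot-product attention. The head can attend by
# position or by meaning, depending on what its queries/keys encode.
rng = np.random.default_rng(0)

seq_len, d = 6, 8
tok = rng.normal(size=(seq_len, d))  # "semantic" token embeddings
pos = np.eye(seq_len, d)             # crude positional encoding

def attention_weights(queries, keys):
    """Row-wise softmax of scaled dot-product scores."""
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=1, keepdims=True)

# Positional strategy: weights depend only on where tokens sit.
w_pos = attention_weights(pos, pos)
# Semantic strategy: weights depend only on what tokens mean.
w_sem = attention_weights(tok, tok)

# Each row sums to 1 in both cases, but the two strategies distribute
# attention over the same sequence in very different ways.
print(w_pos.round(2))
print(w_sem.round(2))
```

In the study's picture, a trained network effectively moves from the first kind of weighting to the second once it has seen enough data.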

Understanding this phenomenon from a theoretical viewpoint is essential. The researchers emphasize that their findings can provide valuable insights into making neural networks more efficient and safer to use. The study’s results are published in JSTAT as part of the Machine Learning 2025 special issue and included in the proceedings of the NeurIPS 2024 conference.

The research by Cui, Behrens, Krzakala, and Zdeborová, titled “A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention,” offers new knowledge that can be used to improve the performance and safety of artificial intelligence systems. The study’s findings have significant implications for the development of more efficient and effective language models, ultimately leading to advancements in natural language processing and understanding.

Artificial Intelligence

AI Uncovers Hidden Heart Risks in CT Scans: A Game-Changer for Cardiovascular Care

What if your old chest scans—taken years ago for something unrelated—held a secret warning about your heart? A new AI tool called AI-CAC, developed by Mass General Brigham and the VA, can now comb through routine CT scans to detect hidden signs of heart disease before symptoms strike.

Researchers at Mass General Brigham, together with the VA, have developed an innovative artificial intelligence (AI) tool called AI-CAC that analyzes previously collected CT scans to identify individuals with high coronary artery calcium (CAC) levels, a marker of elevated risk for cardiovascular events. Their research, published in NEJM AI, demonstrated the high accuracy and predictive value of AI-CAC for future heart attacks and 10-year mortality.

Millions of chest CT scans are taken each year, often in healthy people, to screen for lung cancer or other conditions. This study shows that these scans also contain valuable, and so far largely overlooked, information about cardiovascular risk. The researchers found that AI-CAC was highly accurate (89.4%) at determining whether a scan contained CAC at all.

The gold standard for quantifying CAC uses “gated” CT scans, synchronized to the heartbeat to reduce motion during the scan. Most chest CT scans obtained for routine clinical purposes, however, are “nongated.” The researchers therefore developed AI-CAC, a deep learning algorithm, to analyze these nongated scans and quantify CAC.

The AI-CAC model was 87.3% accurate at determining whether the score was above or below 100, the threshold for moderate cardiovascular risk. Importantly, AI-CAC also predicted 10-year all-cause mortality: participants with a CAC score over 400 had a 3.49 times higher risk of death over that period.
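The two thresholds mentioned above fit the commonly cited Agatston score bands. The helper below is an illustrative sketch of how such scores are typically bucketed; the labels and cut-offs follow common clinical convention, not code or criteria from the study itself.

```python
# Illustrative bucketing of Agatston CAC scores (conventional bands,
# not taken from the paper): 0, 1-99, 100-399, and 400 or more.
def cac_category(score: float) -> str:
    if score == 0:
        return "no detectable calcium"
    if score < 100:
        return "mild"
    if score < 400:
        return "moderate"  # the article's above-100 moderate-risk band
    return "high"          # above 400: 3.49x 10-year mortality in the study

for s in (0, 50, 150, 600):
    print(s, "->", cac_category(s))
```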

The researchers hope to conduct future studies in the general population and test whether the tool can assess the impact of lipid-lowering medications on CAC scores. This could lead to the implementation of AI-CAC in clinical practice, enabling physicians to engage with patients earlier, before their heart disease advances to a cardiac event.

As Dr. Raffi Hagopian, first author and cardiologist at the VA Long Beach Healthcare System, emphasized, “Using AI for tasks like CAC detection can help shift medicine from a reactive approach to the proactive prevention of disease, reducing long-term morbidity, mortality, and healthcare costs.”

Artificial Intelligence

Uncovering Human Superpowers: How Our Brains Master Affordances that Elude AI

Scientists at the University of Amsterdam discovered that our brains automatically understand how we can move through different environments—whether it’s swimming in a lake or walking a path—without conscious thought. These “action possibilities,” or affordances, light up specific brain regions independently of what’s visually present. In contrast, AI models like ChatGPT still struggle with these intuitive judgments, missing the physical context that humans naturally grasp.

Imagine walking through a park or swimming in a lake: abilities we take for granted. Researchers at the University of Amsterdam have shed light on how our brains process this intuitive knowledge, and the implications are fascinating. By studying brain activity while people viewed various environments, they discovered distinct patterns associated with “affordances”, or opportunities for action.

In essence, when we look at a scene, our brains automatically consider what we can do in it, whether it’s walking, cycling, or swimming. This is not just a psychological concept but a measurable property of our brains. The research team, led by Iris Groen, used an MRI scanner to investigate brain activity while participants viewed images of indoor and outdoor environments.

The results were striking: certain areas in the visual cortex became active in a way that couldn’t be explained by visible objects in the image. These brain areas not only represented what could be seen but also what you can do with it – even when participants weren’t given an explicit action instruction. This means that affordance processing occurs automatically, without conscious thought.

The researchers compared these human abilities with AI models, including ChatGPT, and found the models markedly worse at predicting possible actions. Even the best models failed to match human judgments on what is, for us, a trivially easy task. This highlights how deeply our way of seeing is intertwined with how we interact with the world.

The study has significant implications for the development of reliable and efficient AI. As more sectors adopt AI, it becomes crucial that machines not only recognize what something is but also understand what can be done with it: think of a robot navigating a disaster area, or a self-driving car distinguishing a bike path from a driveway.

Moreover, the research touches on the sustainability of AI. Current training methods are energy-intensive and often within reach only of large tech companies. Understanding how our brains process information so efficiently could help make AI smarter, more economical, and more human-friendly.

The discovery of affordance processing in the brain opens up new avenues for improving AI and making it more sustainable. As we continue to explore the intricacies of human cognition, we may uncover even more human superpowers that elude AI – a fascinating prospect indeed.
