
Computer Programming

The Limits of Precision: How AI Can Help Us Reach the Edge of What Physics Allows

Scientists have uncovered how close we can get to perfect optical precision using AI, despite the physical limitations imposed by light itself. By combining physics theory with neural networks trained on distorted light patterns, they showed it’s possible to estimate object positions with nearly the highest accuracy allowed by nature. This breakthrough opens exciting new doors for applications in medical imaging, quantum tech, and materials science.


Precision has been a cornerstone of physics for centuries. For 150 years, it has been known that no matter how advanced our technology becomes, there are fundamental limits to how precisely we can measure physical phenomena. The position of a particle, for instance, can never be measured with infinite precision; a certain amount of blurring is unavoidable.

Recently, an international team of researchers from TU Wien (Vienna), the University of Glasgow, and the University of Grenoble posed a question: where is the absolute limit of precision that is possible with optical methods? And how can this limit be approached as closely as possible? The team’s findings have significant implications for a wide range of fields, including medicine.

To address this question, the researchers employed a theoretical measure known as Fisher information. This measure describes how much information an optical signal contains about an unknown parameter – such as the object position. By using Fisher information, the team was able to calculate an upper limit for the theoretically achievable precision in different experimental scenarios.
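
To make this concrete, here is a minimal numerical sketch of the idea in Python. It assumes a toy measurement model (a Gaussian spot on a noisy detector, an illustrative assumption, not the researchers' actual optical model) and computes the Fisher information and the resulting Cramér–Rao bound on position precision.

```python
import numpy as np

# Toy model (an assumption for illustration, not the paper's model):
# each detector pixel reading is Gaussian with a mean that depends on
# the object position x, and independent noise of level sigma.
# Fisher information: I(x) = sum_i (d mu_i / dx)^2 / sigma^2
# Cramer-Rao bound:   Var(x_hat) >= 1 / I(x) for any unbiased estimate
sigma = 0.05                           # per-pixel noise level (assumed)
pixels = np.linspace(-1.0, 1.0, 200)   # detector coordinates

def mean_signal(x):
    """Assumed detector response: a Gaussian spot centred at x."""
    return np.exp(-(pixels - x) ** 2 / 0.02)

def fisher_information(x, dx=1e-5):
    # numerical derivative of the mean signal with respect to x
    dmu = (mean_signal(x + dx) - mean_signal(x - dx)) / (2 * dx)
    return np.sum(dmu ** 2) / sigma ** 2

I = fisher_information(0.0)
print(f"Fisher information: {I:.1f}")
print(f"Cramer-Rao bound on std dev: {1 / np.sqrt(I):.2e}")
```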

However, calculating this limit does not mean it is out of reach in practice. In fact, a corresponding experiment designed by Dorian Bouchet from the University of Grenoble, together with Ilya Starshynov and Daniele Faccio from the University of Glasgow, showed that AI algorithms based on neural networks can come very close to it.

In the experiment, a laser beam was directed at a small, reflective object behind a turbid liquid. The measurement conditions varied with the turbidity – and with them the difficulty of extracting precise position information from the signal. The recorded images showed only highly distorted light patterns that appeared random to the human eye.

But when the images were fed into a neural network trained on many such patterns, each with a known object position, the network learned which patterns are associated with which positions. After sufficient training, it was able to determine the object position very precisely, even from new, previously unseen patterns.
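
A minimal sketch of this training setup might look as follows, in PyTorch. The synthetic "distorted patterns", the network architecture, and all hyperparameters are illustrative assumptions standing in for the real experimental data and the team's actual model.

```python
import torch
import torch.nn as nn

# Synthetic stand-in for the experiment's distorted images: random
# position-dependent interference-like patterns plus noise.
# (Purely illustrative; the real data came from a laser behind a
# turbid liquid.)
def make_batch(n=64, size=32):
    pos = torch.rand(n, 1)                  # unknown parameter: position in [0, 1]
    grid = torch.linspace(0, 1, size)
    img = torch.sin(40 * (grid[None, :] - pos)) * torch.cos(25 * grid[None, :])
    img = img[:, None, :].repeat(1, size, 1) + 0.1 * torch.randn(n, size, size)
    return img.unsqueeze(1), pos            # (n, 1, size, size), (n, 1)

# Small CNN that regresses the object position from one image.
model = nn.Sequential(
    nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):                     # train: pattern -> position
    x, y = make_batch()
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():                       # evaluate on unseen patterns
    x_test, y_test = make_batch(256)
    rmse = (model(x_test) - y_test).pow(2).mean().sqrt()
print(f"position RMSE on unseen patterns: {rmse.item():.4f}")
```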

The precision achieved by the AI-supported algorithm was only minimally worse than the theoretical maximum calculated from the Fisher information. The algorithm is therefore not only effective but almost optimal, achieving almost exactly the precision permitted by the laws of physics.
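
How close an estimator gets to the bound is often summarized as a statistical efficiency: the ratio of the theoretical variance limit to the achieved variance. With purely hypothetical numbers (not taken from the paper):

```python
# Hypothetical numbers, for illustration only (not from the paper):
crb_std = 1.00e-3       # Cramer-Rao limit on the std dev of any unbiased estimate
network_std = 1.05e-3   # measured std dev of the network's position estimates

efficiency = (crb_std / network_std) ** 2   # fraction of the bound attained
print(f"statistical efficiency: {efficiency:.0%}")   # ~91% with these numbers
```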

This realisation has far-reaching consequences: with the help of intelligent algorithms, optical measurement methods could be significantly improved in a wide range of areas – from medical diagnostics to materials research and quantum technology. In future projects, the research team wants to work with partners from applied physics and medicine to investigate how these AI-supported methods can be used in specific systems.

Computational Biology

A Quantum Leap Forward – New Amplifier Boosts Efficiency of Quantum Computers 10x

Chalmers engineers built a pulse-driven qubit amplifier that’s ten times more efficient, stays cool, and safeguards quantum states—key for bigger, better quantum machines.


Quantum computers have long been touted as revolutionary machines capable of solving complex problems that stymie conventional supercomputers. However, their full potential has been hindered by the limitations of qubit amplifiers – essential components required to read and interpret quantum information. Researchers at Chalmers University of Technology in Sweden have taken a significant step forward with the development of an ultra-efficient amplifier that reduces power consumption by 90%, paving the way for more powerful quantum computers with enhanced performance.

The new amplifier is pulse-operated, meaning it is activated only when needed to amplify qubit signals, minimizing heat generation and decoherence. This matters for scaling up quantum computers: larger systems require more amplifiers, which would otherwise drive up power consumption and degrade readout accuracy. The Chalmers team's breakthrough offers a solution to this challenge, enabling more accurate readout systems for future generations of quantum computers.

One of the key challenges in developing pulse-operated amplifiers is ensuring they respond quickly enough to keep pace with qubit readout. To address this, the researchers employed genetic programming to develop a smart control system that enables rapid response times – just 35 nanoseconds. This achievement has significant implications for the future of quantum computing, as it paves the way for more accurate and powerful calculations.
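
As a rough illustration of this kind of evolutionary search, the toy Python sketch below evolves two made-up control-pulse parameters with a simple genetic algorithm. Note the simplification: the paper used genetic programming, which evolves control logic itself, and the fitness function here is invented purely for illustration.

```python
import random

# Toy genetic search over amplifier control-pulse parameters.
# The fitness function is a made-up stand-in; the Chalmers team's
# actual objective (fast gain ramp-up that does not disturb qubit
# readout) is far more detailed.
def fitness(genome):
    rise_ns, overshoot = genome
    # reward fast rise times, penalise overshoot (assumed trade-off)
    return -(rise_ns + 100.0 * max(0.0, overshoot - 0.05))

def mutate(genome):
    rise_ns, overshoot = genome
    return (max(1.0, rise_ns + random.gauss(0, 2.0)),
            min(1.0, max(0.0, overshoot + random.gauss(0, 0.01))))

pop = [(random.uniform(20, 200), random.uniform(0, 0.5)) for _ in range(50)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                  # keep the fittest controllers
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(pop, key=fitness)
print(f"best rise time: {best[0]:.1f} ns, overshoot: {best[1]:.3f}")
```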

The new amplifier was developed in collaboration with industry partner Low Noise Factory AB and draws on the expertise of researchers at Chalmers' Terahertz and Millimeter Wave Technology Laboratory. The study, published in IEEE Transactions on Microwave Theory and Techniques, demonstrates a novel approach to ultra-efficient amplifiers for qubit readout and offers promising prospects for future research.

In conclusion, the development of this highly efficient amplifier represents a significant leap forward for quantum computing. By reducing power consumption by 90%, researchers have opened doors to more powerful and accurate calculations, unlocking new possibilities in fields such as drug development, encryption, AI, and logistics. As the field continues to evolve, it will be exciting to see how this innovation shapes the future of quantum computing.


Artificial Intelligence

AI Uncovers Hidden Heart Risks in CT Scans: A Game-Changer for Cardiovascular Care

What if your old chest scans—taken years ago for something unrelated—held a secret warning about your heart? A new AI tool called AI-CAC, developed by Mass General Brigham and the VA, can now comb through routine CT scans to detect hidden signs of heart disease before symptoms strike.


Researchers at Mass General Brigham have developed an innovative artificial intelligence (AI) tool called AI-CAC to analyze previously collected CT scans and identify individuals with high coronary artery calcium (CAC) levels, which indicate a greater risk of cardiovascular events. Their research, published in NEJM AI, demonstrated the tool's high accuracy and predictive value for future heart attacks and 10-year mortality.

Millions of chest CT scans are taken each year, often in healthy people, to screen for lung cancer or other conditions. This study reveals that these scans can also provide valuable information about cardiovascular risk – information that has so far gone unnoticed. The researchers found that AI-CAC was highly accurate (89.4%) at determining whether or not a scan contained CAC.

The gold standard for quantifying CAC uses “gated” CT scans, synchronized to the heartbeat to reduce motion blur. Most chest CT scans obtained for routine clinical purposes, however, are “nongated.” The researchers therefore developed AI-CAC, a deep learning algorithm, to sift through these nongated scans and quantify CAC.

The AI-CAC model was 87.3% accurate at determining whether a score was higher or lower than 100, a threshold indicating moderate cardiovascular risk. Importantly, AI-CAC was also predictive of 10-year all-cause mortality: patients with a CAC score over 400 had a 3.49 times higher risk of death over a 10-year period.
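
The thresholds mentioned in the article translate into a simple risk-stratification step. Here is a minimal sketch; the cut-offs of 100 and 400 come from the article, while the category labels are illustrative assumptions, not clinical guidance.

```python
def cac_risk_category(cac_score: float) -> str:
    """Stratify a coronary artery calcium (CAC) score.

    Thresholds of 100 and 400 follow the article; the category
    labels are illustrative, not clinical guidance.
    """
    if cac_score == 0:
        return "no detectable calcium"
    if cac_score < 100:
        return "low"
    if cac_score <= 400:
        return "moderate"
    return "high (3.49x higher 10-year all-cause mortality per the study)"

for score in (0, 42, 250, 870):
    print(score, "->", cac_risk_category(score))
```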

The researchers hope to conduct future studies in the general population and test whether the tool can assess the impact of lipid-lowering medications on CAC scores. This could lead to the implementation of AI-CAC in clinical practice, enabling physicians to engage with patients earlier, before their heart disease advances to a cardiac event.

As Dr. Raffi Hagopian, first author and cardiologist at the VA Long Beach Healthcare System, emphasized, “Using AI for tasks like CAC detection can help shift medicine from a reactive approach to the proactive prevention of disease, reducing long-term morbidity, mortality, and healthcare costs.”


Computer Modeling

Harnessing True Randomness from Entangled Photons: The Colorado University Randomness Beacon (CURBy)

Scientists at NIST and the University of Colorado Boulder have created CURBy, a cutting-edge quantum randomness beacon that draws on the intrinsic unpredictability of quantum entanglement to produce true random numbers. Unlike traditional methods, CURBy is traceable, transparent, and verifiable thanks to quantum physics and blockchain-like protocols. This breakthrough has real-world applications ranging from cybersecurity to public lotteries—and it’s open source, inviting the world to use and build upon it.


The Colorado University Randomness Beacon (CURBy) is a pioneering service that harnesses the true randomness of entangled photons to produce unguessable strings of numbers. This breakthrough was made possible by the work of scientists at the National Institute of Standards and Technology (NIST) and their colleagues at the University of Colorado Boulder.

“True randomness is something that nothing in the universe can predict in advance,” said Krister Shalm, a physicist at NIST. “If God does play dice with the universe, then you can turn that into the best random number generator that the universe allows.”

The CURBy system uses a Bell test to measure pairs of entangled photons whose properties are correlated even when separated by vast distances. The outcome of a measurement on an individual photon is random, but the outcomes for the pair are more strongly correlated than classical physics allows – which is what lets the researchers certify that the randomness is genuine.
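
The phrase "more correlated than classical physics allows" is usually quantified with the CHSH statistic, which cannot exceed 2 for any classical model but reaches about 2.83 for entangled photons. The sketch below simulates measurement outcomes with the ideal quantum correlations at the standard polarizer angles; it reproduces the statistics of a Bell test rather than performing one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Polarizer angles (radians) that maximise the CHSH value for
# polarization-entangled photon pairs.
a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8

def correlation(angle1, angle2, n=100_000):
    """Sample +/-1 outcome pairs with the ideal quantum correlation
    E = cos(2*(angle1 - angle2)) and return the empirical mean of x*y."""
    E = np.cos(2 * (angle1 - angle2))
    x = rng.choice([-1, 1], n)                 # Alice's outcome: random
    same = rng.random(n) < (1 + E) / 2         # Bob matches with prob (1+E)/2
    y = np.where(same, x, -x)
    return np.mean(x * y)

S = (correlation(a, b) - correlation(a, b2)
     + correlation(a2, b) + correlation(a2, b2))
print(f"CHSH S = {S:.3f}  (classical limit 2, quantum max ~2.828)")
```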

This is the first random number generator service to use quantum nonlocality as a source of its numbers, and the most transparent source of random numbers to date. The results are certifiable and traceable to a greater extent than ever before.

The CURBy system consists of a nonlinear crystal that generates entangled photons, which travel via optical fiber to separate labs at opposite ends of the hall. Once the photons reach the labs, their polarizations are measured. The outcomes of these measurements are truly random.

NIST passes millions of these quantum coin flips to a computer program at the University of Colorado Boulder, where special processing steps and strict protocols are used to turn the outcomes into 512 random bits of binary code (0s and 1s). The result is a set of random bits that no one, not even Einstein, could have predicted.
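
Conceptually, this step condenses a long stream of imperfect raw bits into a short, uniformly random output. The sketch below mimics that shape with a single SHA-512 hash; CURBy's real pipeline uses certified randomness-extraction protocols with provable guarantees, so the hash here is only a stand-in.

```python
import hashlib
import secrets

# Stand-in for the raw Bell-test outcomes: a long string of imperfect
# coin flips. (Simulated here with the OS RNG; in CURBy the raw bits
# come from measured photon polarizations.)
raw_bits = bytes(secrets.randbits(8) for _ in range(1_000_000 // 8))

# Toy extraction step: condense the raw stream to 512 output bits.
# A single SHA-512 hash merely illustrates the many-bits-in,
# 512-bits-out shape of the operation, not the certified protocol.
output = hashlib.sha512(raw_bits).digest()
print(f"{len(output) * 8} random bits: {output.hex()}")
```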

The CURBy system has been operational for several months now, with an impressive success rate of over 99.7%. The ability to verify the data behind each random number was made possible by the Twine protocol, a novel set of quantum-compatible blockchain technologies developed by NIST and its collaborators.

“The Twine protocol lets us weave together all these other beacons into a tapestry of trust,” said Jasper Palfree, a research assistant on the project at the University of Colorado Boulder. This allows any user to verify the data behind each random number, providing security and traceability.
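
That verifiability comes from linking each published output to its predecessors by hash, in the style of a blockchain. The following sketch shows bare-bones hash chaining of beacon "pulses"; the actual Twine protocol interlinks multiple beacons and carries richer metadata.

```python
import hashlib
import json
import time

def new_pulse(random_bits: bytes, prev_hash: str) -> dict:
    """Create one beacon 'pulse' linked to its predecessor by hash.
    Bare-bones hash chaining; the actual Twine protocol interlinks
    multiple beacons with richer metadata."""
    pulse = {
        "timestamp": int(time.time()),
        "random_value": random_bits.hex(),
        "previous_hash": prev_hash,
    }
    pulse["hash"] = hashlib.sha256(
        json.dumps(pulse, sort_keys=True).encode()).hexdigest()
    return pulse

def verify_chain(chain: list[dict]) -> bool:
    """Check that each pulse commits to the one before it."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["previous_hash"] != prev["hash"]:
            return False
    return True

chain = [new_pulse(b"\x00" * 64, prev_hash="genesis")]
for _ in range(3):
    chain.append(new_pulse(b"\x01" * 64, chain[-1]["hash"]))
print("chain verifies:", verify_chain(chain))
```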

The CURBy system can be used anywhere an independent, public source of random numbers would be useful, such as selecting jury candidates, making a random selection for an audit, or assigning resources through a public lottery.

“I wanted to build something that is useful. It’s this cool thing that is the cutting edge of fundamental science,” said Gautam Kavuri, a graduate student on the project. The whole process is open source and available to the public, allowing anyone to not only check their work but even build on the beacon to create their own random number generator.

The CURBy system has the potential to revolutionize fields such as cryptography, gaming, and finance, where true randomness is essential. By harnessing the power of entangled photons, scientists have created a truly independent source of random numbers that can be trusted.

