

The Limits of Precision: How AI Can Help Us Reach the Edge of What Physics Allows

Scientists have uncovered how close we can get to perfect optical precision using AI, despite the physical limitations imposed by light itself. By combining physics theory with neural networks trained on distorted light patterns, they showed it’s possible to estimate object positions with nearly the highest accuracy allowed by nature. This breakthrough opens exciting new doors for applications in medical imaging, quantum tech, and materials science.


Precision has been a cornerstone of physics for centuries. For 150 years, it has been known that no matter how advanced our technology becomes, there are fundamental limits to the precision with which physical phenomena can be measured. The position of a particle, for instance, can never be determined with infinite precision; a certain amount of blurring is unavoidable.

Recently, an international team of researchers from TU Wien (Vienna), the University of Glasgow, and the University of Grenoble posed a question: where is the absolute limit of precision that is possible with optical methods? And how can this limit be approached as closely as possible? The team’s findings have significant implications for a wide range of fields, including medicine.

To address this question, the researchers employed a theoretical measure known as Fisher information. This measure describes how much information an optical signal contains about an unknown parameter – such as the object position. By using Fisher information, the team was able to calculate an upper limit for the theoretically achievable precision in different experimental scenarios.
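As a rough illustration of how Fisher information sets such a limit, the sketch below computes it for a deliberately simplified measurement model: a one-dimensional Gaussian spot whose centre is the unknown object position, read out at discrete pixels with additive Gaussian noise. The spot shape, pixel grid, and noise level are all illustrative assumptions, not the model used in the study; the Cramér-Rao bound derived from the Fisher information is the kind of "highest precision allowed by nature" the article refers to.

```python
import numpy as np

# Hypothetical 1-D detector model: a Gaussian spot of width w centred on the
# unknown object position x0, read out at discrete pixels with additive
# Gaussian noise of standard deviation sigma (a simplification of the real
# scattering experiment).
pixels = np.linspace(-5.0, 5.0, 200)   # pixel coordinates (arbitrary units)
w, sigma, x0 = 1.0, 0.05, 0.3          # spot width, noise level, true position

def intensity(x0):
    """Mean intensity recorded at each pixel for object position x0."""
    return np.exp(-((pixels - x0) ** 2) / (2 * w ** 2))

# Fisher information for Gaussian noise: I(x0) = sum_i (d mu_i / d x0)^2 / sigma^2
dmu_dx0 = (pixels - x0) / w ** 2 * intensity(x0)   # analytic derivative
fisher_info = np.sum(dmu_dx0 ** 2) / sigma ** 2

# Cramér-Rao bound: no unbiased estimator can beat this standard deviation.
crb = 1.0 / np.sqrt(fisher_info)
print(f"Fisher information: {fisher_info:.1f}")
print(f"Cramér-Rao bound on position precision: {crb:.4f} (units of x)")
```

The key point is that this bound depends only on the measurement model and the noise, not on any particular estimation algorithm – which is what makes it a benchmark for the AI-based estimator described below.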

However, calculating this limit does not mean it can actually be reached in practice. In fact, a corresponding experiment designed by Dorian Bouchet from the University of Grenoble, together with Ilya Starshynov and Daniele Faccio from the University of Glasgow, showed that AI algorithms based on neural networks can come very close to it.

In the experiment, a laser beam was directed at a small, reflective object located behind a turbid liquid. The measurement conditions varied with the turbidity – and with it the difficulty of extracting precise position information from the signal. The recorded images showed only highly distorted light patterns that looked like random noise to the human eye.

But when these images were fed into a neural network – trained on many such patterns, each paired with a known object position – the network learned which patterns correspond to which positions. After sufficient training, it could determine the object position very precisely, even from new, previously unseen patterns.
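A minimal sketch of this kind of training setup is shown below in PyTorch, with a synthetic stand-in for the speckle images. The mapping from position to pattern is invented purely for illustration; the study used real camera images and its own network architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for the experiment: each "speckle pattern" is a fixed but
# random-looking function of the object position, plus detector noise. In the
# real experiment these would be camera images recorded through the turbid
# liquid, each labelled with the known object position.
n_pixels = 256
freqs = 40.0 * torch.rand(n_pixels)    # fixed random spatial frequencies
phases = 6.28 * torch.rand(n_pixels)   # fixed random phases

def make_batch(n=512, noise=0.05):
    pos = torch.rand(n, 1)                     # true positions in [0, 1]
    speckle = torch.cos(pos * freqs + phases)  # broadcasts to (n, n_pixels)
    return speckle + noise * torch.randn(n, n_pixels), pos

# Small fully connected regressor: speckle pattern in, position estimate out.
model = nn.Sequential(
    nn.Linear(n_pixels, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    x, y = make_batch()
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Evaluate on fresh, unseen patterns, as in the paper's test phase.
x_test, y_test = make_batch(2000)
rmse = (model(x_test) - y_test).pow(2).mean().sqrt()
print(f"position RMSE on unseen patterns: {rmse.item():.4f}")
```

In the study, the resulting estimation error is what gets compared against the Cramér-Rao bound computed from the Fisher information.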

The precision achieved by the AI-supported algorithm was only marginally worse than the theoretical maximum calculated using Fisher information. This means the algorithm is not only effective but nearly optimal: it achieves almost exactly the precision permitted by the laws of physics.

This realisation has far-reaching consequences: with the help of intelligent algorithms, optical measurement methods could be significantly improved in a wide range of areas – from medical diagnostics to materials research and quantum technology. In future projects, the research team wants to work with partners from applied physics and medicine to investigate how these AI-supported methods can be used in specific systems.


Cracking the Code: Scientists Achieve a Quantum Computing Breakthrough with a Single Atom

A research team has created a quantum logic gate that uses fewer qubits by encoding them with the powerful GKP error-correction code. By entangling quantum vibrations inside a single atom, they achieved a milestone that could transform how quantum computers scale.


Scientists have achieved a major breakthrough in quantum computing by encoding and entangling error-corrected quantum information within a single atom. To build a large-scale quantum computer that works, scientists and engineers need to overcome the spontaneous errors that quantum bits, or qubits, produce as they operate.

The team at the Quantum Control Laboratory at the University of Sydney Nano Institute has demonstrated a type of quantum logic gate that drastically reduces the number of physical qubits needed for its operation. They built an entangling logic gate on a single atom using an error-correcting code nicknamed the ‘Rosetta stone’ of quantum computing.

This curiously named Gottesman-Kitaev-Preskill (GKP) code has long offered a theoretical route to significantly reducing the number of physical qubits needed to produce a functioning ‘logical qubit’ – albeit by trading efficiency for complexity, which makes the codes very difficult to control. The research, published in Nature Physics, demonstrates this as a physical reality.

Led by Sydney Horizon Fellow Dr Tingrei Tan at the University of Sydney Nano Institute, the scientists used their exquisite control over the harmonic motion of a trapped ion to tame the coding complexity of GKP qubits, allowing a demonstration of their entanglement.

The team’s experiment is the first realization of a universal logical gate set for GKP qubits. They achieved this by precisely controlling the natural vibrations, or harmonic oscillations, of a trapped ion in such a way that individual GKP qubits could be manipulated or entangled as a pair.

A logic gate is an information switch that allows computers – quantum and classical alike – to be programmed to perform logical operations. Quantum logic gates use the entanglement of qubits to perform operations in a way fundamentally different from classical computing, underpinning the great promise of quantum computers.
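To make the idea of an entangling gate concrete, here is a minimal sketch using ordinary two-level qubits and NumPy. Note that this illustrates entangling gates in general, not the GKP oscillator encoding demonstrated in the paper: a Hadamard gate followed by a CNOT turns the product state |00⟩ into a maximally entangled Bell state.

```python
import numpy as np

# Standard two-qubit entangling circuit (ordinary qubits, not GKP qubits):
# a Hadamard on qubit 0 followed by a CNOT produces a Bell state from |00>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                          # start in |00>
state = CNOT @ np.kron(H, I2) @ state   # H on qubit 0, then CNOT

# Result: [0.707, 0, 0, 0.707] = (|00> + |11>)/sqrt(2), maximally entangled.
print(state)
```

The achievement in the paper is performing an equivalent entangling operation on two logical qubits that are both encoded, with error correction, in the vibrational modes of one trapped ion.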

The researchers have effectively stored two error-correctable logical qubits in a single trapped ion and demonstrated entanglement between them using quantum control software developed by Q-CTRL. This result massively reduces the quantum hardware required to create these logic gates, which allow quantum machines to be programmed.

This research represents an important demonstration that quantum logic gates can be developed with a reduced physical number of qubits, increasing their efficiency. The authors declare no competing interests. Funding was received from various sources including the Australian Research Council and private funding from H. and A. Harley.


The Quiet Threat to Trust: How Overreliance on AI Emails Can Harm Workplace Relationships

AI is now a routine part of workplace communication, with most professionals using tools like ChatGPT and Gemini. A study of over 1,000 professionals shows that while AI makes managers’ messages more polished, heavy reliance can damage trust. Employees tend to accept low-level AI help, such as grammar fixes, but become skeptical when supervisors use AI extensively, especially for personal or motivational messages. This “perception gap” can lead employees to question a manager’s sincerity, integrity, and leadership ability.


The use of artificial intelligence (AI) in writing and editing emails has become a common practice among professionals, with over 75% of them utilizing tools like ChatGPT, Gemini, Copilot, or Claude in their daily work. While these generative AI tools can make writing easier, research reveals that relying on them too heavily can undermine trust between managers and employees.

A study conducted by researchers Anthony Coman and Peter Cardon surveyed 1,100 professionals about their perceptions of emails written with low, medium, and high levels of AI assistance. The results showed a “perception gap” in messages written by managers versus those written by employees. When evaluating their own use of AI, participants tended to rate it similarly across different levels of assistance. However, when rating others’ use, the magnitude of AI assistance became important.

The study found that low levels of AI help, such as grammar or editing, were generally acceptable. However, higher levels of assistance triggered negative perceptions, especially among employees who perceived their managers’ reliance on AI-generated content as laziness or a lack of caring. This perception gap had a substantial impact on trust: only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages.

The findings suggest that managers should carefully consider message type, level of AI assistance, and relational context before using AI in their writing. While AI may be suitable for informational or routine communications, relationship-oriented messages requiring empathy, praise, congratulations, motivation, or personal feedback are better handled with minimal technological intervention.

In essence, the quiet threat to trust posed by overreliance on AI emails is a reminder that while technology can enhance productivity and efficiency, it cannot replace human touch and emotional intelligence in workplace relationships.


Revolutionizing Materials Discovery: AI-Powered Lab Finds New Materials 10x Faster

A new leap in lab automation is shaking up how scientists discover materials. By switching from slow, traditional methods to real-time, dynamic chemical experiments, researchers have created a self-driving lab that collects 10 times more data, drastically accelerating progress. This new system not only saves time and resources but also paves the way for faster breakthroughs in clean energy, electronics, and sustainability—bringing us closer to a future where lab discoveries happen in days, not years.


A team of scientists has developed an AI-powered laboratory that can collect at least 10 times more data than previous techniques, drastically expediting materials discovery while slashing costs and environmental impact. This self-driving laboratory combines machine learning and automation with chemical and materials sciences to discover new materials more quickly.

The innovation lies in the use of dynamic flow experiments, in which chemical mixtures are continuously varied as they pass through the system and are monitored in real time. This approach generates a vast amount of high-quality data, which the machine-learning algorithm uses to make smarter, faster decisions, homing in on optimal materials and processes.
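The closed loop described here can be caricatured in a few lines of Python: a surrogate model is fitted to the data gathered so far and picks the next condition to try. Everything below – the one-parameter "experiment", the Gaussian-process surrogate, and the exploration rule – is an illustrative assumption, not the team's actual system.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def run_experiment(x):
    """Hypothetical instrument response: maps one process parameter
    (e.g. a flow rate) to a noisy measured material quality."""
    return np.exp(-(x - 0.62) ** 2 / 0.02) + 0.02 * rng.normal()

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
X, y = [[0.1], [0.9]], [run_experiment(0.1), run_experiment(0.9)]  # seed data

for step in range(10):
    # Fit a surrogate model to all data collected so far.
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # Pick the next condition balancing exploitation and exploration.
    x_next = float(candidates[np.argmax(mean + 1.5 * std)])
    X.append([x_next])
    y.append(run_experiment(x_next))

best = X[int(np.argmax(y))][0]
print(f"best parameter found: {best:.3f} (true optimum 0.620)")
```

The dynamic-flow advance in the paper is, loosely speaking, about feeding this kind of decision loop far more data per unit time, because the experiment streams measurements continuously instead of running one discrete condition at a time.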

The results are staggering: after training, the self-driving lab can identify the best material candidates on its very first try, reducing the number of experiments needed and dramatically cutting down on chemical use and waste. This breakthrough has far-reaching implications for sustainable research practices and for tackling some of society’s toughest challenges.

Milad Abolhasani, corresponding author of the paper, emphasizes that this achievement is not just about speed but also about responsible research practices. The future of materials discovery, he says, is not just about how fast we can go, but about how responsibly we get there.

The paper, “Flow-Driven Data Intensification to Accelerate Autonomous Materials Discovery,” was published in the journal Nature Chemical Engineering and showcases a collaborative effort from multiple researchers and institutions. The work has been supported by the National Science Foundation and the University of North Carolina Research Opportunities Initiative program.
