Computers & Math

The Eye-Brain Connection: How Our Thoughts Shape What We See

A new study by biomedical engineers and neuroscientists shows that the brain’s visual regions play an active role in making sense of information.

The way we see the world is not just about what our eyes take in; it’s also about how our brain processes that information. A new study has shed light on this complex relationship, revealing that the visual regions of the brain play an active role in making sense of what we see. This means that our thoughts and experiences can influence what we perceive, even before our prefrontal cortex (the part of the brain responsible for reasoning and decision-making) gets a chance to weigh in.

Imagine you’re at the grocery store, looking at a bag of carrots. Depending on your plans for the day – perhaps making a hearty winter stew or preparing for a Super Bowl party – your mind might immediately think of potatoes, parsnips, or buffalo wings. This is not just about categorizing an object; it’s about how our brain uses context and past experiences to shape what we see.

The study, led by Nuttida Rungratsameetaweemana, a biomedical engineer and neuroscientist at Columbia Engineering, has provided some of the clearest evidence yet that early sensory systems play a role in decision-making. This means that even before our brain’s prefrontal cortex kicks in, our visual system is already processing information and making connections based on what we’re thinking about.

The implications of this study are significant. It suggests that designing artificial intelligence (AI) systems that can adapt to new or unexpected situations might be more achievable than previously thought. By understanding how the brain’s visual regions interact with other parts of the brain, researchers may be able to develop AI systems that can learn and respond in a more human-like way.

In summary, this study highlights the complex relationship between our eyes, brain, and thoughts. It shows that what we see is not just about the physical world; it’s also about how our brain processes that information based on past experiences and current context.

Computers & Math

Quantum Computers Just Beat Classical Ones – Exponentially and Unconditionally

A research team has achieved the holy grail of quantum computing: an exponential speedup that’s unconditional. By using clever error correction and IBM’s powerful 127-qubit processors, they tackled a variation of Simon’s problem, showing quantum machines are now breaking free from classical limitations, for real.

Quantum computers have been touted as potential game-changers for computation, medicine, coding, and materials discovery – but only if they can be made to work reliably. A major obstacle has been noise: the errors produced during computation have, until recently, left quantum machines less powerful than classical computers.

Daniel Lidar, holder of the Viterbi Professorship in Engineering and Professor of Electrical & Computer Engineering at the USC Viterbi School of Engineering, has made significant strides in quantum error correction. In a recent study with collaborators at USC and Johns Hopkins, he demonstrated an exponential quantum scaling advantage using two IBM quantum computers, each powered by a 127-qubit Quantum Eagle processor and accessed over the cloud.

The key milestone for quantum computing, Lidar says, is to demonstrate that we can execute entire algorithms with a scaling speedup relative to ordinary “classical” computers. An exponential speedup means that as you increase a problem’s size by including more variables, the gap between the quantum and classical performance keeps growing – roughly doubling for every additional variable.

Lidar clarifies that this type of speedup is unconditional: it doesn’t rely on unproven assumptions. Prior speedup claims held only if one assumed that no better classical algorithm existed to benchmark the quantum algorithm against. Here, the team modified an algorithm for the quantum computer to solve a variation of “Simon’s problem,” an early example of a quantum algorithm that, provably and unconditionally, solves its task exponentially faster than any classical counterpart.

Simon’s problem involves finding a hidden repeating pattern in a mathematical function and is considered the precursor to Shor’s factoring algorithm, which can be used to break codes. Framed as a pattern-finding game, quantum players can win it exponentially faster than classical players.
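To make that structure concrete, here is a minimal classical sketch in Python (the bit length, the function construction, and the brute-force collision search are all invented for illustration; the study ran a modified version of the problem on quantum hardware). The function f hides a secret mask s such that f(x) = f(x XOR s), and a classical solver can only recover s by finding two queries that collide:

```python
import random
from itertools import count

# Toy instance of Simon's problem: f(x) == f(y) exactly when y == x ^ s,
# for a hidden n-bit mask s. (Illustrative only -- the real experiment ran
# a modified version of this problem on IBM quantum hardware.)
n = 16
s = random.getrandbits(n) or 1           # hidden nonzero mask

# Build a two-to-one function with the promised structure: each pair
# {x, x ^ s} maps to one shared random label.
labels = {}
def f(x: int) -> int:
    rep = min(x, x ^ s)                  # canonical member of the pair
    if rep not in labels:
        labels[rep] = random.getrandbits(n)
    return labels[rep]

# Classical strategy: query random inputs until two of them collide.
# By the birthday bound this needs on the order of 2**(n/2) queries;
# Simon's quantum algorithm needs only about n oracle calls.
seen = {}
for queries in count(1):
    x = random.getrandbits(n)
    y = f(x)
    if y in seen and seen[y] != x:
        print(f"found s = {x ^ seen[y]:0{n}b} after {queries} queries")
        break
    seen[y] = x
```

Each extra bit roughly doubles the classical workload while adding only one quantum query – the “gap keeps growing” behavior described above.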

The team achieved their exponential speedup by squeezing every ounce of performance from the hardware: shorter circuits, smarter pulse sequences, and statistical error mitigation. They limited data input, compressed quantum logic operations using transpilation, applied dynamical decoupling to detach qubits from noise, and used measurement error mitigation to correct errors left over after dynamical decoupling.
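Of those steps, measurement error mitigation is the most self-contained and can be sketched directly. A minimal NumPy version (the two-qubit confusion matrix and counts below are invented numbers; the actual study used IBM’s tooling on real calibration data) characterizes how often each prepared basis state is misread, then inverts that confusion matrix to correct the measured counts:

```python
import numpy as np

# Measurement (readout) error mitigation, sketched for 2 qubits.
# Step 1: calibration. Prepare each basis state |00>,|01>,|10>,|11> many
# times and record what the device actually reports. Column j of the
# confusion matrix A holds the observed distribution when state j was
# prepared. (Numbers here are invented for illustration.)
A = np.array([
    [0.95, 0.04, 0.05, 0.01],
    [0.02, 0.93, 0.01, 0.05],
    [0.02, 0.01, 0.92, 0.04],
    [0.01, 0.02, 0.02, 0.90],
])  # A @ p_true ~= p_measured

# Step 2: measure the real circuit and normalize counts to probabilities.
counts_measured = np.array([480, 40, 60, 420], dtype=float)
p_measured = counts_measured / counts_measured.sum()

# Step 3: undo the readout noise by solving A @ p_true = p_measured.
# Least squares is used instead of a plain inverse so the sketch stays
# well-behaved even if A is ill-conditioned.
p_true, *_ = np.linalg.lstsq(A, p_measured, rcond=None)
p_true = np.clip(p_true, 0, None)
p_true /= p_true.sum()

print("measured: ", np.round(p_measured, 3))
print("mitigated:", np.round(p_true, 3))
```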

Lidar says this result shows that today’s quantum computers firmly lie on the side of a scaling quantum advantage, and because the exponential speedup is unconditional, the performance separation cannot be reversed by a cleverer classical algorithm – making it increasingly difficult to dispute. Next steps include demonstrating practical real-world applications, reducing noise and decoherence in ever larger quantum computers, and moving beyond speedups that rely on an oracle.

Computational Biology

A Quantum Leap Forward – New Amplifier Boosts Efficiency of Quantum Computers 10x

Chalmers engineers built a pulse-driven qubit amplifier that’s ten times more efficient, stays cool, and safeguards quantum states—key for bigger, better quantum machines.

Quantum computers have long been touted as revolutionary machines capable of solving complex problems that stymie conventional supercomputers. However, their full potential has been hindered by the limitations of qubit amplifiers – essential components required to read and interpret quantum information. Researchers at Chalmers University of Technology in Sweden have taken a significant step forward with the development of an ultra-efficient amplifier that reduces power consumption by 90%, paving the way for more powerful quantum computers with enhanced performance.

The new amplifier is pulse-operated, meaning it’s activated only when needed to amplify qubit signals, minimizing heat generation and decoherence. This matters for scaling up quantum computers: larger systems require more amplifiers, which drives up power consumption and adds noise that degrades readout accuracy. The Chalmers team’s breakthrough offers a way around this trade-off, enabling more accurate readout systems for future generations of quantum computers.
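The scale of the saving follows from simple duty-cycle arithmetic, sketched below (the timings are illustrative placeholders, not Chalmers’ measured values): if the amplifier is powered only during readout windows, average power drops in proportion to the fraction of time it is on.

```python
# Back-of-the-envelope duty-cycle arithmetic for a pulse-operated amplifier.
# All numbers are illustrative placeholders, not measured Chalmers values.

p_on = 1.0          # relative power drawn while the amplifier is active
t_readout = 1.0     # time the amplifier must be on per readout (a.u.)
t_idle = 9.0        # time between readouts with the amplifier switched off

duty_cycle = t_readout / (t_readout + t_idle)       # fraction of time on
p_avg_pulsed = p_on * duty_cycle                    # pulsed operation
p_avg_continuous = p_on                             # always-on baseline

saving = 1 - p_avg_pulsed / p_avg_continuous
print(f"duty cycle: {duty_cycle:.0%}, power saving: {saving:.0%}")
# With the amplifier active only 10% of the time, average power falls by
# 90% -- the scale of reduction the Chalmers team reports.
```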

One of the key challenges in developing pulse-operated amplifiers is ensuring they respond quickly enough to keep pace with qubit readout. To address this, the researchers employed genetic programming to develop a smart control system that enables rapid response times – just 35 nanoseconds. This achievement has significant implications for the future of quantum computing, as it paves the way for more accurate and powerful calculations.
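As a rough illustration of the evolutionary approach, the toy sketch below evolves two hypothetical control parameters against an invented settling-time cost. Nothing here reflects the team’s actual genetic-programming setup – only the general keep-the-fittest-and-mutate loop that such methods share:

```python
import random

# Toy evolutionary sketch of tuning control parameters for fast amplifier
# turn-on. The cost function, parameters, and 35 ns floor are invented for
# illustration; the Chalmers team evolved a real control system.

def response_time_ns(params):
    """Hypothetical cost: how slowly the amplifier settles (lower is better)."""
    gain_slope, bias_offset = params
    return 35 + (gain_slope - 0.7) ** 2 * 100 + (bias_offset - 0.3) ** 2 * 80

def mutate(params, scale=0.05):
    return tuple(p + random.gauss(0, scale) for p in params)

# Evolve a small population: keep the fastest half, refill with mutants.
population = [(random.random(), random.random()) for _ in range(20)]
for generation in range(50):
    population.sort(key=response_time_ns)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

best = min(population, key=response_time_ns)
print(f"best settings {best} -> {response_time_ns(best):.1f} ns")
```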

The new amplifier was developed in collaboration with industry partner Low Noise Factory AB and draws on the expertise of researchers at Chalmers’ Terahertz and Millimeter Wave Technology Laboratory. The study, published in IEEE Transactions on Microwave Theory and Techniques, demonstrates a novel approach to ultra-efficient amplifiers for qubit readout and offers promising directions for future research.

In conclusion, the development of this highly efficient amplifier represents a significant leap forward for quantum computing. By reducing power consumption by 90%, researchers have opened doors to more powerful and accurate calculations, unlocking new possibilities in fields such as drug development, encryption, AI, and logistics. As the field continues to evolve, it will be exciting to see how this innovation shapes the future of quantum computing.

Artificial Intelligence

AI Uncovers Hidden Heart Risks in CT Scans: A Game-Changer for Cardiovascular Care

What if your old chest scans—taken years ago for something unrelated—held a secret warning about your heart? A new AI tool called AI-CAC, developed by Mass General Brigham and the VA, can now comb through routine CT scans to detect hidden signs of heart disease before symptoms strike.

Researchers at Mass General Brigham have developed an innovative artificial intelligence (AI) tool called AI-CAC that analyzes previously collected CT scans to identify individuals with high coronary artery calcium (CAC) levels, a marker of elevated risk for cardiovascular events. Their research, published in NEJM AI, demonstrated AI-CAC’s high accuracy and its predictive value for future heart attacks and 10-year mortality.

Millions of chest CT scans are taken each year, often in healthy people, to screen for lung cancer or other conditions. This study reveals that those same scans also contain valuable, and largely overlooked, information about cardiovascular risk. The researchers found that AI-CAC was highly accurate (89.4%) at determining whether a scan contained CAC or not.

The gold standard for quantifying CAC uses “gated” CT scans, synchronized to the heartbeat to reduce motion artifacts. Most chest CT scans obtained for routine clinical purposes, however, are “nongated.” The researchers developed AI-CAC, a deep-learning algorithm, to sift through these nongated scans and quantify CAC.

The AI-CAC model was 87.3% accurate at determining whether a scan’s score was above or below 100, the threshold for moderate cardiovascular risk. Importantly, AI-CAC also predicted 10-year all-cause mortality: patients whose CAC score exceeded 400 had a 3.49 times higher risk of death over the following decade.
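For context, Agatston CAC scores are conventionally grouped into standard clinical bands, and both thresholds the study reports (100 and 400) sit on those boundaries. A small sketch of that mapping (the band names are the standard clinical categories; only the 3.49x mortality figure comes from the study):

```python
def cac_category(score: float) -> str:
    """Map an Agatston CAC score to the standard clinical category."""
    if score == 0:
        return "no identifiable plaque"
    if score < 100:
        return "mild calcification"
    if score < 400:
        return "moderate calcification"   # the study's 100 cutoff
    return "severe calcification"         # >400: 3.49x 10-year mortality risk

for score in (0, 42, 150, 612):
    print(f"CAC {score:>4} -> {cac_category(score)}")
```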

The researchers hope to conduct future studies in the general population and test whether the tool can assess the impact of lipid-lowering medications on CAC scores. This could lead to the implementation of AI-CAC in clinical practice, enabling physicians to engage with patients earlier, before their heart disease advances to a cardiac event.

As Dr. Raffi Hagopian, first author and cardiologist at the VA Long Beach Healthcare System, emphasized, “Using AI for tasks like CAC detection can help shift medicine from a reactive approach to the proactive prevention of disease, reducing long-term morbidity, mortality, and healthcare costs.”
