We’re experimenting with AI-generated content to help deliver information faster and more efficiently.
While we try to keep things accurate, this content is part of an ongoing experiment and may not always be reliable.
Please double-check important details — we’re not responsible for how the information is used.

Computer Modeling

Awkward Truth: Humans Still Outshine AI in Reading Social Interactions

Humans are better than current AI models at interpreting social interactions and understanding social dynamics in moving scenes. Researchers believe this is because AI neural networks were modeled on the structure of the brain region that processes static images, which is distinct from the region that processes dynamic social scenes.


Humans have long been known for their ability to read social interactions, nuances, and context. However, a recent study has found that current AI models still struggle to match human perception when it comes to describing and interpreting social dynamics in moving scenes.

The research, led by scientists at Johns Hopkins University, compared human and AI performance in understanding social interactions. The researchers showed participants three-second video clips of people interacting, performing activities side by side, or acting independently, and then asked the viewers to rate features important for understanding social interactions on a scale of one to five.

The results showed that participants agreed with each other’s ratings, but AI models, regardless of size or training data, failed to accurately predict human judgments and neural activity responses. Even image models, which were given still frames to analyze, couldn’t reliably determine whether people were communicating.
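The pattern described above can be sketched as a simple correlation check: viewers' ratings correlate strongly with one another, while a model's predictions fail to track the human consensus. All ratings and names below are invented for illustration and do not come from the study.

```python
import statistics

# Hypothetical 1-to-5 ratings from three human viewers and one model
# for five short clips; every number here is illustrative only.
human_ratings = [
    [5, 4, 5, 2, 1],  # viewer A
    [5, 5, 4, 2, 1],  # viewer B
    [4, 5, 5, 3, 2],  # viewer C
]
model_ratings = [3, 3, 3, 3, 3]  # a model whose output ignores the clips

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

# Humans agree with one another (correlation near 1)...
print(pearson(human_ratings[0], human_ratings[1]))
# ...while the model's flat predictions carry no signal about the clips.
human_mean = [statistics.fmean(col) for col in zip(*human_ratings)]
print(pearson(model_ratings, human_mean))
```

In the study, "agreement" and "prediction" are measured with more sophisticated analyses, but the core comparison is of this shape: within-human consistency versus model-to-human fit.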

Interestingly, language models performed better at predicting human behavior, while video models fared better at predicting brain activity. This dichotomy suggests that AI systems are not yet equipped to understand the complexities of social interactions in dynamic scenes.

Researchers believe this is because AI neural networks were modeled on the structure of the brain region that processes static images. Humans, in contrast, rely on a different brain area to process dynamic social scenes, one that handles relationships, context, and nuance.

The study’s findings highlight the limitations of current AI systems in interacting with humans. As researchers continue to develop more advanced AI models, it’s essential to address these blind spots and improve their ability to read social cues, contextualize interactions, and understand human behavior in real-world settings.

Ultimately, this research sheds light on the complexities of human social interaction and the need for more sophisticated AI systems that can accurately comprehend and respond to dynamic social scenes. As we move forward with AI development, it’s crucial to prioritize understanding these nuances and developing models that can match human capabilities.

Artificial Intelligence

Shedding Light on Shadow Branches: Revolutionizing Computing Efficiency in Modern Data Centers

Researchers have developed a new technique called ‘Skia’ to help computer processors better predict future instructions and improve computing performance.


The collaboration between trailblazing engineers and industry professionals has led to a groundbreaking technique called Skia, which may transform the future of computing efficiency for modern data centers.

In data centers, large computers process massive amounts of data, but often struggle to keep up due to taxing workloads. This results in slower performance, causing search engines to generate answers more slowly or not at all. To address this issue, researchers at Texas A&M University have developed Skia in collaboration with Intel, AheadComputing, and Princeton.

The team includes Dr. Paul V. Gratz, a professor in the Department of Electrical and Computer Engineering, Dr. Daniel A. Jiménez, a professor in the Department of Computer Science and Engineering, and Chrysanthos Pepi, a graduate student in the Department of Electrical and Computer Engineering.

“Processing instructions has become a major bottleneck in modern processor design,” Gratz said. “We developed Skia to better predict what’s coming next and alleviate that bottleneck.” Skia not only helps predict future instructions but also improves the throughput of instructions on the system, leading to quicker performance and lower power consumption for the data center.

“Think of throughput in terms of being a server in a restaurant,” Gratz said. “You have lots and lots of jobs to do. How many tasks can you complete, or how many instructions can you execute, per unit time? You want high throughput, especially for computing.”

Making data centers even 10% more efficient means a company that previously needed to build 100 of them around the country now needs only 90. That is significant: these data centers cost millions of dollars each and consume roughly the equivalent of an entire power plant’s output.
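The arithmetic behind that claim is straightforward; the figures below are the article's illustrative round numbers, not measured results.

```python
# A 10% efficiency gain means the same total workload fits in 10% fewer
# data centers: 100 centers' worth of work now needs only 90 centers.
centers_before = 100
efficiency_gain = 0.10

centers_after = round(centers_before * (1 - efficiency_gain))
print(centers_after)  # 90
print(centers_before - centers_after, "fewer data centers")
```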

Skia identifies and decodes “shadow branches” — branch instructions that sit in fetched but unused bytes — and stores them in a memory area called the Shadow Branch Buffer, which can be accessed alongside the branch target buffer (BTB). “What makes this technique interesting is that most of the future instructions were already available, and we demonstrate that Skia, with a minimal hardware budget, can make data centers more efficient, delivering nearly twice the performance improvement of adding the same amount of storage to the existing hardware,” Pepi said.
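The mechanism can be sketched in software terms as a front end that consults a conventional BTB and, on a miss, a Shadow Branch Buffer populated from pre-decoded shadow branches. This is a toy model under loose assumptions, not the actual Skia hardware; the class and method names are invented for illustration.

```python
# Toy model of a branch-prediction front end with a Shadow Branch
# Buffer (SBB) backing the conventional Branch Target Buffer (BTB).
class BranchPredictorFrontEnd:
    def __init__(self):
        self.btb = {}  # pc -> target, for branches already executed
        self.sbb = {}  # pc -> target, for shadow branches pre-decoded
                       # from fetched-but-unused bytes

    def record_executed_branch(self, pc, target):
        """A branch that actually executed lands in the BTB as usual."""
        self.btb[pc] = target

    def record_shadow_branch(self, pc, target):
        """Skia-style: a branch decoded from unused bytes enters the SBB."""
        self.sbb[pc] = target

    def predict(self, pc):
        # In hardware both structures are consulted in parallel;
        # this sequential lookup models the same outcome.
        if pc in self.btb:
            return self.btb[pc]
        return self.sbb.get(pc)  # an SBB hit avoids a cold BTB miss

fe = BranchPredictorFrontEnd()
fe.record_executed_branch(0x400, 0x480)
fe.record_shadow_branch(0x440, 0x500)  # never executed, but pre-decoded
print(hex(fe.predict(0x440)))  # 0x500: predicted despite no BTB entry
```

The payoff in the real design is that a branch first encountered on the fetch path can be predicted before it has ever executed, which is exactly where a BTB alone would miss.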

Their findings, “Skia: Exposing Shadow Branches,” were published in one of the leading computer architecture conferences, the ACM International Conference on Architectural Support for Programming Languages and Operating Systems. The team also traveled to the Netherlands to present their work to colleagues from around the globe.

Funding for this research is administered by the Texas A&M Engineering Experiment Station (TEES), the official research agency for Texas A&M Engineering.


Breast Cancer

Early Cancer Detection: New Algorithms Revolutionize Primary Care

Two new advanced predictive algorithms use information about a person’s health conditions and simple blood tests to accurately predict a patient’s chances of having a currently undiagnosed cancer, including hard-to-diagnose liver and oral cancers. The new models could revolutionize how cancer is detected in primary care and make it easier for patients to get treatment at much earlier stages.



Two groundbreaking predictive algorithms have been developed to help General Practitioners (GPs) identify patients who may have undiagnosed cancer, including hard-to-detect liver and oral cancers. These advanced models use information about a patient’s health conditions and simple blood tests to accurately predict their chances of having an undiagnosed cancer.

The National Health Service (NHS) currently uses algorithms like the QCancer scores to combine relevant patient data and identify individuals at high risk of having undiagnosed cancer, allowing GPs and specialists to call them in for further testing. Researchers from Queen Mary University of London and the University of Oxford have created two new algorithms using anonymized electronic health records from over 7.4 million adults in England.

The new models are significantly more sensitive than existing ones, potentially leading to better clinical decision-making and earlier cancer diagnosis. Crucially, these algorithms incorporate the results of seven routine blood tests as biomarkers to improve early cancer detection. This approach makes it easier for patients to receive treatment at much earlier stages, increasing their chances of survival.
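To make concrete how a model of this kind combines conditions and blood-test results, here is a toy logistic risk score. The coefficients, feature names, and structure are entirely invented for illustration; this is not the published QCancer model or the new Queen Mary/Oxford algorithms.

```python
import math

# Invented coefficients for a toy risk score; a real model is fitted to
# millions of anonymized health records, not hand-written like this.
COEFFS = {
    "age_per_year": 0.04,
    "abdominal_mass": 1.2,
    "dark_urine": 0.9,
    "low_haemoglobin": 0.7,   # e.g. from a routine full blood count
    "raised_platelets": 0.6,
}
INTERCEPT = -9.0

def risk_of_undiagnosed_cancer(age, flags):
    """Map a linear predictor to a probability via the logistic function."""
    z = INTERCEPT + COEFFS["age_per_year"] * age
    z += sum(COEFFS[f] for f in flags)
    return 1 / (1 + math.exp(-z))

# Each additional risk factor raises the predicted probability.
p1 = risk_of_undiagnosed_cancer(68, ["dark_urine"])
p2 = risk_of_undiagnosed_cancer(68, ["dark_urine", "low_haemoglobin"])
print(f"{p1:.4f} -> {p2:.4f}")
```

In practice such a score would be computed automatically inside the GP's clinical system, with patients above a chosen threshold flagged for further testing.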

Compared to the QCancer algorithms, the new models identified four additional medical conditions associated with an increased risk of 15 different cancers, including liver, kidney, and pancreatic cancers. The researchers also found two additional associations between family history and lung cancer and blood cancer, as well as seven new symptoms of concern (itching, bruising, back pain, hoarseness, flatulence, abdominal mass, dark urine) associated with multiple cancer types.

The study’s lead author, Professor Julia Hippisley-Cox, said: “These algorithms are designed to be embedded into clinical systems and used during routine GP consultations. They offer a substantial improvement over current models, with higher accuracy in identifying cancers – especially at early, more treatable stages.”

Dr Carol Coupland, senior researcher and co-author, added: “These new algorithms for assessing individuals’ risks of having currently undiagnosed cancer show improved capability of identifying people most at risk of having one of 15 types of cancer based on their symptoms, blood test results, lifestyle factors, and other information recorded in their medical records.”


Biodiversity

Unlocking AI’s Potential: A New Era for Biodiversity Conservation

A new study suggests the use of artificial intelligence (AI) to rapidly analyze vast amounts of biodiversity data could revolutionize conservation efforts by enabling scientists and policymakers to make better-informed decisions.



Scientists from McGill University have made a groundbreaking discovery, revealing the untapped potential of artificial intelligence (AI) to revolutionize biodiversity conservation. A recent study published in Nature Reviews Biodiversity highlights the seven global biodiversity knowledge shortfalls, which hinder our understanding of species distributions and interactions.

“The problem is that we still don’t have basic information about nature, which prevents us from knowing how to protect it,” said Laura Pollock, lead author on the study and assistant professor in McGill’s Department of Biology. “This research aims to bridge this knowledge gap by leveraging AI’s capabilities to analyze vast amounts of biodiversity data.”

The study, a collaboration between computer scientists, ecologists, and an international team of researchers, examines how AI can address the seven global biodiversity knowledge shortfalls. The findings show that AI is currently only being used in two of these areas, leaving significant opportunities untapped.

One example of AI’s potential is BioCLIP, which uses machine learning models to detect species traits from images, aiding in species identification. Additionally, automated insect monitoring platforms like Antenna have helped identify hundreds of new insect species.

However, the researchers emphasize that AI can do more. Machine learning models trained on satellite imagery and environmental DNA can map species distributions more accurately than ever before. AI could also help infer species interactions, such as food webs and predator-prey relationships, which remain largely unstudied due to the difficulty of direct observation.
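As a toy illustration of the species-distribution idea, the sketch below predicts presence or absence at a new site from two environmental covariates using nearest-neighbour matching against known records. Real systems train on satellite imagery and environmental DNA; every number and the crude distance scaling here are invented.

```python
# Known observation records: (mean_temp_C, annual_precip_mm, present?)
# All values are fabricated for illustration.
records = [
    (24.0, 2200, True),
    (23.5, 1900, True),
    (8.0, 600, False),
    (11.0, 750, False),
]

def predict_presence(temp, precip):
    """1-nearest-neighbour presence prediction over two covariates."""
    def dist(r):
        # Divide precipitation by 100 so both covariates contribute
        # on comparable scales to the distance.
        return ((r[0] - temp) ** 2 + ((r[1] - precip) / 100) ** 2) ** 0.5
    return min(records, key=dist)[2]

print(predict_presence(22.0, 2000))  # True: warm, wet site
print(predict_presence(10.0, 700))   # False: cool, dry site
```

A production species-distribution model replaces the hand-scaled distance with a learned model over many covariates, but the input-to-prediction shape is the same.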

“This research looks at a much broader set of biodiversity questions than previous reviews,” said David Rolnick, co-author of the study, Canada CIFAR AI Chair and assistant professor of computer science at McGill. “It was also surprising to see just how narrowly AI is being applied when it has so much potential to address many of these shortfalls.”

Looking ahead, the research team emphasizes the importance of expanding data-sharing initiatives to improve AI model training, refining algorithms to reduce biases, and ensuring that AI is used ethically in conservation. With global biodiversity targets looming, they say AI, if harnessed effectively, could be one of the most powerful tools available to address the biodiversity crisis.

“AI is changing the way the world works, for better or worse,” said Pollock. “This is one of the ways it could help us.” Protecting biodiversity is crucial because ecosystems sustain human life, and AI can play a vital role in preserving our planet’s precious natural resources.

