

Awkward Truth: Humans Still Outshine AI in Reading Social Interactions

Humans are better than current AI models at interpreting social interactions and understanding social dynamics in moving scenes. Researchers believe this is because the neural networks behind today's AI were inspired by the architecture of the brain region that processes static images, which is different from the region that processes dynamic social scenes.


Humans have long been known for their ability to read social interactions and pick up on nuance and context. However, a recent study has found that current AI models still struggle to match human perception when it comes to describing and interpreting social dynamics in moving scenes.

The research, led by scientists at Johns Hopkins University, compared human and AI performance in understanding social interactions. The researchers presented participants with three-second video clips of people interacting with one another, performing activities side by side, or going about independent activities on their own. They then asked the viewers to rate features important for understanding social interactions on a scale of one to five.

The results showed that participants agreed with each other’s ratings, but AI models, regardless of size or training data, failed to accurately predict human judgments and neural activity responses. Even image models, which were given still frames to analyze, couldn’t reliably determine whether people were communicating.
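The paper's exact analysis pipeline isn't reproduced here, but the comparison it describes (how well raters agree with one another, and how well a model's predictions track the human consensus) can be sketched in a few lines. The data below is synthetic and the numbers are meaningless; it only illustrates the shape of the evaluation.

```python
# Minimal sketch of a human-vs-model rating comparison (illustrative only;
# not the study's actual analysis code or data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical ratings (1-5) from 20 participants for 60 clips,
# plus one model's predicted ratings for the same clips.
human = rng.integers(1, 6, size=(20, 60)).astype(float)
model = rng.uniform(1, 5, size=60)

# Inter-rater agreement: mean pairwise Spearman correlation across raters.
pairs = [spearmanr(human[i], human[j]).correlation
         for i in range(20) for j in range(i + 1, 20)]
print(f"mean inter-rater correlation: {np.mean(pairs):.2f}")

# Model-human alignment: correlate predictions with the average human rating.
consensus = human.mean(axis=0)
print(f"model-human correlation: {spearmanr(model, consensus).correlation:.2f}")
```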

Interestingly, language models performed better at predicting human behavior, while video models fared better at predicting brain activity. This dichotomy suggests that AI systems are not yet equipped to understand the complexities of social interactions in dynamic scenes.

Researchers believe this is because AI neural networks were inspired by the architecture of the brain region that processes static images. Humans, in contrast, process dynamic social scenes in a different area of the brain, one that handles relationships, context, and nuance.

The study’s findings highlight the limitations of current AI systems in interacting with humans. As researchers continue to develop more advanced AI models, it’s essential to address these blind spots and improve their ability to read social cues, contextualize interactions, and understand human behavior in real-world settings.

Ultimately, this research sheds light on the complexities of human social interaction and the need for more sophisticated AI systems that can accurately comprehend and respond to dynamic social scenes. As we move forward with AI development, it’s crucial to prioritize understanding these nuances and developing models that can match human capabilities.


Early Cancer Detection: New Algorithms Revolutionize Primary Care

Two new advanced predictive algorithms use information about a person's health conditions and simple blood tests to accurately predict a patient's chances of having a currently undiagnosed cancer, including hard-to-diagnose liver and oral cancers. The new models could revolutionize how cancer is detected in primary care and make it easier for patients to get treatment at much earlier stages.


Two groundbreaking predictive algorithms have been developed to help General Practitioners (GPs) identify patients who may have undiagnosed cancer, including hard-to-detect liver and oral cancers. These advanced models use information about a patient’s health conditions and simple blood tests to accurately predict their chances of having an undiagnosed cancer.

The National Health Service (NHS) currently uses algorithms like the QCancer scores to combine relevant patient data and identify individuals at high risk of having undiagnosed cancer, allowing GPs and specialists to call them in for further testing. Researchers from Queen Mary University of London and the University of Oxford have created two new algorithms using anonymized electronic health records from over 7.4 million adults in England.

The new models are significantly more sensitive than existing ones, potentially leading to better clinical decision-making and earlier cancer diagnosis. Crucially, these algorithms incorporate the results of seven routine blood tests as biomarkers to improve early cancer detection. This approach makes it easier for patients to receive treatment at much earlier stages, increasing their chances of survival.
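The article doesn't publish the models' predictors or coefficients, but the general shape of such a risk score (binary flags for conditions and symptoms plus routine blood-test values feeding a logistic model) can be sketched as below. Every feature name, coefficient, and data point here is an illustrative stand-in, not the actual QCancer-style models.

```python
# Schematic of a QCancer-style risk model: logistic regression over recorded
# conditions, symptoms, and routine blood-test results. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),      # e.g. family-history flag (hypothetical)
    rng.integers(0, 2, n),      # e.g. unexplained weight loss (hypothetical)
    rng.normal(13.5, 1.5, n),   # e.g. haemoglobin, g/dL (hypothetical)
    rng.normal(250, 60, n),     # e.g. platelet count, x10^9/L (hypothetical)
])
# Synthetic outcome loosely tied to the features, for demonstration only.
logit = -6 + 1.2 * X[:, 0] + 1.5 * X[:, 1] - 0.2 * (X[:, 2] - 13.5) + 0.005 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
# Estimated probability of undiagnosed cancer for one hypothetical patient.
patient = np.array([[1, 0, 11.8, 320]])
print(f"estimated risk: {model.predict_proba(patient)[0, 1]:.1%}")
```

In practice a score like this is thresholded: patients whose predicted risk exceeds a chosen cut-off are called in for further testing, which is the workflow the NHS already uses with QCancer.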

Compared to the QCancer algorithms, the new models identified four additional medical conditions associated with an increased risk of 15 different cancers, including liver, kidney, and pancreatic cancers. The researchers also found two additional family-history associations, one with lung cancer and one with blood cancer, as well as seven new symptoms of concern (itching, bruising, back pain, hoarseness, flatulence, abdominal mass, and dark urine) associated with multiple cancer types.

The study’s lead author, Professor Julia Hippisley-Cox, said: “These algorithms are designed to be embedded into clinical systems and used during routine GP consultations. They offer a substantial improvement over current models, with higher accuracy in identifying cancers – especially at early, more treatable stages.”

Dr Carol Coupland, senior researcher and co-author, added: “These new algorithms for assessing individuals’ risks of having currently undiagnosed cancer show improved capability of identifying people most at risk of having one of 15 types of cancer based on their symptoms, blood test results, lifestyle factors, and other information recorded in their medical records.”



Unlocking AI’s Potential: A New Era for Biodiversity Conservation

A new study suggests the use of artificial intelligence (AI) to rapidly analyze vast amounts of biodiversity data could revolutionize conservation efforts by enabling scientists and policymakers to make better-informed decisions.


Scientists from McGill University have mapped the untapped potential of artificial intelligence (AI) to transform biodiversity conservation. A recent study published in Nature Reviews Biodiversity examines the seven global biodiversity knowledge shortfalls, gaps that hinder our understanding of species distributions and interactions.

“The problem is that we still don’t have basic information about nature, which prevents us from knowing how to protect it,” said Laura Pollock, lead author on the study and assistant professor in McGill’s Department of Biology. “This research aims to bridge this knowledge gap by leveraging AI’s capabilities to analyze vast amounts of biodiversity data.”

The study, a collaboration between computer scientists, ecologists, and an international team of researchers, examines how AI can address the seven global biodiversity knowledge shortfalls. The findings show that AI is currently only being used in two of these areas, leaving significant opportunities untapped.

One example of AI's potential is BioCLIP, a machine learning model that detects species traits from images, aiding in species identification. Additionally, automated insect monitoring platforms like Antenna have helped identify hundreds of new insect species.

However, the researchers emphasize that AI can do more. Machine learning models trained on satellite imagery and environmental DNA can map species distributions more accurately than ever before. AI could also help infer species interactions, such as food webs and predator-prey relationships, which remain largely unstudied due to the difficulty of direct observation.
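As a rough illustration of the species-distribution idea (not the study's own code), the sketch below fits a classifier that predicts presence or absence from environmental covariates. The covariates and occurrence data are synthetic stand-ins for the satellite-derived layers and survey records such models actually consume.

```python
# Minimal species distribution model: predict presence/absence from
# environmental covariates (synthetic stand-ins for real remote-sensing layers).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 2000
temperature = rng.normal(15, 8, n)    # e.g. mean annual temperature, deg C
rainfall = rng.normal(1200, 400, n)   # e.g. annual precipitation, mm
canopy = rng.uniform(0, 1, n)         # e.g. canopy cover fraction
X = np.column_stack([temperature, rainfall, canopy])

# Synthetic presence signal: this made-up species favours warm, wet, forested sites.
score = 0.3 * temperature + 0.004 * rainfall + 3 * canopy
presence = score + rng.normal(0, 2, n) > np.median(score)

sdm = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, presence)
# Predicted habitat suitability at a new, unsurveyed location.
print(f"suitability: {sdm.predict_proba([[22.0, 1500.0, 0.8]])[0, 1]:.2f}")
```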

“This research looks at a much broader set of biodiversity questions than previous reviews,” said David Rolnick, co-author of the study, Canada CIFAR AI Chair and assistant professor of computer science at McGill. “It was also surprising to see just how narrowly AI is being applied when it has so much potential to address many of these shortfalls.”

Looking ahead, the research team emphasizes the importance of expanding data-sharing initiatives to improve AI model training, refining algorithms to reduce biases, and ensuring that AI is used ethically in conservation. With global biodiversity targets looming, they say AI, if harnessed effectively, could be one of the most powerful tools available to address the biodiversity crisis.

“AI is changing the way the world works, for better or worse,” said Pollock. “This is one of the ways it could help us.” Protecting biodiversity is crucial because ecosystems sustain human life, and AI can play a vital role in preserving our planet’s precious natural resources.



Unlocking Real-World Physics with MagicTime: A Revolutionary Text-to-Video AI Model

Computer scientists have developed a new AI text-to-video model that learns real-world physics knowledge from time-lapse videos.


Imagine being able to watch a video of a flower blooming or a tree growing before your eyes. This is no longer just a fantasy, thanks to rapid advances in text-to-video artificial intelligence (AI) models. Still, these models have struggled to produce metamorphic videos; simulating real-world processes like growth and change has remained a significant challenge.

However, researchers from the University of Rochester, Peking University, the University of California, Santa Cruz, and the National University of Singapore have made a breakthrough. They've developed a new AI text-to-video model called MagicTime, which can learn and mimic real-world physics from time-lapse videos. The model is described in a paper published in IEEE Transactions on Pattern Analysis and Machine Intelligence.

MagicTime has taken an evolutionary step towards simulating the physical, chemical, biological, or social properties of our world. According to Jinfa Huang, a PhD student supervised by Professor Jiebo Luo from Rochester’s Department of Computer Science, “Artificial intelligence has been developed to try to understand the real world and to simulate the activities and events that take place.” MagicTime is an essential step towards creating AI that can better understand and mimic the world around us.

The researchers trained MagicTime on a high-quality dataset of more than 2,000 time-lapse videos with detailed captions, enabling the model to overcome the limited motion and poor variation that hamper conventional text-to-video systems. Currently, the open-source U-Net version of MagicTime generates two-second, 512-by-512-pixel clips at eight frames per second, while an accompanying diffusion-transformer architecture extends this to ten-second clips.
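The article doesn't show MagicTime's programming interface, but open-source text-to-video diffusion models of this kind are typically driven in a few lines with Hugging Face diffusers, as in the sketch below. The checkpoint named here is a generic stand-in rather than MagicTime's own weights, and output handling can vary across library versions.

```python
# Illustrative text-to-video generation with Hugging Face diffusers.
# The checkpoint below is a generic stand-in, NOT MagicTime's weights.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
).to("cuda")

# A metamorphic, time-lapse-style prompt of the kind MagicTime targets.
result = pipe("time-lapse of a flower blooming",
              num_frames=16, num_inference_steps=25)
export_to_video(result.frames[0], "bloom.mp4", fps=8)  # 16 frames at 8 fps = 2 s
```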

The possibilities with MagicTime are vast. The model can be used to simulate not only biological metamorphosis but also buildings undergoing construction or bread baking in the oven. While the videos generated are visually interesting and the demo can be fun to play with, the researchers view this as an important step towards more sophisticated models that could provide essential tools for scientists.

“Our hope is that someday, for example, biologists could use generative video to speed up preliminary exploration of ideas,” says Huang. “While physical experiments remain indispensable for final verification, accurate simulations can shorten iteration cycles and reduce the number of live trials needed.”

The future of MagicTime is bright, and its potential applications are vast. As AI continues to evolve and improve, it’s exciting to think about the possibilities that this revolutionary text-to-video model will bring.
