Computer Modeling

Beyond Assistants: Human Researchers Remain Essential in AI-Powered Research

Researchers asked generative AI to write a research paper. While adept at some steps, it wholly failed at others.

Human researchers still hold a vital place in the research process, despite the growing capabilities of artificial intelligence (AI). A recent study conducted by University of Florida researchers found that popular AI models like OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini can be useful assistants but fall short of replacing human scientists in many critical areas.

The study, titled “AI and the Advent of the Cyborg Behavioral Scientist,” examined how well these AI systems could handle various stages of academic research. The team tested these models through six stages: ideation, literature review, research design, documenting results, extending the research, and manuscript production.

While AI performed well in certain areas, such as ideation and research design, it struggled to produce valuable outputs in others, including literature review, documenting results, and manuscript production. The researchers found that human oversight was necessary to verify and refine AI-generated outputs.
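
To make that workflow concrete, here is a minimal, hypothetical sketch in Python of how a researcher might ask a chat model for a first-pass draft at each of the six stages, with every draft flagged for human verification. It assumes the OpenAI client library and an API key; the prompts and model name are illustrative, not the study’s actual protocol.

```python
# Illustrative only: not the study's protocol. Assumes the OpenAI Python
# client is installed and OPENAI_API_KEY is set; prompts are hypothetical.
from openai import OpenAI

STAGES = [
    "ideation", "literature review", "research design",
    "documenting results", "extending the research", "manuscript production",
]

client = OpenAI()

def draft_for_stage(stage: str, topic: str) -> str:
    """Ask a chat model for a first-pass draft for one stage of a project."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                f"Acting as a behavioral-science research assistant, draft the "
                f"'{stage}' stage of a study on {topic}."
            ),
        }],
    )
    return response.choices[0].message.content

# Every draft is a starting point only: citations, methods, and claims
# must be verified and refined by a human researcher before use.
drafts = {s: draft_for_stage(s, "consumer trust in AI assistants") for s in STAGES}
```
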

“The specific steps that bring us joy (and angst) as researchers are likely as varied as the research in which they are used,” said Geoff Tomaino, an assistant professor in marketing at the University of Florida Warrington College of Business. “As these AI tools evolve, it will be up to each individual researcher to decide for which steps of the research process they want to become a cyborg behavioral researcher, and for which they would like to remain simply human.”

The study’s findings have significant implications for researchers and journals alike. The University of Florida team advises researchers to remain highly skeptical of AI outputs, treating them as starting points that require human verification and refinement. Journals, in turn, are encouraged to adopt policies that require disclosure of AI assistance in research papers and that largely prohibit the use of AI in the review process.

The research, published in the Journal of Consumer Psychology, underscores the continuing importance of human researchers in the academic research process.

Breast Cancer

Early Cancer Detection: New Algorithms Revolutionize Primary Care

Two new advanced predictive algorithms use information about a person’s health conditions and simple blood tests to accurately predict a patient’s chances of having a currently undiagnosed cancer, including hard-to-diagnose liver and oral cancers. The new models could revolutionize how cancer is detected in primary care and make it easier for patients to get treatment at much earlier stages.

Two groundbreaking predictive algorithms have been developed to help General Practitioners (GPs) identify patients who may have undiagnosed cancer, including hard-to-detect liver and oral cancers. These advanced models use information about a patient’s health conditions and simple blood tests to accurately predict their chances of having an undiagnosed cancer.

The National Health Service (NHS) currently uses algorithms like the QCancer scores to combine relevant patient data and identify individuals at high risk of having undiagnosed cancer, allowing GPs and specialists to call them in for further testing. Researchers from Queen Mary University of London and the University of Oxford have created two new algorithms using anonymized electronic health records from over 7.4 million adults in England.

The new models are significantly more sensitive than existing ones, potentially leading to better clinical decision-making and earlier cancer diagnosis. Crucially, these algorithms incorporate the results of seven routine blood tests as biomarkers to improve early cancer detection. This approach makes it easier for patients to receive treatment at much earlier stages, increasing their chances of survival.
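
As a rough illustration of the kind of model being described, here is a minimal sketch, on synthetic data with hypothetical feature names, of a classifier that combines recorded conditions, symptoms, and routine blood-test results into a single risk probability. The published algorithms were, of course, derived from millions of real primary-care records with far more sophisticated statistical modelling.

```python
# Synthetic, illustrative sketch of a condition-plus-biomarker risk model;
# feature names, data, and model choice are hypothetical and are not the
# published algorithms, which were fitted to real primary-care records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000

X = np.column_stack([
    rng.integers(30, 85, n),                           # age
    rng.integers(0, 2, (n, 3)),                        # condition/symptom flags
    rng.normal([135, 250, 30], [15, 60, 10], (n, 3)),  # routine blood test values
])
y = rng.integers(0, 2, n)  # 1 = cancer diagnosed within follow-up (synthetic)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Estimated probability of an as-yet-undiagnosed cancer for a new patient.
new_patient = np.array([[68, 1, 0, 1, 110.0, 420.0, 55.0]])
print(f"Estimated risk: {model.predict_proba(new_patient)[0, 1]:.1%}")
```
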

Compared to the QCancer algorithms, the new models identified four additional medical conditions associated with an increased risk of 15 different cancers, including liver, kidney, and pancreatic cancers. The researchers also found two additional family-history associations, with lung cancer and with blood cancer, as well as seven new symptoms of concern (itching, bruising, back pain, hoarseness, flatulence, abdominal mass, and dark urine) associated with multiple cancer types.

The study’s lead author, Professor Julia Hippisley-Cox, said: “These algorithms are designed to be embedded into clinical systems and used during routine GP consultations. They offer a substantial improvement over current models, with higher accuracy in identifying cancers – especially at early, more treatable stages.”

Dr Carol Coupland, senior researcher and co-author, added: “These new algorithms for assessing individuals’ risks of having currently undiagnosed cancer show improved capability of identifying people most at risk of having one of 15 types of cancer based on their symptoms, blood test results, lifestyle factors, and other information recorded in their medical records.”

Biodiversity

Unlocking AI’s Potential: A New Era for Biodiversity Conservation

A new study suggests the use of artificial intelligence (AI) to rapidly analyze vast amounts of biodiversity data could revolutionize conservation efforts by enabling scientists and policymakers to make better-informed decisions.

Scientists from McGill University have revealed the untapped potential of artificial intelligence (AI) to revolutionize biodiversity conservation. A recent study published in Nature Reviews Biodiversity highlights the seven global biodiversity knowledge shortfalls that hinder our understanding of species distributions and interactions.

“The problem is that we still don’t have basic information about nature, which prevents us from knowing how to protect it,” said Laura Pollock, lead author on the study and assistant professor in McGill’s Department of Biology. “This research aims to bridge this knowledge gap by leveraging AI’s capabilities to analyze vast amounts of biodiversity data.”

The study, a collaboration between computer scientists, ecologists, and an international team of researchers, examines how AI can address the seven global biodiversity knowledge shortfalls. The findings show that AI is currently only being used in two of these areas, leaving significant opportunities untapped.

One example of AI’s potential is BioCLIP, a machine learning model that detects species traits from images, aiding species identification. Automated insect-monitoring platforms such as Antenna have likewise helped identify hundreds of new insect species.

However, the researchers emphasize that AI can do more. Machine learning models trained on satellite imagery and environmental DNA can map species distributions more accurately than ever before. AI could also help infer species interactions, such as food webs and predator-prey relationships, which remain largely unstudied due to the difficulty of direct observation.
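
For readers curious what such a model looks like in practice, here is an illustrative sketch, not the study’s method, of a basic species distribution model: a classifier trained on hypothetical environmental covariates of the kind that might be extracted from satellite or climate layers, used to predict where a species is likely to occur.

```python
# Illustrative species distribution model on synthetic data; covariates and
# labels are hypothetical and this is not the approach used in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Per-site covariates: mean temperature (C), annual rainfall (mm),
# elevation (m), and a satellite-derived vegetation index.
sites = rng.normal([15.0, 800.0, 400.0, 0.5], [5.0, 300.0, 250.0, 0.2], (500, 4))
observed = (sites[:, 0] > 14) & (sites[:, 3] > 0.45)  # toy presence/absence labels

sdm = RandomForestClassifier(n_estimators=200, random_state=0)
sdm.fit(sites, observed)

# Predict occurrence probability for unsurveyed locations on the same covariates.
unsurveyed = rng.normal([15.0, 800.0, 400.0, 0.5], [5.0, 300.0, 250.0, 0.2], (10, 4))
print(sdm.predict_proba(unsurveyed)[:, 1])  # probability the species is present
```
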

“This research looks at a much broader set of biodiversity questions than previous reviews,” said David Rolnick, co-author of the study, Canada CIFAR AI Chair and assistant professor of computer science at McGill. “It was also surprising to see just how narrowly AI is being applied when it has so much potential to address many of these shortfalls.”

Looking ahead, the research team emphasizes the importance of expanding data-sharing initiatives to improve AI model training, refining algorithms to reduce biases, and ensuring that AI is used ethically in conservation. With global biodiversity targets looming, they say AI, if harnessed effectively, could be one of the most powerful tools available to address the biodiversity crisis.

“AI is changing the way the world works, for better or worse,” said Pollock. “This is one of the ways it could help us.” Protecting biodiversity is crucial because ecosystems sustain human life, and AI can play a vital role in preserving our planet’s precious natural resources.

Chemistry

Unlocking Real-World Physics with MagicTime: A Revolutionary Text-to-Video AI Model

Computer scientists have developed a new AI text-to-video model that learns real-world physics knowledge from time-lapse videos.

Imagine being able to watch a video of a flower blooming or a tree growing before your eyes. Rapid advances in text-to-video artificial intelligence (AI) models have brought that within reach, yet these models have struggled to produce metamorphic videos: clips that simulate real-world processes of growth and change.

However, researchers from the University of Rochester, Peking University, the University of California, Santa Cruz, and the National University of Singapore have now achieved a breakthrough: a new text-to-video AI model called MagicTime that learns and mimics real-world physics from time-lapse videos. The model is described in a paper published in IEEE Transactions on Pattern Analysis and Machine Intelligence.

MagicTime marks a step toward AI that can better simulate the physical, chemical, biological, or social properties of the world around us. “Artificial intelligence has been developed to try to understand the real world and to simulate the activities and events that take place,” said Jinfa Huang, a PhD student supervised by Professor Jiebo Luo in Rochester’s Department of Computer Science. MagicTime is an important step in that direction.

The researchers trained MagicTime on a high-quality dataset of more than 2,000 time-lapse videos with detailed captions, enabling the model to capture genuine change over time rather than the limited motion and poor variation typical of earlier text-to-video systems. Currently, the open-source U-Net version of MagicTime generates two-second, 512-by-512-pixel clips at 8 frames per second, while an accompanying diffusion-transformer architecture extends this to ten-second clips.
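
For context on how such models are typically driven from code, here is a minimal sketch using the Hugging Face diffusers library with a generic public text-to-video checkpoint as a placeholder; MagicTime ships its own codebase, and its actual interface may differ.

```python
# Minimal sketch of driving a text-to-video diffusion pipeline with the
# Hugging Face diffusers library. The checkpoint below is a generic public
# model used as a placeholder; MagicTime has its own repository and its
# interface may differ from what is shown here.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
).to("cuda")

# 16 frames rendered at 8 fps gives a two-second clip, matching the clip
# length quoted for the open-source U-Net version of MagicTime.
result = pipe("time-lapse of a flower blooming", num_frames=16)
export_to_video(result.frames[0], "bloom.mp4", fps=8)
```
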

The possibilities with MagicTime are vast. The model can be used to simulate not only biological metamorphosis but also buildings undergoing construction or bread baking in the oven. While the videos generated are visually interesting and the demo can be fun to play with, the researchers view this as an important step towards more sophisticated models that could provide essential tools for scientists.

“Our hope is that someday, for example, biologists could use generative video to speed up preliminary exploration of ideas,” says Huang. “While physical experiments remain indispensable for final verification, accurate simulations can shorten iteration cycles and reduce the number of live trials needed.”

The future of MagicTime looks bright. As AI continues to evolve and improve, it is exciting to imagine the possibilities this text-to-video model will open up.
