
Education and Employment

The Power of Teamwork: How Group Work Environments Boost Student Motivation in Project-Based Learning

A researcher investigated how the group work environment affects motivation in English as a second language classes, finding that it plays an important role in motivating students.


Project-based learning (PBL) has become an essential technique in foreign language and general education classes to develop skills through various challenges. However, the impact of group work environments and team size on student motivation remains poorly understood. Research by Associate Professor Mitsuko Tanaka at Osaka Metropolitan University’s Graduate School of Sustainable System Sciences sheds light on this crucial aspect.

Tanaka conducted a study involving 154 university students who had taken an English as a second language class. The students were divided into 50 groups, ranging from three to five members, and tasked with topic-based projects and presentations. A questionnaire was distributed at the end of the semester to assess the group work environment, taking into account individual factors such as learner beliefs and competence.

The analysis revealed no significant effect of group size. Motivation did, however, vary with both the quality of the group work environment and individual factors; notably, when the group work environment was conducive, motivation tended to increase regardless of individual factors such as learner beliefs and competence.
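To make that kind of analysis concrete, the sketch below fits a simple regression of motivation on group size, environment quality, and learner beliefs. The data, variable names, and model form are hypothetical stand-ins for illustration; the paper's actual statistical method is not reproduced here.

```python
# Hypothetical sketch loosely mirroring the study design described above
# (154 students in groups of 3-5). Data and model form are invented for
# illustration; the paper's actual analysis may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 154
df = pd.DataFrame({
    "group_size": rng.choice([3, 4, 5], size=n),   # 3-5 members per group
    "environment": rng.uniform(1, 5, size=n),      # questionnaire rating
    "belief": rng.uniform(1, 5, size=n),           # learner beliefs rating
})
# Simulate motivation driven by environment quality, not group size
df["motivation"] = (2 + 0.8 * df["environment"] + 0.2 * df["belief"]
                    + rng.normal(0, 0.5, size=n))

model = smf.ols("motivation ~ group_size + environment + belief", data=df).fit()
print(model.summary())  # expect environment significant, group_size not
```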

Tanaka emphasized the importance of properly preparing the environment for PBL to succeed: “These findings can serve as an essential guideline for educational practitioners to recognize the significance of a well-structured group work environment in project-based learning.”

The study’s results were published in System. The research highlights the critical role that group work environments play in fostering student motivation and achieving the goals of project-based learning. For educators, recognizing this can lead to more effective teaching practices and better outcomes for students.

Computer Graphics

The Quiet Threat to Trust: How Overreliance on AI Emails Can Harm Workplace Relationships

AI is now a routine part of workplace communication, with most professionals using tools like ChatGPT and Gemini. A study of over 1,000 professionals shows that while AI makes managers’ messages more polished, heavy reliance can damage trust. Employees tend to accept low-level AI help, such as grammar fixes, but become skeptical when supervisors use AI extensively, especially for personal or motivational messages. This “perception gap” can lead employees to question a manager’s sincerity, integrity, and leadership ability.


The use of artificial intelligence (AI) in writing and editing emails has become a common practice among professionals, with over 75% of them utilizing tools like ChatGPT, Gemini, Copilot, or Claude in their daily work. While these generative AI tools can make writing easier, research reveals that relying on them too heavily can undermine trust between managers and employees.

A study conducted by researchers Anthony Coman and Peter Cardon surveyed 1,100 professionals about their perceptions of emails written with low, medium, and high levels of AI assistance. The results showed a “perception gap” in messages written by managers versus those written by employees. When evaluating their own use of AI, participants tended to rate it similarly across different levels of assistance. However, when rating others’ use, the magnitude of AI assistance became important.

The study found that low levels of AI help, such as grammar or editing, were generally acceptable. However, higher levels of assistance triggered negative perceptions, especially among employees who perceived their managers’ reliance on AI-generated content as laziness or a lack of caring. This perception gap had a substantial impact on trust: only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages.
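The shape of that gap can be illustrated with a small sketch; the numbers below are invented to echo the reported trend, not the study's data.

```python
# Hypothetical sketch of the "perception gap": the same messages rated by the
# writer (self) versus a recipient (other) at each level of AI assistance.
# Ratings are invented to echo the pattern reported above.
import pandas as pd

ratings = pd.DataFrame({
    "assistance": ["low", "medium", "high"] * 2,
    "perspective": ["self"] * 3 + ["other"] * 3,
    "perceived_sincerity": [0.85, 0.82, 0.80,   # self-ratings stay flat
                            0.83, 0.65, 0.46],  # others' ratings drop sharply
})

gap = (ratings.pivot(index="assistance", columns="perspective",
                     values="perceived_sincerity")
              .assign(gap=lambda t: t["self"] - t["other"]))
print(gap)  # the gap widens as AI assistance increases
```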

The findings suggest that managers should carefully consider message type, level of AI assistance, and relational context before using AI in their writing. While AI may be suitable for informational or routine communications, relationship-oriented messages requiring empathy, praise, congratulations, motivation, or personal feedback are better handled with minimal technological intervention.
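As a rule-of-thumb encoding of that guidance, a manager's decision could be sketched as below; the message categories and the mapping are illustrative assumptions, not definitions from the study.

```python
# Illustrative rule of thumb based on the guidance above: reserve heavy AI
# assistance for routine, informational messages. Categories and mapping are
# assumptions for demonstration, not taken from the study.
RELATIONAL = {"praise", "congratulations", "motivation", "personal_feedback"}

def recommended_ai_level(message_type: str) -> str:
    """Suggest a ceiling on AI assistance for a given message type."""
    if message_type in RELATIONAL:
        return "low"             # grammar/editing help only
    return "medium_or_high"      # routine or informational content

print(recommended_ai_level("praise"))         # -> low
print(recommended_ai_level("status_update"))  # -> medium_or_high
```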

In essence, the quiet threat to trust posed by overreliance on AI-written emails is a reminder that while technology can enhance productivity and efficiency, it cannot replace the human touch and emotional intelligence in workplace relationships.


Cancer

Safer Non-Stick Coatings: Scientists Develop Alternative to Teflon

Scientists at the University of Toronto have developed a new non-stick material that rivals the performance of traditional PFAS-based coatings while using only minimal amounts of these controversial “forever chemicals.” Through an inventive process called “nanoscale fletching,” they modified silicone-based polymers to repel both water and oil effectively. This breakthrough could pave the way for safer cookware, fabrics, and other products without the environmental and health risks linked to long-chain PFAS.


The scientific community has been working towards developing safer alternatives to per- and polyfluoroalkyl substances (PFAS), a family of chemicals commonly used in non-stick coatings. Researchers at the University of Toronto Engineering have made significant progress in this area by creating a new material that repels both water and grease about as well as standard PFAS-based coatings, but with much lower amounts of these chemicals.

Professor Kevin Golovin and his team have been working on developing alternative materials to replace Teflon, which has been used for decades due to its non-stick properties. However, the chemical inertness that makes Teflon so effective also causes it to persist in the environment and accumulate in biological tissues, leading to health concerns.

The researchers’ solution is based on a material called polydimethylsiloxane (PDMS), often sold as silicone. They developed a new chemistry technique called nanoscale fletching, which bonds short chains of PDMS to a base material, resembling bristles on a brush. To improve the oil-repelling ability, they capped each bristle with the shortest possible PFAS unit: a single carbon atom with three fluorines attached.

When coated on a piece of fabric and tested against various oils, the new coating achieved a grade of 6 on a standard oil-repellency scale, placing it on par with many standard PFAS-based coatings. Matching, rather than beating, existing coatings may seem like a modest result, but achieving it with far less PFAS is a crucial step towards safer alternatives to Teflon and other PFAS-based materials.

The team is now working on further improving their material, aiming to create a substance that outperforms Teflon without using any PFAS at all. This would be a significant breakthrough in the field, paving the way for the development of even safer non-stick coatings for consumer products.

In conclusion, this work represents meaningful progress towards a safer alternative to Teflon and other PFAS-based materials. The new coating has shown promising results, and further research is needed to improve its performance and scalability. Moving forward, it’s essential to prioritize safe, sustainable technologies that minimize harm to both people and the environment.


Artificial Intelligence

Google’s Deepfake Hunter: Exposing Manipulated Videos with a Universal Detector

AI-generated videos are becoming dangerously convincing, and UC Riverside researchers have teamed up with Google to fight back. Their new system, UNITE, can detect deepfakes even when faces aren’t visible, going beyond traditional methods by scanning backgrounds, motion, and subtle cues. As fake content becomes easier to generate and harder to detect, this universal tool might become essential for newsrooms and social media platforms trying to safeguard the truth.


In an era where manipulated videos can spread disinformation, bully people, and incite harm, researchers at the University of California, Riverside (UCR), have created a powerful new system to expose these fakes. Amit Roy-Chowdhury, a professor of electrical and computer engineering, and doctoral candidate Rohit Kundu, teamed up with Google scientists to develop an artificial intelligence model that detects video tampering – even when manipulations go far beyond face swaps and altered speech.

Their new system, called the Universal Network for Identifying Tampered and synthEtic videos (UNITE), detects forgeries by examining not just faces but full video frames, including backgrounds and motion patterns. This analysis makes it one of the first tools capable of identifying synthetic or doctored videos that do not rely on facial content.

“Deepfakes have evolved,” Kundu said. “They’re not just about face swaps anymore. People are now creating entirely fake videos – from faces to backgrounds – using powerful generative models. Our system is built to catch all of that.”

UNITE’s development comes as text-to-video and image-to-video generation tools have become widely available online. These AI platforms enable virtually anyone to fabricate highly convincing videos, posing serious risks to individuals, institutions, and democracy itself.

“It’s scary how accessible these tools have become,” Kundu said. “Anyone with moderate skills can bypass safety filters and generate realistic videos of public figures saying things they never said.”

Kundu explained that earlier deepfake detectors focused almost entirely on face cues. If there’s no face in the frame, many detectors simply don’t work. But disinformation can come in many forms. Altering a scene’s background can distort the truth just as easily.

To address this, UNITE uses a transformer-based deep learning model to analyze video clips. It detects subtle spatial and temporal inconsistencies – cues often missed by previous systems. The model draws on a foundational AI framework known as SigLIP, which extracts features not bound to a specific person or object. A novel training method, dubbed “attention-diversity loss,” prompts the system to monitor multiple visual regions in each frame, preventing it from focusing solely on faces.
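The article does not give the exact formulation, but an attention-diversity penalty is commonly implemented by discouraging overlap between attention heads' maps. The following is a minimal PyTorch sketch under that assumption, not UNITE's actual loss.

```python
# Minimal sketch of an "attention-diversity" style penalty: encourage different
# attention heads to focus on different spatial regions of a frame, so the
# detector cannot collapse onto faces alone. This is a generic formulation
# assumed for illustration; UNITE's actual loss may be defined differently.
import torch
import torch.nn.functional as F

def attention_diversity_loss(attn: torch.Tensor) -> torch.Tensor:
    """attn: (batch, heads, tokens) attention weights over frame patches."""
    a = F.normalize(attn, dim=-1)                  # unit-normalize each head's map
    sim = torch.einsum("bht,bgt->bhg", a, a)       # pairwise head similarity
    heads = sim.size(1)
    off_diag = sim - torch.eye(heads, device=sim.device)  # drop self-similarity
    return off_diag.clamp(min=0).mean()            # penalize overlapping heads

# Usage sketch: add this term to the main classification objective
attn = torch.softmax(torch.randn(2, 8, 196), dim=-1)  # fake attention maps
print(attention_diversity_loss(attn).item())
```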

The result is a universal detector capable of flagging a range of forgeries – from simple facial swaps to complex, fully synthetic videos generated without any real footage. “It’s one model that handles all these scenarios,” Kundu said. “That’s what makes it universal.”

The researchers presented their findings at the 2025 Conference on Computer Vision and Pattern Recognition (CVPR) in Nashville, Tennessee. Their paper, led by Kundu, outlines UNITE’s architecture and training methodology.

While still in development, UNITE could soon play a vital role in defending against video disinformation. Potential users include social media platforms, fact-checkers, and newsrooms working to prevent manipulated videos from going viral.

“People deserve to know whether what they’re seeing is real,” Kundu said. “And as AI gets better at faking reality, we have to get better at revealing the truth.”
