

The Sticky Situation: Understanding the Impact of Slip on Baseball Performance

In 2021, Major League Baseball banned the use of sticky substances on the ball, and batting averages have risen since. A group of researchers set out to reveal the science behind this.


The world of professional baseball has long been aware of the importance of grip in pitching. However, a recent study by researchers from Japan has shed new light on the impact of slip between fingertips and the ball on pitching performance. Prior to June 3, 2021, Major League Baseball (MLB) pitchers had taken advantage of unapproved substances, like pine resin, to create a sticky situation that helped them maintain a precise grip. But what happens when the stickiness is removed?

A team of researchers from Tohoku University’s Graduate School of Engineering set out to understand this phenomenon. Using high-speed cameras, they captured six experienced pitchers throwing fastballs at approximately 130 kilometers per hour and analyzed how different baseball treatments affected the finger-ball slip distance. The slip distance is how far the fingers slide across the surface of the ball as it is gripped and released during the throw.

The researchers found that the stickier the surface of the ball, the less the fingers slipped. This resulted in faster pitches with more revolutions per minute (RPM) and more directional control. In fact, when coated with rosin powder or pine resin, the slip distance was reduced by more than half to approximately 8 millimeters on average.
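To make the quantities concrete, here is a minimal sketch of the kind of calculation such camera data supports. This is not the team's actual analysis pipeline; the frame rate and the per-frame tracking inputs are assumptions for illustration.

```python
FRAME_RATE_HZ = 1000.0  # hypothetical high-speed camera frame rate


def spin_rate_rpm(rotation_deg_per_frame: float,
                  frame_rate_hz: float = FRAME_RATE_HZ) -> float:
    """Convert the ball's per-frame rotation (in degrees) to revolutions
    per minute, given the camera frame rate."""
    revs_per_second = (rotation_deg_per_frame / 360.0) * frame_rate_hz
    return revs_per_second * 60.0


def slip_distance_mm(finger_steps_mm: list[float],
                     ball_surface_steps_mm: list[float]) -> float:
    """Illustrative slip distance: sum, over tracked frames, of how far the
    fingertip moves relative to the point on the ball surface it touches.
    Both inputs are per-frame displacements in millimeters."""
    return sum(abs(f - b)
               for f, b in zip(finger_steps_mm, ball_surface_steps_mm))
```

For example, a ball rotating 12 degrees between consecutive 1000 Hz frames corresponds to 2,000 RPM; a stickier coating would show per-frame fingertip displacements closely matching the ball surface, driving the summed slip distance toward zero.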

However, the study also revealed an unexpected result: when pitching water-treated balls, the velocity of the pitches dropped significantly compared to other conditions. This is thought to be due to the pitcher’s perception of fingertip slippage and subsequent adjustments in their pitching action.

The researchers’ findings are expected to enhance our understanding of the ball release mechanism under varying friction conditions, contributing to improved pitching performance, injury prevention for pitchers, and the development of better equipment.

In the future, the team plans to investigate changes in pitching movement resulting from different conditions through analysis of whole-body movements and muscle activity. They also aim to identify pitching techniques that maintain performance with slippery balls while reducing the risk of injury.

This study has significant implications for professional baseball and highlights the importance of understanding the physics behind the game. By optimizing grip and minimizing slip, pitchers can potentially improve their performance and reduce the risk of injury.


The Quiet Threat to Trust: How Overreliance on AI Emails Can Harm Workplace Relationships

AI is now a routine part of workplace communication, with most professionals using tools like ChatGPT and Gemini. A study of over 1,000 professionals shows that while AI makes managers’ messages more polished, heavy reliance can damage trust. Employees tend to accept low-level AI help, such as grammar fixes, but become skeptical when supervisors use AI extensively, especially for personal or motivational messages. This “perception gap” can lead employees to question a manager’s sincerity, integrity, and leadership ability.


The use of artificial intelligence (AI) in writing and editing emails has become a common practice among professionals, with over 75% of them utilizing tools like ChatGPT, Gemini, Copilot, or Claude in their daily work. While these generative AI tools can make writing easier, research reveals that relying on them too heavily can undermine trust between managers and employees.

A study conducted by researchers Anthony Coman and Peter Cardon surveyed 1,100 professionals about their perceptions of emails written with low, medium, and high levels of AI assistance. The results showed a “perception gap” in messages written by managers versus those written by employees. When evaluating their own use of AI, participants tended to rate it similarly across different levels of assistance. However, when rating others’ use, the magnitude of AI assistance became important.

The study found that low levels of AI help, such as grammar or editing, were generally acceptable. However, higher levels of assistance triggered negative perceptions, especially among employees who perceived their managers’ reliance on AI-generated content as laziness or a lack of caring. This perception gap had a substantial impact on trust: only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages.

The findings suggest that managers should carefully consider message type, level of AI assistance, and relational context before using AI in their writing. While AI may be suitable for informational or routine communications, relationship-oriented messages requiring empathy, praise, congratulations, motivation, or personal feedback are better handled with minimal technological intervention.

In essence, the quiet threat to trust posed by overreliance on AI emails is a reminder that while technology can enhance productivity and efficiency, it cannot replace human touch and emotional intelligence in workplace relationships.



Safer Non-Stick Coatings: Scientists Develop Alternative to Teflon

Scientists at the University of Toronto have developed a new non-stick material that rivals the performance of traditional PFAS-based coatings while using only minimal amounts of these controversial “forever chemicals.” Through an inventive process called “nanoscale fletching,” they modified silicone-based polymers to repel both water and oil effectively. This breakthrough could pave the way for safer cookware, fabrics, and other products without the environmental and health risks linked to long-chain PFAS.


The scientific community has been working towards developing safer alternatives to per- and polyfluoroalkyl substances (PFAS), a family of chemicals commonly used in non-stick coatings. Researchers at the University of Toronto Engineering have made significant progress in this area by creating a new material that repels both water and grease about as well as standard PFAS-based coatings, but with much lower amounts of these chemicals.

Professor Kevin Golovin and his team have been working on developing alternative materials to replace Teflon, which has been used for decades due to its non-stick properties. However, the chemical inertness that makes Teflon so effective also causes it to persist in the environment and accumulate in biological tissues, leading to health concerns.

The researchers’ solution is a material called polydimethylsiloxane (PDMS), often sold as silicone. They have developed a new chemistry technique called nanoscale fletching, which bonds short chains of PDMS to a base material, resembling bristles on a brush. To improve the oil-repelling ability, they added the shortest possible PFAS molecule, consisting of a single carbon with three fluorines on it.

When coated on a piece of fabric and tested with various oils, the new coating achieved a grade of 6, placing it on par with many standard PFAS-based coatings. Matching that performance while using only a trace of fluorinated chemistry is a crucial step towards safer alternatives to Teflon and other PFAS-based materials.

The team is now working on further improving their material, aiming to create a substance that outperforms Teflon without using any PFAS at all. This would be a significant breakthrough in the field, paving the way for the development of even safer non-stick coatings for consumer products.

In conclusion, scientists have made significant progress in developing a safer alternative to Teflon and other PFAS-based materials. The new material has shown promising results, and further research is needed to improve its performance and scalability. As we move forward, it’s essential to prioritize the development of safe and sustainable technologies that minimize harm to both humans and the environment.



Google’s Deepfake Hunter: Exposing Manipulated Videos with a Universal Detector

AI-generated videos are becoming dangerously convincing and UC Riverside researchers have teamed up with Google to fight back. Their new system, UNITE, can detect deepfakes even when faces aren’t visible, going beyond traditional methods by scanning backgrounds, motion, and subtle cues. As fake content becomes easier to generate and harder to detect, this universal tool might become essential for newsrooms and social media platforms trying to safeguard the truth.


In an era where manipulated videos can spread disinformation, bully people, and incite harm, researchers at the University of California, Riverside (UCR), have created a powerful new system to expose these fakes. Amit Roy-Chowdhury, a professor of electrical and computer engineering, and doctoral candidate Rohit Kundu, teamed up with Google scientists to develop an artificial intelligence model that detects video tampering – even when manipulations go far beyond face swaps and altered speech.

Their new system, called the Universal Network for Identifying Tampered and synthEtic videos (UNITE), detects forgeries by examining not just faces but full video frames, including backgrounds and motion patterns. This analysis makes it one of the first tools capable of identifying synthetic or doctored videos that do not rely on facial content.

“Deepfakes have evolved,” Kundu said. “They’re not just about face swaps anymore. People are now creating entirely fake videos – from faces to backgrounds – using powerful generative models. Our system is built to catch all of that.”

UNITE’s development comes as text-to-video and image-to-video generation have become widely available online. These AI platforms enable virtually anyone to fabricate highly convincing videos, posing serious risks to individuals, institutions, and democracy itself.

“It’s scary how accessible these tools have become,” Kundu said. “Anyone with moderate skills can bypass safety filters and generate realistic videos of public figures saying things they never said.”

Kundu explained that earlier deepfake detectors focused almost entirely on face cues. If there’s no face in the frame, many detectors simply don’t work. But disinformation can come in many forms. Altering a scene’s background can distort the truth just as easily.

To address this, UNITE uses a transformer-based deep learning model to analyze video clips. It detects subtle spatial and temporal inconsistencies – cues often missed by previous systems. The model draws on a foundational AI framework known as SigLIP, which extracts features not bound to a specific person or object. A novel training method, dubbed “attention-diversity loss,” prompts the system to monitor multiple visual regions in each frame, preventing it from focusing solely on faces.
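The paper's exact "attention-diversity loss" formulation isn't reproduced here, but one plausible sketch of the idea is a penalty on attention heads that collapse onto the same region: measure how similar the heads' attention maps are to one another and minimize that similarity, so at least some heads are pushed toward backgrounds and other non-face regions. The function name and the (heads × regions) input layout below are assumptions for illustration.

```python
import numpy as np


def attention_diversity_loss(attn: np.ndarray) -> float:
    """Illustrative diversity penalty for attention maps.

    attn: array of shape (num_heads, num_regions) with non-negative
    attention weights, one row per head. Returns the mean pairwise
    cosine similarity between distinct heads; minimizing this value
    encourages heads to attend to different parts of the frame.
    """
    norms = np.linalg.norm(attn, axis=1, keepdims=True)
    unit = attn / np.clip(norms, 1e-8, None)   # unit-normalize each head
    sim = unit @ unit.T                        # (H, H) cosine similarities
    h = attn.shape[0]
    off_diag = sim.sum() - np.trace(sim)       # drop self-similarity terms
    return float(off_diag / (h * (h - 1)))
```

Two heads attending to disjoint regions score 0.0; identical heads score 1.0, the worst case this penalty discourages.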

The result is a universal detector capable of flagging a range of forgeries – from simple facial swaps to complex, fully synthetic videos generated without any real footage. “It’s one model that handles all these scenarios,” Kundu said. “That’s what makes it universal.”

The researchers presented their findings at the 2025 Conference on Computer Vision and Pattern Recognition (CVPR) in Nashville, Tenn. Their paper, led by Kundu, outlines UNITE’s architecture and training methodology.

While still in development, UNITE could soon play a vital role in defending against video disinformation. Potential users include social media platforms, fact-checkers, and newsrooms working to prevent manipulated videos from going viral.

“People deserve to know whether what they’re seeing is real,” Kundu said. “And as AI gets better at faking reality, we have to get better at revealing the truth.”
