Artificial Intelligence

Google’s Deepfake Hunter: Exposing Manipulated Videos with a Universal Detector

AI-generated videos are becoming dangerously convincing, and UC Riverside researchers have teamed up with Google to fight back. Their new system, UNITE, can detect deepfakes even when faces aren’t visible, going beyond traditional methods by scanning backgrounds, motion, and subtle cues. As fake content becomes easier to generate and harder to detect, this universal tool might become essential for newsrooms and social media platforms trying to safeguard the truth.

In an era where manipulated videos can spread disinformation, bully people, and incite harm, researchers at the University of California, Riverside (UCR), have created a powerful new system to expose these fakes. Amit Roy-Chowdhury, a professor of electrical and computer engineering, and doctoral candidate Rohit Kundu teamed up with Google scientists to develop an artificial intelligence model that detects video tampering – even when manipulations go far beyond face swaps and altered speech.

Their new system, called the Universal Network for Identifying Tampered and synthEtic videos (UNITE), detects forgeries by examining not just faces but full video frames, including backgrounds and motion patterns. This analysis makes it one of the first tools capable of identifying synthetic or doctored videos that do not rely on facial content.

“Deepfakes have evolved,” Kundu said. “They’re not just about face swaps anymore. People are now creating entirely fake videos – from faces to backgrounds – using powerful generative models. Our system is built to catch all of that.”

UNITE’s development comes as text-to-video and image-to-video generation have become widely available online. These AI platforms enable virtually anyone to fabricate highly convincing videos, posing serious risks to individuals, institutions, and democracy itself.

“It’s scary how accessible these tools have become,” Kundu said. “Anyone with moderate skills can bypass safety filters and generate realistic videos of public figures saying things they never said.”

Kundu explained that earlier deepfake detectors focused almost entirely on face cues. If there’s no face in the frame, many detectors simply don’t work. But disinformation can come in many forms. Altering a scene’s background can distort the truth just as easily.

To address this, UNITE uses a transformer-based deep learning model to analyze video clips. It detects subtle spatial and temporal inconsistencies – cues often missed by previous systems. The model draws on SigLIP, a foundation model that extracts visual features not tied to any specific person or object. A novel training method, dubbed “attention-diversity loss,” prompts the system to monitor multiple visual regions in each frame, preventing it from focusing solely on faces.
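
For readers curious about the mechanics, here is a minimal sketch of what such an attention-diversity penalty could look like in PyTorch. The function name and formulation are illustrative assumptions based on the description above, not UNITE’s published loss.

```python
import torch

def attention_diversity_loss(attn: torch.Tensor) -> torch.Tensor:
    """Hypothetical attention-diversity penalty (not UNITE's actual loss).

    attn: (batch, heads, tokens) attention mass each head assigns to the
    spatial tokens of a frame. Penalizing pairwise overlap between heads
    pushes them to watch different regions (faces, background, motion)
    rather than all collapsing onto the face.
    """
    attn = attn / attn.sum(dim=-1, keepdim=True)          # each head -> a distribution
    overlap = torch.einsum("bht,bgt->bhg", attn, attn)    # pairwise head similarity
    diag = torch.diag_embed(torch.diagonal(overlap, dim1=1, dim2=2))
    b, h, _ = overlap.shape
    return (overlap - diag).sum() / (b * h * (h - 1))     # mean off-diagonal overlap

# Usage: weight this penalty into the main real-vs-fake training objective.
attn_maps = torch.softmax(torch.randn(2, 8, 196), dim=-1)  # dummy maps, 14x14 grid
print(attention_diversity_loss(attn_maps))
```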

The result is a universal detector capable of flagging a range of forgeries – from simple facial swaps to complex, fully synthetic videos generated without any real footage. “It’s one model that handles all these scenarios,” Kundu said. “That’s what makes it universal.”

The researchers presented their findings at the 2025 Conference on Computer Vision and Pattern Recognition (CVPR) in Nashville, Tennessee, one of the field’s top venues. Their paper, led by Kundu, outlines UNITE’s architecture and training methodology.

While still in development, UNITE could soon play a vital role in defending against video disinformation. Potential users include social media platforms, fact-checkers, and newsrooms working to prevent manipulated videos from going viral.

“People deserve to know whether what they’re seeing is real,” Kundu said. “And as AI gets better at faking reality, we have to get better at revealing the truth.”

Artificial Intelligence

Revolutionizing Quantum Computing with an Ultra-Thin Chip

Researchers at Harvard have created a groundbreaking metasurface that can replace bulky and complex optical components used in quantum computing with a single, ultra-thin, nanostructured layer. This innovation could make quantum networks far more scalable, stable, and compact. By harnessing the power of graph theory, the team simplified the design of these quantum metasurfaces, enabling them to generate entangled photons and perform sophisticated quantum operations — all on a chip thinner than a human hair. It’s a radical leap forward for room-temperature quantum technology and photonics.

In the quest for practical quantum computers and networks, photons have emerged as promising carriers of information at room temperature. However, controlling and coherently manipulating these particles within optical devices has proven notoriously difficult due to their inherently noisy nature. To overcome this hurdle, researchers from Harvard’s John A. Paulson School of Engineering and Applied Sciences have developed an innovative solution – a metasurface-based quantum photonics processor.

The groundbreaking device comes from Federico Capasso’s research group, with graduate student Kerolos M.A. Yousef leading the work. By harnessing specially designed metasurfaces – flat devices etched with nanoscale light-manipulating patterns – the team has created an ultra-thin upgrade for quantum-optical chips and setups.

One of the primary advantages of this design is its ability to miniaturize an entire optical setup into a single metasurface. This results in a robust and scalable system that offers numerous benefits, including cost-effectiveness, simplicity of fabrication, and low optical loss. The work has significant implications for quantum sensing, enabling “lab-on-a-chip” capabilities for fundamental science.

To tackle the complex mathematical challenges associated with this design, the researchers drew upon graph theory – a branch of mathematics that uses points and lines to represent connections and relationships. This allowed them to visually determine how photons interfere with each other and predict their effects in experiments.
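
To give a flavor of the idea, here is a toy sketch assuming the convention common in graph-based pictures of quantum optics, where vertices stand for photon modes and edges for photon-pair sources; each perfect matching of the graph then corresponds to one term in the generated multiphoton state. The code is purely illustrative and is not the Harvard team’s actual tooling.

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """Enumerate perfect matchings of an undirected graph.

    In graph pictures of quantum optics, vertices represent photon
    modes and edges represent pair sources; every perfect matching
    contributes one term to the multiphoton output state. Purely
    illustrative relative to the paper's methods.
    """
    n = len(vertices)
    matchings = []
    for subset in combinations(edges, n // 2):
        covered = [v for edge in subset for v in edge]
        if len(set(covered)) == n:           # every vertex matched exactly once
            matchings.append(subset)
    return matchings

# A 4-mode "ring" graph has two perfect matchings, i.e. a two-term
# superposition of photon-pair configurations (an entangled-like state).
ring_vertices = [0, 1, 2, 3]
ring_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
for m in perfect_matchings(ring_vertices, ring_edges):
    print(m)    # ((0, 1), (2, 3)) and ((1, 2), (3, 0))
```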

The resulting paper was a collaboration with Marko Loncar’s lab, which provided expertise and equipment necessary for the project. Neal Sinclair, a research scientist on the team, expressed excitement about the approach, noting that it could help optical quantum computers and networks scale efficiently – scaling being their biggest challenge compared with platforms built on superconductors or atoms.

This groundbreaking research received support from federal sources, including the Air Force Office of Scientific Research (AFOSR), under award No. FA9550-21-1-0312. The work was performed at the Harvard University Center for Nanoscale Systems.

Artificial Intelligence

Revolutionizing Electronics: Tiny Metal Switches Magnetism without Magnets, Enabling Faster, More Energy-Efficient Technology

Researchers at the University of Minnesota Twin Cities have made a promising breakthrough in memory technology by using a nickel-tungsten alloy called Ni₄W. This material shows powerful magnetic control properties that can significantly reduce energy use in electronic devices. Unlike conventional materials, Ni₄W allows for “field-free” switching—meaning it can flip magnetic states without external magnets—paving the way for faster, more efficient computer memory and logic devices. It’s also cheap to produce, making it ideal for widespread use in gadgets from phones to data centers.

Researchers at the University of Minnesota Twin Cities have made a significant breakthrough in developing a material that could revolutionize the world of electronics. A study published in Advanced Materials, a peer-reviewed scientific journal, reveals a new understanding of Ni₄W, a combination of nickel and tungsten that produces powerful spin-orbit torque (SOT). This technology has the potential to make computer memory faster and more energy-efficient.

As technology continues to advance, the demand for emerging memory solutions is growing. Researchers are seeking alternatives and complements to existing memory technologies that can perform at high levels with low energy consumption. Ni₄W offers a promising solution, demonstrating a more efficient way to control magnetization in tiny electronic devices.

“Ni₄W reduces power usage for writing data, potentially cutting energy use in electronics significantly,” said Jian-Ping Wang, senior author on the paper and Distinguished McKnight Professor at the University of Minnesota Twin Cities. This technology could help reduce the electricity consumption of devices like smartphones and data centers, making future electronics both smarter and more sustainable.

The researchers found that Ni₄W can generate spin currents in multiple directions, enabling “field-free” switching of magnetic states without the need for external magnetic fields. Yifei Yang, a fifth-year Ph.D. student and co-first author on the paper, noted that the team observed this high, multi-directional SOT efficiency in Ni₄W both on its own and when layered with tungsten.
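
As a rough illustration of the mechanism, here is a toy macrospin simulation of damping-like spin-orbit-torque switching. Every parameter below is an arbitrary illustrative value, not a measured Ni₄W number, and the small out-of-plane tilt of the spin polarization stands in for the multi-directional spin currents that make field-free switching possible.

```python
import numpy as np

# Toy macrospin model of damping-like spin-orbit-torque (SOT) switching.
# All values are arbitrary illustrative numbers, not Ni4W measurements.
alpha = 0.05                          # Gilbert damping
k_anis = 1.0                          # perpendicular (z) anisotropy field
tau_dl = 0.8                          # damping-like SOT strength
sigma = np.array([0.0, 1.0, -0.2])    # spin polarization; the small z
sigma /= np.linalg.norm(sigma)        # component permits field-free switching

def dm_dt(m):
    h_eff = k_anis * m[2] * np.array([0.0, 0.0, 1.0])      # anisotropy field
    torque = -np.cross(m, h_eff)                           # precession
    torque += -alpha * np.cross(m, np.cross(m, h_eff))     # damping
    torque += tau_dl * np.cross(m, np.cross(sigma, m))     # damping-like SOT
    return torque

m = np.array([0.01, 0.0, 1.0]); m /= np.linalg.norm(m)     # bit starts "up"
dt = 0.01
for _ in range(20000):
    m += dt * dm_dt(m)                # crude Euler step
    m /= np.linalg.norm(m)            # keep |m| = 1
print("final m_z:", round(m[2], 3))   # negative -> flipped with no applied field
```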

Ni₄W is made from common metals and can be manufactured using standard industrial processes, making it an attractive option for industry partners. The researchers are excited about the potential of this technology to be implemented into everyday devices like smart watches, phones, and more.

In addition to Wang and Yang, the research team included Seungjun Lee, a postdoctoral fellow and co-first author on the paper, along with several other experts from various departments at the University of Minnesota. This work was supported by SMART (Spintronic Materials for Advanced Information Technologies) and the Global Research Collaboration Logic and Memory program.

The next step is to grow these materials into devices even smaller than those in the team’s previous work. With continued research, Ni₄W has the potential to revolutionize the world of electronics, enabling faster, more energy-efficient technology for years to come.

Artificial Intelligence

Scientists Uncover the Secret to AI’s Language Understanding: A Phase Transition in Neural Networks

Neural networks first treat sentences like puzzles solved by word order, but once they read enough, a tipping point sends them diving into word meaning instead—an abrupt “phase transition” reminiscent of water flashing into steam. By revealing this hidden switch, researchers open a window into how transformer models such as ChatGPT grow smarter and hint at new ways to make them leaner, safer, and more predictable.

The ability of artificial intelligence systems to engage in natural conversations is a remarkable feat. However, despite this progress, the internal processes that lead to such results remain largely unknown. A recent study published in the Journal of Statistical Mechanics: Theory and Experiment (JSTAT) has shed light on this mystery. The research reveals that when small amounts of data are used for training, neural networks initially rely on the position of words in a sentence. However, as the system is exposed to enough data, it transitions to a new strategy based on the meaning of the words.

This transition occurs abruptly, once a critical data threshold is crossed – much like a phase transition in physical systems. The findings offer valuable insights into understanding the workings of these models. Just as a child learning to read starts by understanding sentences based on the positions of words, a neural network begins its journey by relying on word positions. However, as it continues to learn and train, the network “keeps going to school” and develops a deeper understanding of word meanings.

This shift is a critical discovery in the field of artificial intelligence. The researchers used a simplified model of the self-attention mechanism – a core building block of transformer language models. These models are designed to process sequences of data, such as text, and form the backbone of many modern language systems.

The study’s lead author, Hugo Cui, explains that the network can use two strategies: one based on word positions and another on word meanings. Initially, the network relies on word positions, but once a certain threshold is crossed, it abruptly shifts to relying on meaning-based strategies. This transition is likened to a phase transition in physical systems, where the system undergoes a sudden, drastic change.
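
A toy sketch in Python can make the two strategies concrete. The mixing coefficient below is a hypothetical stand-in for what training actually learns; in the paper’s solvable model, the optimal strategy jumps abruptly from positional to semantic once the training set crosses a critical size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 16                             # sequence length, embedding width
word_emb = rng.normal(size=(n, d))       # "semantic" content of each token
pos_emb = rng.normal(size=(n, d))        # position of each token

def dot_product_attention(x):
    scores = x @ x.T / np.sqrt(d)                          # query-key dot products
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)               # row-wise softmax

# The same dot-product attention layer supports two strategies, depending
# on which part of its input the learned queries and keys pick up:
positional_attn = dot_product_attention(pos_emb)     # attend by where words sit
semantic_attn = dot_product_attention(word_emb)      # attend by what words mean

# 'lam' stands in for what training learns: below a critical amount of
# data the positional solution (lam ~ 0) is optimal; past the threshold
# the optimum jumps abruptly to the semantic one (lam ~ 1).
for lam in (0.0, 0.5, 1.0):
    attn = dot_product_attention(lam * word_emb + (1 - lam) * pos_emb)
    print(f"lam={lam}: first attention row -> {np.round(attn[0], 2)}")
```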

Understanding this phenomenon from a theoretical standpoint matters in practice: the researchers emphasize that their findings can inform efforts to make neural networks more efficient and safer to use. The study’s results are published in JSTAT as part of the Machine Learning 2025 special issue and included in the proceedings of the NeurIPS 2024 conference.

The research by Cui, Behrens, Krzakala, and Zdeborová, titled “A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention,” offers knowledge that can be used to improve the performance and safety of artificial intelligence systems, with implications for building more efficient and effective language models.
