
Computers & Math

The Flexible Value of Data Privacy: A New Perspective on Protecting Personal Information

A new game-based experiment sheds light on the tradeoffs people are willing to make about data privacy.


In our increasingly networked world, questions about data privacy have become a ubiquitous concern for companies, policymakers, and the public. A recent study by MIT researchers has shed new light on this issue, suggesting that people’s views about privacy are not fixed and can shift significantly based on different circumstances and uses of data.

“There is no absolute value in privacy,” says Fabio Duarte, principal research scientist at MIT’s Senseable City Lab. “Depending on the application, people might feel use of their data is more or less invasive.”

The study, which used a game called Data Slots to elicit public valuations of data privacy, found that values attributed to data are combinatorial, situational, transactional, and contextual. The researchers created a card game with poker-type chips that allowed players to hold hands of cards representing various types of data, such as personal profiles, health data, vehicle location information, and more.

Players then exchanged cards, generated ideas for data uses, assessed those ideas, and invested in some of the resulting concepts. The game was played in person in 18 countries and online by people from another 74 countries, with over 2,000 individual player-rounds included in the study.
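
As a rough illustration of how valuations from such player-rounds might be tabulated, here is a minimal sketch; the scenarios, data types, and chip counts below are invented for illustration and are not the study's data or analysis:

```python
from collections import defaultdict

# Invented player-round records: (scenario, data type, chips invested).
# The real Data Slots dataset and the study's analysis are not shown here.
player_rounds = [
    ("workplace_wellness", "health", 5),
    ("workplace_wellness", "environmental", 4),
    ("city_traffic", "mobility", 8),
    ("city_traffic", "health", 1),
]

# Group investments by (scenario, data type), then average: a simple way to
# see that the same data type can draw different valuations in different contexts.
investments = defaultdict(list)
for scenario, data_type, chips in player_rounds:
    investments[(scenario, data_type)].append(chips)

for (scenario, data_type), chips in sorted(investments.items()):
    print(f"{scenario:>20} / {data_type:<13} mean investment: {sum(chips) / len(chips):.1f}")
```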

The results showed that players placed the highest value on personal mobility data, followed by health data and utility usage. However, the value placed on privacy was highly contingent on the specific use case. For instance, players were less concerned about data privacy when environmental data was used in the workplace to improve wellness.

“We show that even in terms of health data in work spaces, if they are used in an aggregated way to improve the workspace, for some people it’s worth combining personal health data with environmental data,” says Simone Mora, a research scientist at Senseable City Lab.

Martina Mazzarello adds, “Now perhaps the company can make some interventions to improve overall health. It might be invasive, but you might get some benefits back.”

The researchers suggest that taking a more flexible and user-driven approach to understanding what people think about data privacy can help inform better data policy. Cities often face scenarios where they collect aggregate traffic data, for instance. Public input can help determine how anonymized such data should be.

Understanding public opinion along with the benefits of data use can produce viable policies for local officials to pursue. “The bottom line is that if cities disclose what they plan to do with data, and if they involve resident stakeholders to come up with their own ideas about what they could do, that would be beneficial to us,” says Duarte.

“And in those scenarios, people’s privacy concerns start to decrease a lot.”

Artificial Intelligence

Self-Powered Artificial Synapse Revolutionizes Machine Vision

Despite advances in machine vision, processing visual data requires substantial computing resources and energy, limiting deployment in edge devices. Now, researchers from Japan have developed a self-powered artificial synapse that distinguishes colors with high resolution across the visible spectrum, approaching human eye capabilities. The device, which integrates dye-sensitized solar cells, generates its own electricity and can perform complex logic operations without additional circuitry, paving the way for capable computer vision systems integrated into everyday devices.


The human visual system has long been a source of inspiration for computer vision researchers, who aim to develop machines that can see and understand the world around them with the same level of efficiency and accuracy as humans. While machine vision systems have made significant progress in recent years, they still face major challenges when it comes to processing vast amounts of visual data while consuming minimal power.

One approach to overcoming these hurdles is through neuromorphic computing, which mimics the structure and function of biological neural systems. However, two major challenges persist: achieving color recognition comparable to human vision, and eliminating the need for external power sources to minimize energy consumption.

A research team led by Associate Professor Takashi Ikuno from Tokyo University of Science has now addressed both issues. Their self-powered artificial synapse is capable of distinguishing colors with remarkable precision, making it particularly suitable for edge computing applications where energy efficiency is crucial.

The device integrates two different dye-sensitized solar cells that respond differently to different wavelengths of light, generating its own electricity through solar energy conversion. This self-powering capability makes it an attractive solution for industries such as autonomous vehicles, healthcare, and consumer electronics, where visual recognition capabilities are essential but power budgets are tight.
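
How two cells with different spectral sensitivities can jointly discriminate color is easy to illustrate with a toy model. The Gaussian response curves below are invented stand-ins, not the actual dyes' characteristics from the paper:

```python
import math

# Toy spectral sensitivities for two hypothetical dye-sensitized cells:
# cell A peaks in the blue, cell B in the red (Gaussian curves, arbitrary units).
def sensitivity(wavelength_nm: float, peak_nm: float, width_nm: float = 60.0) -> float:
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

for wavelength, name in [(450, "blue"), (550, "green"), (650, "red")]:
    i_a = sensitivity(wavelength, peak_nm=460)  # photocurrent of cell A
    i_b = sensitivity(wavelength, peak_nm=640)  # photocurrent of cell B
    # The difference of the two photocurrents encodes color: it swings from
    # positive (blue light) through roughly zero (green) to negative (red),
    # with no external power if the cells themselves generate the current.
    print(f"{name:>5} ({wavelength} nm): A-B signal = {i_a - i_b:+.2f}")
```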

The researchers demonstrated the potential of their device in a physical reservoir computing framework, recognizing different human movements recorded in red, green, and blue with an impressive 82% accuracy. This achievement has significant implications for various industries, including autonomous vehicles, which could utilize these devices to efficiently recognize traffic lights, road signs, and obstacles.
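
The team's exact pipeline is not detailed here, but physical reservoir computing generally treats the device's fixed nonlinear response as the "reservoir" and trains only a lightweight linear readout. A minimal sketch of that pattern, with random features standing in for the synapse's real outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 300 movement clips, each summarized by the reservoir
# (faked here as a fixed random nonlinear projection) into 64 features.
n_samples, n_inputs, n_features, n_classes = 300, 30, 64, 3
X = rng.normal(size=(n_samples, n_inputs))            # stand-in sensor inputs
W_res = rng.normal(size=(n_inputs, n_features))       # fixed, untrained "reservoir"
states = np.tanh(X @ W_res)                           # nonlinear reservoir states
labels = rng.integers(0, n_classes, size=n_samples)   # movement classes

# Train only the linear readout, via ridge regression on one-hot targets.
Y = np.eye(n_classes)[labels]
ridge = 1e-2
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_features), states.T @ Y)

preds = np.argmax(states @ W_out, axis=1)
print("training accuracy:", (preds == labels).mean())
```

Only `W_out` is learned; the physical device does the expensive nonlinear transformation for free, which is what makes the approach attractive for low-power edge hardware.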

In healthcare, self-powered artificial synapses could power wearable devices that monitor vital signs like blood oxygen levels with minimal battery drain. For consumer electronics, this technology could lead to smartphones and augmented/virtual reality headsets with dramatically improved battery life while maintaining sophisticated visual recognition capabilities.

The realization of low-power machine vision systems with color discrimination capabilities close to those of the human eye is within reach, thanks to this breakthrough research. The potential applications of self-powered artificial synapses are vast, and their impact will be felt across various industries in the years to come.


Bioethics

Unlocking Human-AI Relationships: A New Lens Through Attachment Theory

Human-AI interactions are often understood in terms of trust and companionship. However, the role of attachment and emotional experiences in such relationships is not entirely clear. In a new breakthrough, researchers from Waseda University have devised a novel self-report scale and highlighted the concepts of attachment anxiety and avoidance toward AI. Their work is expected to serve as a guideline for further exploring human-AI relationships and for incorporating ethical considerations into AI design.


As humans increasingly engage with artificial intelligence (AI), researchers have sought to understand the intricacies of human-AI relationships. While trust and companionship are well-studied aspects of these interactions, the role of attachment and emotional experiences remains unclear. A groundbreaking study by Waseda University researchers has shed new light on this topic, introducing a novel self-report scale to measure attachment-related tendencies toward AI.

In an effort to better grasp human-AI relationships, researchers Fan Yang and Atsushi Oshio of the Faculty of Letters, Arts and Sciences conducted two pilot studies and one formal study. Their findings, published in Current Psychology, reveal that people form emotional bonds with AI similar to those experienced in human interpersonal connections.

The researchers developed the Experiences in Human-AI Relationships Scale (EHARS), a self-report measure designed to assess attachment-related tendencies toward AI. The results showed that nearly 75% of participants turned to AI for advice, while about 39% perceived AI as a constant, dependable presence.

Interestingly, the study differentiated two dimensions of human attachment to AI: anxiety and avoidance. Individuals with high attachment anxiety toward AI need emotional reassurance and harbor a fear of receiving inadequate responses from AI. Conversely, those with high attachment avoidance toward AI are characterized by discomfort with closeness and a consequent preference for emotional distance from AI.
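
The published items and scoring rules are not reproduced here, but two-dimensional attachment measures like EHARS are typically scored by averaging Likert ratings within each subscale. A minimal, hypothetical sketch:

```python
# Hypothetical scoring for a two-dimensional attachment scale such as EHARS:
# average each respondent's Likert ratings (e.g., 1-7) within a subscale.
# The item indices and responses below are invented for illustration.
ANXIETY_ITEMS = [0, 2, 4]    # e.g., "I worry the AI won't respond when I need it"
AVOIDANCE_ITEMS = [1, 3, 5]  # e.g., "I prefer not to rely on AI emotionally"

def subscale_score(responses: list[int], items: list[int]) -> float:
    """Mean rating over the given subscale items."""
    return sum(responses[i] for i in items) / len(items)

responses = [6, 2, 5, 3, 7, 1]  # one respondent's ratings on six items
print("attachment anxiety:  ", subscale_score(responses, ANXIETY_ITEMS))
print("attachment avoidance:", subscale_score(responses, AVOIDANCE_ITEMS))
```

High scores on the first subscale would indicate anxious attachment to AI, while high scores on the second would indicate avoidant attachment; a respondent can of course score high or low on both.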

The implications of this research extend beyond the realm of human-AI relationships. The proposed EHARS can be used by developers or psychologists to assess how people relate to AI emotionally and adjust interaction strategies accordingly. This could lead to more empathetic responses in therapy apps, loneliness interventions, or caregiver robots.

Moreover, the findings suggest a need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.

As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. The research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI, promoting a better understanding of how humans connect with technology on a societal level. This, in turn, can guide policy and design practices that prioritize psychological well-being.


Artificial Intelligence

Harnessing the Power of AI: Why Leashes are Better than Guardrails for Regulation

Many policy discussions on AI safety regulation have focused on the need to establish regulatory ‘guardrails’ to protect the public from the risks of AI technology. Experts now argue that, instead of imposing guardrails, policymakers should demand ‘leashes.’


For years, policymakers have debated the best way to regulate Artificial Intelligence (AI) to prevent its potential risks. A new paper by experts Cary Coglianese and Colton R. Crum proposes a game-changing approach: rather than imposing strict “guardrails” to control AI development, they suggest using flexible “leashes.” This management-based regulation would allow firms to innovate while ensuring public safety.

The authors argue that guardrails are not effective for AI because of its rapidly evolving nature and diverse applications. Social media, chatbots, autonomous vehicles, precision medicine, and fintech investment advisors are just a few examples of how AI is transforming industries. While offering numerous benefits, such as improved cancer detection, AI also poses risks like autonomous vehicle collisions, social media-related suicides, and bias and discrimination in AI-generated content.

Coglianese and Crum provide three case studies illustrating the potential harm from unregulated AI:

1. Autonomous vehicle (AV) crashes
2. Social media-related suicides
3. Bias and discrimination through AI-generated content

In each scenario, firms using AI tools would be expected to put their technology on a leash by implementing internal systems to mitigate potential harms. This flexible approach allows for technological innovation while ensuring that companies are accountable for the consequences of their actions.

Management-based regulation offers several advantages over guardrails:

* It can flexibly respond to AI’s novel uses and problems
* It enables technological exploration, discovery, and change
* It provides a tethered structure that helps prevent AI from “running away”

By embracing this leash-like approach, policymakers can harness the power of AI while minimizing its risks.
