
Computers & Math

The Flexible Value of Data Privacy: A New Perspective on Protecting Personal Information

A new game-based experiment sheds light on the tradeoffs people are willing to make about data privacy.


In our increasingly networked world, questions about data privacy have become a ubiquitous concern for companies, policymakers, and the public. A recent study by MIT researchers has shed new light on this issue, suggesting that people’s views about privacy are not fixed and can shift significantly based on different circumstances and uses of data.

“There is no absolute value in privacy,” says Fabio Duarte, principal research scientist at MIT’s Senseable City Lab. “Depending on the application, people might feel use of their data is more or less invasive.”

The study, which used a game called Data Slots to elicit public valuations of data privacy, found that the values people attribute to data are combinatorial, situational, transactional, and contextual. The researchers created a card game, played with poker-style chips, in which players hold hands of cards representing various types of data, such as personal profiles, health data, and vehicle location information.

Players then exchanged cards, generated ideas for data uses, and assessed and invested in some of those concepts. The game was played in person in 18 countries and online by people from another 74 countries, with more than 2,000 individual player-rounds included in the study.
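
As a purely illustrative sketch (not the study's actual instrument or analysis), the game's bookkeeping can be thought of as tallying chip investments per data type across player-rounds; every data type and chip count below is invented.

```python
from collections import Counter

# Hypothetical bookkeeping for a Data Slots-style game: each player-round
# invests chips in concepts built from particular data types. All data
# types and chip counts here are invented for illustration.
player_rounds = [
    {"mobility": 5, "health": 3},       # one player's investments in one round
    {"mobility": 4, "utilities": 2},
    {"health": 2, "environment": 1},
]

totals = Counter()
for investments in player_rounds:
    totals.update(investments)

# Rank data types by total chips invested, a crude proxy for perceived value.
for data_type, chips in totals.most_common():
    print(f"{data_type}: {chips} chips")
```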

The results showed that players placed the highest value on personal mobility data, followed by health data and utility use. The value of privacy, however, was highly contingent on the specific use case: players were less concerned about data privacy, for instance, when environmental data were used in the workplace to improve wellness.

“We show that even in terms of health data in work spaces, if they are used in an aggregated way to improve the workspace, for some people it’s worth combining personal health data with environmental data,” says Simone Mora, a research scientist at Senseable City Lab.

Martina Mazzarello adds, “Now perhaps the company can make some interventions to improve overall health. It might be invasive, but you might get some benefits back.”

The researchers suggest that taking a more flexible and user-driven approach to understanding what people think about data privacy can help inform better data policy. Cities often face scenarios where they collect aggregate traffic data, for instance. Public input can help determine how anonymized such data should be.

Understanding public opinion along with the benefits of data use can produce viable policies for local officials to pursue. “The bottom line is that if cities disclose what they plan to do with data, and if they involve resident stakeholders to come up with their own ideas about what they could do, that would be beneficial to us,” says Duarte.

“And in those scenarios, people’s privacy concerns start to decrease a lot.”

Artificial Intelligence

Safeguarding Adolescents in a Digital Age: Experts Urge Developers to Protect Young Users from AI Risks

The effects of artificial intelligence on adolescents are nuanced and complex, according to a new report that calls on developers to prioritize features that protect young people from exploitation, manipulation and the erosion of real-world relationships.


The American Psychological Association (APA) has released a report calling for developers to prioritize features that protect adolescents from exploitation, manipulation, and erosion of real-world relationships in the age of artificial intelligence (AI). The report, “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory,” warns against repeating the mistakes made with social media and urges stakeholders to ensure youth safety is considered early in AI development.

The APA expert advisory panel notes that adolescence is a complex period of brain development, spanning ages 10-25. During this time, age is not a foolproof marker for maturity or psychological competence. The report emphasizes the need for special safeguards aimed at younger users.

“We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI,” said APA Chief of Psychology Mitch Prinstein, PhD. “AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents.”

The report makes several recommendations to make certain that adolescents can use AI safely:

1. Healthy boundaries with simulated human relationships: Ensure that adolescents understand the difference between interactions with humans and chatbots.
2. Age-appropriate defaults in privacy settings, interaction limits, and content: Implement transparency, human oversight, support, and rigorous testing to safeguard adolescents’ online experiences (a hypothetical sketch of such defaults follows this list).
3. Encourage uses of AI that promote healthy development: Assist students in brainstorming, creating, summarizing, and synthesizing information while acknowledging AI’s limitations.
4. Limit access to and engagement with harmful and inaccurate content: Build protections to prevent adolescents from exposure to damaging material.
5. Protect adolescents’ data privacy and likenesses: Limit the use of adolescents’ data for targeted advertising and sale to third parties.
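
To make the second recommendation more concrete, here is a minimal, hypothetical sketch of what age-appropriate defaults might look like as configuration; the field names and thresholds are assumptions for illustration, not anything specified in the APA report.

```python
from dataclasses import dataclass

@dataclass
class AdolescentDefaults:
    """Hypothetical age-appropriate defaults; all values are illustrative."""
    data_sharing_opt_in: bool = False    # privacy-protective by default
    targeted_ads_enabled: bool = False   # no ad targeting for minors
    daily_chat_minutes: int = 60         # interaction limit
    human_review_of_flags: bool = True   # human oversight of flagged content
    bot_identity_disclosed: bool = True  # chatbot states it is not a person

def defaults_for(age: int) -> AdolescentDefaults:
    """Stricter limits for younger users (the age threshold is assumed)."""
    if age < 13:
        return AdolescentDefaults(daily_chat_minutes=30)
    return AdolescentDefaults()

print(defaults_for(12))
```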

The report also calls for comprehensive AI literacy education, integrated into core curricula and supported by national and state guidelines.

Additional Resources:

* Report:
* Guidance for parents on AI and keeping teens safe: [APA.org](http://APA.org)
* Resources for teens on AI literacy: [APA.org](http://APA.org)


Artificial Intelligence

Self-Powered Artificial Synapse Revolutionizes Machine Vision

Despite advances in machine vision, processing visual data requires substantial computing resources and energy, limiting deployment in edge devices. Now, researchers from Japan have developed a self-powered artificial synapse that distinguishes colors with high resolution across the visible spectrum, approaching the capabilities of the human eye. The device, which integrates dye-sensitized solar cells, generates its own electricity and can perform complex logic operations without additional circuitry, paving the way for capable computer vision systems integrated into everyday devices.


The human visual system has long been a source of inspiration for computer vision researchers, who aim to develop machines that can see and understand the world around them with the same level of efficiency and accuracy as humans. While machine vision systems have made significant progress in recent years, they still face major challenges when it comes to processing vast amounts of visual data while consuming minimal power.

One approach to overcoming these hurdles is through neuromorphic computing, which mimics the structure and function of biological neural systems. However, two major challenges persist: achieving color recognition comparable to human vision, and eliminating the need for external power sources to minimize energy consumption.

A research team led by Associate Professor Takashi Ikuno from Tokyo University of Science has recently addressed these issues. Their self-powered artificial synapse distinguishes colors with remarkable precision, making it particularly suitable for edge-computing applications where energy efficiency is crucial.

The device integrates two dye-sensitized solar cells that respond differently to different wavelengths of light, generating its own electricity through solar energy conversion. This self-powering capability makes it attractive for autonomous vehicles, healthcare, and consumer electronics, where visual recognition is essential but power budgets are tight.

The researchers demonstrated the device's potential in a physical reservoir computing framework, recognizing human movements recorded in red, green, and blue with 82% accuracy. This result has implications for autonomous vehicles, which could use such devices to efficiently recognize traffic lights, road signs, and obstacles.
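
For context, physical reservoir computing pairs a fixed nonlinear system (here, the synapse's wavelength-dependent response) with a simple trained readout. The sketch below shows only the generic readout step, a ridge-regression classifier over recorded reservoir states, with synthetic random data standing in for the device's outputs; it is not the team's actual pipeline.

```python
import numpy as np

# Generic reservoir-computing readout: the physical device maps inputs to
# high-dimensional states, and only a linear readout is trained. Synthetic
# data stands in for the synapse's recorded responses (all shapes assumed).
rng = np.random.default_rng(0)
n_samples, n_states, n_classes = 300, 50, 3   # e.g., 3 movement classes

X = rng.normal(size=(n_samples, n_states))    # reservoir states per sample
y = rng.integers(0, n_classes, size=n_samples)
Y = np.eye(n_classes)[y]                      # one-hot targets

# Ridge-regression readout: W = (X^T X + lambda I)^-1 X^T Y
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_states), X.T @ Y)

pred = (X @ W).argmax(axis=1)
print(f"training accuracy: {(pred == y).mean():.2f}")
```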

In healthcare, self-powered artificial synapses could power wearable devices that monitor vital signs like blood oxygen levels with minimal battery drain. For consumer electronics, this technology could lead to smartphones and augmented/virtual reality headsets with dramatically improved battery life while maintaining sophisticated visual recognition capabilities.

The realization of low-power machine vision systems with color discrimination close to that of the human eye is now within reach. The potential applications of self-powered artificial synapses are broad, and their impact is likely to be felt across many industries in the years to come.


Bioethics

Unlocking Human-AI Relationships: A New Lens Through Attachment Theory

Human-AI interactions are often framed in terms of trust and companionship, but the role of attachment in such relationships is less well understood. Researchers from Waseda University have now devised a novel self-report scale and identified attachment anxiety and avoidance toward AI as distinct dimensions. Their work is expected to serve as a guide for further exploring human-AI relationships and for incorporating ethical considerations into AI design.


As humans increasingly engage with artificial intelligence (AI), researchers have sought to understand the intricacies of human-AI relationships. While trust and companionship are well-studied aspects of these interactions, the role of attachment and emotional experiences remains unclear. A groundbreaking study by Waseda University researchers has shed new light on this topic, introducing a novel self-report scale to measure attachment-related tendencies toward AI.

To better grasp human-AI relationships, researchers Fan Yang and Atsushi Oshio of the Faculty of Letters, Arts and Sciences conducted two pilot studies and one formal study. Their findings, published in Current Psychology, reveal that people form emotional bonds with AI similar to those experienced in human interpersonal connections.

The researchers developed the Experiences in Human-AI Relationships Scale (EHARS), a self-report measure designed to assess attachment-related tendencies toward AI. The results showed that nearly 75% of participants turned to AI for advice, while about 39% perceived AI as a constant, dependable presence.

Interestingly, the study differentiated two dimensions of human attachment to AI: anxiety and avoidance. Individuals with high attachment anxiety toward AI seek emotional reassurance and fear receiving inadequate responses from AI. Conversely, those with high attachment avoidance are characterized by discomfort with closeness and a preference for emotional distance from AI.
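
As a rough illustration of how a two-dimension self-report scale like EHARS is typically scored, each subscale averages the Likert ratings of its items. The item numbers and responses below are invented; the actual EHARS items and scoring rules are given in the published paper.

```python
# Illustrative scoring of a two-subscale Likert questionnaire, in the style
# of attachment measures. Item assignments and responses are invented; they
# are not the published EHARS instrument.
ANXIETY_ITEMS = [1, 3, 5]      # hypothetical item numbers
AVOIDANCE_ITEMS = [2, 4, 6]

def subscale_mean(responses: dict[int, int], items: list[int]) -> float:
    """Average the 1-7 Likert responses for one subscale's items."""
    return sum(responses[i] for i in items) / len(items)

responses = {1: 6, 2: 2, 3: 5, 4: 3, 5: 6, 6: 2}   # one participant (invented)
print("attachment anxiety:", subscale_mean(responses, ANXIETY_ITEMS))
print("attachment avoidance:", subscale_mean(responses, AVOIDANCE_ITEMS))
```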

The implications of this research extend beyond the realm of human-AI relationships. The proposed EHARS can be used by developers or psychologists to assess how people relate to AI emotionally and adjust interaction strategies accordingly. This could lead to more empathetic responses in therapy apps, loneliness interventions, or caregiver robots.

Moreover, the findings suggest a need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.

As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. The research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI, promoting a better understanding of how humans connect with technology on a societal level. This, in turn, can guide policy and design practices that prioritize psychological well-being.

