Computers & Math

Dropcountr: A Smart Water-Use App That Nudges Households Towards Conservation

A new study finds that a smartphone app that tracks household water use and alerts users to leaks or excessive consumption is a promising tool for helping California water agencies meet state-mandated conservation goals. Use of the app, called Dropcountr, reduced average household water use by 6%, with even greater savings among the highest water users.

Dropcountr, a smartphone app that tracks household water use and alerts users to leaks or excessive consumption, has been found to be an effective tool in helping California water agencies meet state-mandated conservation goals. Led by Mehdi Nemati, an assistant professor of public policy at the University of California, Riverside (UCR), the study found that use of the app reduced average household water use by 6%, with even greater savings among high-volume users.

The app works by interpreting data from smart water meters and providing real-time feedback to consumers. This type of digital feedback gives users a “nudge” – a timely prompt to take water-saving actions, such as taking shorter showers or fixing leaks. Utilities can also use the app to send customers tips for cutting use and notify them of rebate programs.

The research focused on the City of Folsom in Northern California, where Dropcountr was offered to residential customers beginning in late 2014. About 3,600 households volunteered for the program, which collected smart meter data from 2013 to 2019. The findings showed that participating households reduced their daily consumption by an average of 6.2% compared to a control group. The reduction was greater among high-volume users.
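
How does a study arrive at a figure like 6.2%? In broad terms, it compares how app users’ consumption changed against how a control group’s changed over the same period. The sketch below is a hypothetical, simplified difference-in-differences calculation on simulated numbers; it is not the study’s actual method, code, or data.

```python
import numpy as np

# Hypothetical illustration only -- not the study's code or data. It shows how a
# "reduction relative to a control group" can be estimated with a simple
# difference-in-differences comparison on simulated daily-use figures.

rng = np.random.default_rng(0)

# Simulated daily water use (gallons/day) before and after the app rollout.
treated_pre = rng.normal(340, 40, 1000)
treated_post = rng.normal(319, 40, 1000)   # assumed ~6% drop, mirroring the reported effect
control_pre = rng.normal(338, 40, 1000)
control_post = rng.normal(337, 40, 1000)

# Difference-in-differences: change among app users minus change among non-users.
# A negative value means a reduction attributable to the app in this toy example.
did = (treated_post.mean() - treated_pre.mean()) - (control_post.mean() - control_pre.mean())
print(f"Estimated effect: {did:.1f} gal/day ({100 * did / treated_pre.mean():.1f}%)")
```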

One major advantage of Dropcountr is its ability to detect leaks quickly and notify customers before damage or costly bills occur. The app also uses behavioral science concepts, especially the power of social norms, to encourage conservation. Users receive personalized water-use summaries that show how their consumption stacks up against more efficient nearby households.
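
As a rough illustration of how this kind of smart-meter feedback can work, the sketch below pairs a toy leak heuristic (flow that never stops over 24 hours) with a social-comparison message. Dropcountr’s actual algorithms and thresholds are not described in the article, so everything here is assumed.

```python
from statistics import median

# Hypothetical sketch of the two feedback ideas described above: a simple leak
# heuristic and a social-comparison "nudge". All names and thresholds are assumed.

def leak_suspected(hourly_gallons, min_flow=0.5):
    """Flag a possible leak when the last 24 hourly readings never drop near zero,
    i.e., water keeps flowing even overnight."""
    recent = hourly_gallons[-24:]
    return len(recent) == 24 and all(g >= min_flow for g in recent)

def efficiency_nudge(household_daily, efficient_neighbors_daily):
    """Compare a household's daily use against the median of efficient nearby homes."""
    benchmark = median(efficient_neighbors_daily)
    if household_daily > benchmark:
        pct_over = 100 * (household_daily - benchmark) / benchmark
        return f"You used {pct_over:.0f}% more water than efficient homes near you."
    return "Nice work: you used less water than efficient homes near you."

print(leak_suspected([0.7] * 24))                      # True: continuous flow all day
print(efficiency_nudge(320, [240, 260, 250, 275]))     # social-norm comparison message
```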

The study found that these behavioral changes lasted: reductions in water use were still evident six days after a leak alert was sent, and they persisted over far longer periods as well. “We looked at water use 50 months out and still found sustained reductions,” Nemati said. “People weren’t just reacting once and forgetting. They stayed engaged.”

With California preparing to enforce stricter drought and efficiency standards, Nemati said more utilities should consider deploying digital tools like Dropcountr. “We have the data,” he said. “Now we just need to use it in smarter ways. This study shows how a relatively inexpensive solution can help homeowners conserve and ease pressure on our water systems.”

Artificial Intelligence

Safeguarding Adolescents in a Digital Age: Experts Urge Developers to Protect Young Users from AI Risks

The effects of artificial intelligence on adolescents are nuanced and complex, according to a new report that calls on developers to prioritize features that protect young people from exploitation, manipulation and the erosion of real-world relationships.

The American Psychological Association (APA) has released a report calling for developers to prioritize features that protect adolescents from exploitation, manipulation, and erosion of real-world relationships in the age of artificial intelligence (AI). The report, “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory,” warns against repeating the mistakes made with social media and urges stakeholders to ensure youth safety is considered early in AI development.

The APA expert advisory panel notes that adolescence is a complex period of brain development spanning ages 10 to 25, during which chronological age is not a reliable marker of maturity or psychological competence. The report therefore emphasizes the need for special safeguards aimed at younger users.

“We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI,” said APA Chief of Psychology Mitch Prinstein, PhD. “AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents.”

The report makes several recommendations to make certain that adolescents can use AI safely:

1. Healthy boundaries with simulated human relationships: Ensure that adolescents understand the difference between interactions with humans and chatbots.
2. Age-appropriate defaults in privacy settings, interaction limits, and content: Implement transparency, human oversight, support, and rigorous testing to safeguard adolescents’ online experiences.
3. Encourage uses of AI that promote healthy development: Assist students in brainstorming, creating, summarizing, and synthesizing information while acknowledging AI’s limitations.
4. Limit access to and engagement with harmful and inaccurate content: Build protections to prevent adolescents from exposure to damaging material.
5. Protect adolescents’ data privacy and likenesses: Limit the use of adolescents’ data for targeted advertising and sale to third parties.

The report also calls for comprehensive AI literacy education, integrating it into core curricula and developing national and state guidelines for AI literacy.

Additional Resources:

* Report:
* Guidance for parents on AI and keeping teens safe: [APA.org](http://APA.org)
* Resources for teens on AI literacy: [APA.org](http://APA.org)

Artificial Intelligence

Self-Powered Artificial Synapse Revolutionizes Machine Vision

Despite advances in machine vision, processing visual data requires substantial computing resources and energy, limiting deployment in edge devices. Now, researchers from Japan have developed a self-powered artificial synapse that distinguishes colors with high resolution across the visible spectrum, approaching the capabilities of the human eye. The device, which integrates dye-sensitized solar cells, generates its own electricity and can perform complex logic operations without additional circuitry, paving the way for capable computer vision systems integrated into everyday devices.

The human visual system has long been a source of inspiration for computer vision researchers, who aim to develop machines that can see and understand the world around them with the same level of efficiency and accuracy as humans. While machine vision systems have made significant progress in recent years, they still face major challenges when it comes to processing vast amounts of visual data while consuming minimal power.

One approach to overcoming these hurdles is through neuromorphic computing, which mimics the structure and function of biological neural systems. However, two major challenges persist: achieving color recognition comparable to human vision, and eliminating the need for external power sources to minimize energy consumption.

A research team led by Associate Professor Takashi Ikuno from Tokyo University of Science has now addressed both issues. Their self-powered artificial synapse can distinguish colors with remarkable precision, making it particularly suitable for edge computing applications where energy efficiency is crucial.

The device integrates two dye-sensitized solar cells with different wavelength responses, generating its own electricity via solar energy conversion. This self-powering capability makes it an attractive option for autonomous vehicles, healthcare, and consumer electronics, where visual recognition is essential but power budgets are tight.

The researchers demonstrated the potential of their device in a physical reservoir computing framework, recognizing different human movements recorded in red, green, and blue with an impressive 82% accuracy. This achievement has significant implications for various industries, including autonomous vehicles, which could utilize these devices to efficiently recognize traffic lights, road signs, and obstacles.
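
In a reservoir computing framework, the nonlinear “reservoir” (here, the physical synapse responding to red, green, and blue input) maps each input sequence to a feature vector, and only a lightweight linear readout is trained on top. The sketch below shows that readout step in software, using random stand-in features; the dimensions, data, and classifier choice are assumptions for illustration, not the authors’ pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

# Conceptual sketch of the readout step in reservoir computing. In the paper the
# reservoir is the physical device; here random stand-in feature vectors play
# that role, and only a simple linear readout is trained, which is the point of
# the framework.

rng = np.random.default_rng(1)
n_samples, n_features, n_classes = 600, 64, 3          # e.g., three movement classes

X = rng.normal(size=(n_samples, n_features))           # stand-in reservoir states
w = rng.normal(size=(n_features, n_classes))
y = (X @ w + rng.normal(scale=2.0, size=(n_samples, n_classes))).argmax(axis=1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
readout = RidgeClassifier().fit(X_train, y_train)      # train only the linear readout
print(f"Readout accuracy: {readout.score(X_test, y_test):.2f}")
```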

In healthcare, self-powered artificial synapses could power wearable devices that monitor vital signs like blood oxygen levels with minimal battery drain. For consumer electronics, this technology could lead to smartphones and augmented/virtual reality headsets with dramatically improved battery life while maintaining sophisticated visual recognition capabilities.

The realization of low-power machine vision systems with color discrimination capabilities close to those of the human eye is within reach, thanks to this breakthrough research. The potential applications of self-powered artificial synapses are vast, and their impact will be felt across various industries in the years to come.

Bioethics

Unlocking Human-AI Relationships: A New Lens Through Attachment Theory

Human-AI interactions are well studied in terms of trust and companionship, but the role of attachment and emotional experience in such relationships is less clear. In a new study, researchers from Waseda University have devised a novel self-report scale and highlighted the concepts of attachment anxiety and avoidance toward AI. Their work is expected to serve as a guideline for further exploring human-AI relationships and for incorporating ethical considerations into AI design.

As humans increasingly engage with artificial intelligence (AI), researchers have sought to understand the intricacies of human-AI relationships. While trust and companionship are well-studied aspects of these interactions, the role of attachment and emotional experiences remains unclear. A groundbreaking study by Waseda University researchers has shed new light on this topic, introducing a novel self-report scale to measure attachment-related tendencies toward AI.

To better grasp human-AI relationships, researchers Fan Yang and Atsushi Oshio of the Faculty of Letters, Arts and Sciences conducted two pilot studies and one formal study. Their findings, published in Current Psychology, reveal that people form emotional bonds with AI similar to those experienced in human interpersonal connections.

The researchers developed the Experiences in Human-AI Relationships Scale (EHARS), a self-report measure designed to assess attachment-related tendencies toward AI. The results showed that nearly 75% of participants turned to AI for advice, while about 39% perceived AI as a constant, dependable presence.

Interestingly, the study differentiated two dimensions of human attachment to AI: anxiety and avoidance. Individuals with high attachment anxiety toward AI need emotional reassurance and harbor a fear of receiving inadequate responses from AI. Conversely, those with high attachment avoidance toward AI are characterized by discomfort with closeness and a consequent preference for emotional distance from AI.
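
For readers unfamiliar with two-dimension self-report scales, the sketch below shows how Likert-type responses might be averaged into separate anxiety and avoidance scores. The item keys and assignments are invented for illustration; the actual EHARS items and scoring rules are those published with the study.

```python
# Hypothetical scoring sketch for a two-dimension self-report scale like EHARS.
# The item keys and assignments below are invented for illustration only.

ANXIETY_ITEMS = ["q1", "q3", "q5"]      # assumed items tapping attachment anxiety
AVOIDANCE_ITEMS = ["q2", "q4", "q6"]    # assumed items tapping attachment avoidance

def score_scale(responses):
    """Average 1-7 Likert responses into anxiety and avoidance subscale scores."""
    anxiety = sum(responses[item] for item in ANXIETY_ITEMS) / len(ANXIETY_ITEMS)
    avoidance = sum(responses[item] for item in AVOIDANCE_ITEMS) / len(AVOIDANCE_ITEMS)
    return {"attachment_anxiety": anxiety, "attachment_avoidance": avoidance}

print(score_scale({"q1": 6, "q2": 2, "q3": 5, "q4": 3, "q5": 7, "q6": 2}))
# -> {'attachment_anxiety': 6.0, 'attachment_avoidance': 2.33...}
```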

The implications of this research extend beyond the realm of human-AI relationships. The proposed EHARS can be used by developers or psychologists to assess how people relate to AI emotionally and adjust interaction strategies accordingly. This could lead to more empathetic responses in therapy apps, loneliness interventions, or caregiver robots.

Moreover, the findings suggest a need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.

As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. The research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI, promoting a better understanding of how humans connect with technology on a societal level. This, in turn, can guide policy and design practices that prioritize psychological well-being.
