We’re experimenting with AI-generated content to help deliver information faster and more efficiently.
While we try to keep things accurate, this content is part of an ongoing experiment and may not always be reliable.
Please double-check important details — we’re not responsible for how the information is used.

Computers & Math

Hearing the Heartbeat of a City: How AI Can Capture the Emotional Pulse of Urban Life

Researchers took a fresh approach to urban research by using artificial intelligence to explore the emotional side of city life. Their goal was to better understand the link between a city’s physical features and how people feel in those environments.

Imagine being able to hear the heartbeat of a city – not just its physical structures and infrastructure, but the emotional pulse that connects its people. For Jayedi Aman, an assistant professor of architectural studies at the University of Missouri, this is exactly what AI can help us achieve.

In a recent study, Aman and Tim Matisziw, a professor of geography and engineering at Mizzou, used artificial intelligence to explore the emotional side of city life. By training an AI tool on public Instagram posts with location tags, they analyzed the emotional tone of the posts' images and text, identifying whether people were happy, frustrated or relaxed.

Using Google Street View and another AI tool, the researchers then linked these emotional responses to the physical features of the places where people posted from. This allowed them to create a digital ‘sentiment map’ that shows what people are feeling across a city – a powerful new tool for city leaders to understand the emotional well-being of their communities.
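The aggregation step behind such a sentiment map can be illustrated with a minimal sketch: classify each geotagged post's text, then average the scores within spatial grid cells. This is purely illustrative – the study used a trained AI model, not the hypothetical keyword lexicon, grid size, or coordinates shown here.

```python
from collections import defaultdict

# Hypothetical mini-lexicon; the actual study used a trained AI model.
LEXICON = {"love": 1, "great": 1, "relaxing": 1,
           "crowded": -1, "noisy": -1, "unsafe": -1}

def score_text(text):
    """Crude sentiment score: sum of lexicon weights for words in the post."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

def sentiment_map(posts, cell_size=0.01):
    """Average sentiment per lat/lon grid cell from geotagged posts."""
    cells = defaultdict(list)
    for lat, lon, text in posts:
        key = (round(lat / cell_size), round(lon / cell_size))
        cells[key].append(score_text(text))
    return {key: sum(v) / len(v) for key, v in cells.items()}

# Toy geotagged posts (coordinates are arbitrary examples).
posts = [
    (38.951, -92.328, "Love this great park"),
    (38.951, -92.328, "So relaxing here"),
    (38.940, -92.310, "Noisy and crowded intersection"),
]
print(sentiment_map(posts))
```

Each grid cell ends up with a single averaged score, which is the kind of spatial summary a city dashboard could render as a heat map.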

Imagine being able to identify areas where people feel safe or unsafe, plan services and emergency responses based on real-time data, or even check in on public well-being after disasters. This is exactly what Aman and Matisziw’s AI-powered method can provide – a more nuanced understanding of city life that goes beyond traditional surveys.

“We envision a future where data on how people feel becomes a core part of city dashboards,” Aman said. “This opens the door to designing cities that not only work well but also feel right to the people who live in them.”

The potential applications of this technology are vast, and it’s not hard to see why Aman and Matisziw are excited about its possibilities. By giving city leaders a deeper understanding of their communities’ emotional pulse, AI can help create more livable, lovable cities that prioritize people over physical infrastructure.

Artificial Intelligence

Safeguarding Adolescents in a Digital Age: Experts Urge Developers to Protect Young Users from AI Risks

The effects of artificial intelligence on adolescents are nuanced and complex, according to a new report that calls on developers to prioritize features that protect young people from exploitation, manipulation and the erosion of real-world relationships.

The American Psychological Association (APA) has released a report calling for developers to prioritize features that protect adolescents from exploitation, manipulation, and erosion of real-world relationships in the age of artificial intelligence (AI). The report, “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory,” warns against repeating the mistakes made with social media and urges stakeholders to ensure youth safety is considered early in AI development.

The APA expert advisory panel notes that adolescence is a complex period of brain development spanning ages 10 to 25, during which chronological age alone is not a reliable marker of maturity or psychological competence. The report emphasizes the need for special safeguards aimed at younger users.

“We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI,” said APA Chief of Psychology Mitch Prinstein, PhD. “AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents.”

The report makes several recommendations to make certain that adolescents can use AI safely:

1. Healthy boundaries with simulated human relationships: Ensure that adolescents understand the difference between interactions with humans and chatbots.
2. Age-appropriate defaults in privacy settings, interaction limits, and content: Implement transparency, human oversight, support, and rigorous testing to safeguard adolescents’ online experiences.
3. Encourage uses of AI that promote healthy development: Assist students in brainstorming, creating, summarizing, and synthesizing information while acknowledging AI’s limitations.
4. Limit access to and engagement with harmful and inaccurate content: Build protections to prevent adolescents from exposure to damaging material.
5. Protect adolescents’ data privacy and likenesses: Limit the use of adolescents’ data for targeted advertising and sale to third parties.
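Recommendation 2 – age-appropriate defaults – can be sketched as tiered configuration. This is only an illustration of the idea: the APA report does not prescribe specific settings, and the field names, age cutoffs, and limits below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SessionDefaults:
    private_profile: bool
    daily_interaction_limit_min: int  # 0 means no limit
    content_filter: str
    human_oversight_alerts: bool

def defaults_for_age(age: int) -> SessionDefaults:
    """Stricter defaults for younger users; the report cautions that age
    alone is not a foolproof proxy for maturity."""
    if age < 13:
        return SessionDefaults(True, 30, "strict", True)
    if age < 18:
        return SessionDefaults(True, 60, "moderate", True)
    return SessionDefaults(False, 0, "standard", False)

print(defaults_for_age(12))
```

The key design point matching the report is that protective settings are the *default* for younger users, rather than opt-in features buried in menus.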

The report also calls for comprehensive AI literacy education, integrating it into core curricula and developing national and state guidelines for literacy education.

Additional Resources:

* Report:
* Guidance for parents on AI and keeping teens safe: [APA.org](http://APA.org)
* Resources for teens on AI literacy: [APA.org](http://APA.org)

Artificial Intelligence

Self-Powered Artificial Synapse Revolutionizes Machine Vision

Despite advances in machine vision, processing visual data requires substantial computing resources and energy, limiting deployment in edge devices. Now, researchers from Japan have developed a self-powered artificial synapse that distinguishes colors with high resolution across the visible spectrum, approaching human eye capabilities. The device, which integrates dye-sensitized solar cells, generates its own electricity and can perform complex logic operations without additional circuitry, paving the way for more capable computer vision systems integrated into everyday devices.

The human visual system has long been a source of inspiration for computer vision researchers, who aim to develop machines that can see and understand the world around them with the same level of efficiency and accuracy as humans. While machine vision systems have made significant progress in recent years, they still face major challenges when it comes to processing vast amounts of visual data while consuming minimal power.

One approach to overcoming these hurdles is through neuromorphic computing, which mimics the structure and function of biological neural systems. However, two major challenges persist: achieving color recognition comparable to human vision, and eliminating the need for external power sources to minimize energy consumption.

A recent breakthrough by a research team led by Associate Professor Takashi Ikuno from Tokyo University of Science has addressed these issues with a groundbreaking solution. Their self-powered artificial synapse is capable of distinguishing colors with remarkable precision, making it particularly suitable for edge computing applications where energy efficiency is crucial.

The device integrates two different dye-sensitized solar cells that respond differently to various wavelengths of light, generating its own electricity via solar energy conversion. This self-powering capability makes it an attractive solution for industries such as autonomous vehicles, healthcare, and consumer electronics, where visual recognition capabilities are essential but power consumption is limited.

The researchers demonstrated the potential of their device in a physical reservoir computing framework, recognizing different human movements recorded in red, green, and blue with an impressive 82% accuracy. This achievement has significant implications for various industries, including autonomous vehicles, which could utilize these devices to efficiently recognize traffic lights, road signs, and obstacles.
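The reservoir computing idea mentioned above can be sketched in a few lines: a fixed nonlinear dynamical system (here imitated by a random recurrent map; in the study, the physical device itself) transforms an input time series into a state, and only a simple readout is trained. Everything below – reservoir size, weights, and the nearest-centroid readout – is a hypothetical toy, not the researchers' setup.

```python
import math
import random

random.seed(0)
N = 20  # toy reservoir size
# Fixed random input weights (3 channels: R, G, B) and recurrent weights.
W_in = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N)]
W = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]

def reservoir_state(sequence):
    """Drive the fixed reservoir with an RGB time series; return final state."""
    x = [0.0] * N
    for rgb in sequence:
        x = [math.tanh(sum(W_in[i][j] * rgb[j] for j in range(3))
                       + sum(W[i][k] * x[k] for k in range(N)))
             for i in range(N)]
    return x

def train_centroids(labelled):
    """Train only the readout: one mean reservoir state per class label."""
    cents = {}
    for label, seqs in labelled.items():
        states = [reservoir_state(s) for s in seqs]
        cents[label] = [sum(col) / len(states) for col in zip(*states)]
    return cents

def classify(cents, seq):
    """Assign the label whose centroid is nearest to the reservoir state."""
    x = reservoir_state(seq)
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

labelled = {"red": [[(1.0, 0.0, 0.0)] * 5], "blue": [[(0.0, 0.0, 1.0)] * 5]}
cents = train_centroids(labelled)
print(classify(cents, [(0.9, 0.1, 0.0)] * 5))
```

The appeal for edge devices is that the expensive nonlinear transformation is done "for free" by the physics of the reservoir, leaving only a lightweight readout to compute and train.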

In healthcare, self-powered artificial synapses could power wearable devices that monitor vital signs like blood oxygen levels with minimal battery drain. For consumer electronics, this technology could lead to smartphones and augmented/virtual reality headsets with dramatically improved battery life while maintaining sophisticated visual recognition capabilities.

The realization of low-power machine vision systems with color discrimination capabilities close to those of the human eye is within reach, thanks to this breakthrough research. The potential applications of self-powered artificial synapses are vast, and their impact will be felt across various industries in the years to come.

Bioethics

Unlocking Human-AI Relationships: A New Lens Through Attachment Theory

Human-AI interactions are well understood in terms of trust and companionship. However, the role of attachment and experiences in such relationships is not entirely clear. In a new breakthrough, researchers from Waseda University have devised a novel self-report scale and highlighted the concepts of attachment anxiety and avoidance toward AI. Their work is expected to serve as a guideline to further explore human-AI relationships and incorporate ethical considerations in AI design.

As humans increasingly engage with artificial intelligence (AI), researchers have sought to understand the intricacies of human-AI relationships. While trust and companionship are well-studied aspects of these interactions, the role of attachment and emotional experiences remains unclear. A groundbreaking study by Waseda University researchers has shed new light on this topic, introducing a novel self-report scale to measure attachment-related tendencies toward AI.

In an effort to better grasp human-AI relationships, researchers Fan Yang and Atsushi Oshio from the Faculty of Letters, Arts and Sciences, conducted two pilot studies and one formal study. Their findings, published in Current Psychology, reveal that people form emotional bonds with AI, similar to those experienced in human interpersonal connections.

The researchers developed the Experiences in Human-AI Relationships Scale (EHARS), a self-report measure designed to assess attachment-related tendencies toward AI. The results showed that nearly 75% of participants turned to AI for advice, while about 39% perceived AI as a constant, dependable presence.

Interestingly, the study differentiated two dimensions of human attachment to AI: anxiety and avoidance. Individuals with high attachment anxiety toward AI need emotional reassurance and harbor a fear of receiving inadequate responses from AI. Conversely, those with high attachment avoidance toward AI are characterized by discomfort with closeness and a consequent preference for emotional distance from AI.
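Scoring a two-dimension self-report scale of this kind typically means averaging Likert responses within each subscale. The sketch below is purely illustrative: the item assignments and responses are hypothetical placeholders, not the published EHARS instrument.

```python
# Hypothetical item-to-subscale assignment (indices into a response list of
# 1-5 Likert ratings); the real EHARS items are in the published study.
ANXIETY_ITEMS = [0, 2, 4]
AVOIDANCE_ITEMS = [1, 3, 5]

def subscale_means(responses):
    """Mean score per attachment dimension from Likert responses (1-5)."""
    anx = sum(responses[i] for i in ANXIETY_ITEMS) / len(ANXIETY_ITEMS)
    avo = sum(responses[i] for i in AVOIDANCE_ITEMS) / len(AVOIDANCE_ITEMS)
    return {"anxiety": anx, "avoidance": avo}

# Example respondent: high anxiety items, low avoidance items.
print(subscale_means([5, 2, 4, 1, 5, 2]))
```

A respondent scoring high on the anxiety subscale and low on avoidance would match the profile described above: seeking emotional reassurance from AI rather than keeping distance from it.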

The implications of this research extend beyond the realm of human-AI relationships. The proposed EHARS can be used by developers or psychologists to assess how people relate to AI emotionally and adjust interaction strategies accordingly. This could lead to more empathetic responses in therapy apps, loneliness interventions, or caregiver robots.

Moreover, the findings suggest a need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.

As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. The research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI, promoting a better understanding of how humans connect with technology on a societal level. This, in turn, can guide policy and design practices that prioritize psychological well-being.
