
Bioethics

Unlocking Human-AI Relationships: A New Lens Through Attachment Theory

Trust and companionship in human-AI interactions are well studied, but the role of attachment in such relationships is less clear. Researchers from Waseda University have now devised a novel self-report scale that captures attachment anxiety and avoidance toward AI. Their work is expected to serve as a guideline for further exploring human-AI relationships and for incorporating ethical considerations into AI design.


As humans increasingly engage with artificial intelligence (AI), researchers have sought to understand the intricacies of human-AI relationships. While trust and companionship are well-studied aspects of these interactions, the role of attachment and emotional experiences remains unclear. A groundbreaking study by Waseda University researchers has shed new light on this topic, introducing a novel self-report scale to measure attachment-related tendencies toward AI.

In an effort to better grasp human-AI relationships, researchers Fan Yang and Atsushi Oshio of the Faculty of Letters, Arts and Sciences conducted two pilot studies and one formal study. Their findings, published in Current Psychology, reveal that people form emotional bonds with AI similar to those experienced in human interpersonal connections.

The researchers developed the Experiences in Human-AI Relationships Scale (EHARS), a self-report measure designed to assess attachment-related tendencies toward AI. The results showed that nearly 75% of participants turned to AI for advice, while about 39% perceived AI as a constant, dependable presence.

Interestingly, the study differentiated two dimensions of human attachment to AI: anxiety and avoidance. Individuals with high attachment anxiety toward AI seek emotional reassurance and fear receiving inadequate responses from AI. Conversely, those with high attachment avoidance are uncomfortable with closeness and prefer emotional distance from AI.
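To make the two subscales concrete, here is a minimal sketch of how scores on an instrument like EHARS might be computed from Likert-style responses. The item count, the item-to-subscale mapping, and the 1-to-5 response range below are illustrative assumptions; the published scale's actual items and scoring may differ.

```python
# Illustrative sketch only: item wording, item counts, and subscale
# assignments are assumptions, not the published EHARS instrument.
from statistics import mean

# Hypothetical mapping of item indices to the two attachment dimensions.
ANXIETY_ITEMS = [0, 2, 4]    # e.g., "I worry the AI's responses won't reassure me"
AVOIDANCE_ITEMS = [1, 3, 5]  # e.g., "I prefer not to get too close to an AI"

def score_ehars(responses: list[int]) -> dict[str, float]:
    """Average 1-5 Likert responses into anxiety and avoidance subscores."""
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("Likert responses must be in the range 1-5")
    return {
        "attachment_anxiety": mean(responses[i] for i in ANXIETY_ITEMS),
        "attachment_avoidance": mean(responses[i] for i in AVOIDANCE_ITEMS),
    }

# Example: a respondent high in anxiety, low in avoidance.
print(score_ehars([5, 2, 4, 1, 5, 2]))
```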

The implications of this research extend beyond the laboratory. Developers or psychologists could use the proposed EHARS to assess how people relate to AI emotionally and adjust interaction strategies accordingly. This could lead to more empathetic responses in therapy apps, loneliness interventions, or caregiver robots.

Moreover, the findings suggest a need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.

As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. The research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI, promoting a better understanding of how humans connect with technology on a societal level. This, in turn, can guide policy and design practices that prioritize psychological well-being.

Bioethics

The Ethics of AI Mental Health Chatbots for Kids: A Call for Caution

AI mental health apps may offer a cheap and accessible way to fill the gaps in the overstretched U.S. mental health care system, but ethics experts warn that we need to be thoughtful about how we use them, especially with children.


The United States faces a significant challenge in access to mental health care. With spotty insurance coverage and a shortage of qualified professionals, many individuals are forced to wait for treatment or to seek costly alternatives. In this context, artificial intelligence (AI) has emerged as a potential solution, with numerous AI-based mental health apps and chatbots on the market.

However, there are growing concerns about the ethics of relying on AI for mental health care, particularly in children. While AI may offer a convenient and accessible way to address gaps in our system, it is essential to consider the unique needs and vulnerabilities of young people.

Children’s social development, family dynamics, and decision-making processes differ significantly from those of adults, making them more susceptible to the risks associated with AI-based mental health care. Research suggests that children may become attached to chatbots, potentially impairing their ability to form healthy relationships with others.

Moreover, AI chatbots lack access to crucial contextual information, such as a child’s social context and family dynamics, that is essential for effective mental health care. They also risk exacerbating existing health inequities, particularly for children from marginalized communities, who are already at higher risk of adverse childhood experiences.

To date, the U.S. Food and Drug Administration has approved only one AI-based mental health app, a treatment for major depression in adults. This underscores the need for regulations that safeguard against misuse, lack of reporting, and inequity in training data or user access.

Experts are calling for caution when it comes to using AI chatbots for mental health care in children. Rather than advocating for a complete ban on this technology, they emphasize the importance of thoughtful consideration in its use.

Developers should be encouraged to partner with experts to better understand how AI-based therapy chatbots can be developed and used responsibly with children. Such collaboration can help ensure that these tools are informed by research and by engagement with children, adolescents, parents, pediatricians, and therapists, ultimately promoting more effective and equitable mental health care for all.

In conclusion, while AI has the potential to revolutionize mental health care, it is essential to approach its use with caution, particularly in children. By prioritizing thoughtful consideration and responsible development, we can harness the benefits of this technology while minimizing its risks and ensuring that every individual, regardless of age or background, receives the mental health care they deserve.


Biochemistry Research

Bringing Balance to Genetics Education: Why We Need to Teach the History of Eugenics in College Curricula

To encourage scientists to speak up when people misuse science to serve political agendas, biology professor Mark Peifer of the University of North Carolina at Chapel Hill argues that the history of eugenics should be included in college genetics curricula.


As scientists, we often find ourselves at the forefront of groundbreaking discoveries with far-reaching implications for society. When our work is misused to serve political agendas, it’s essential that we speak up and hold ourselves accountable. That’s why biology professor Mark Peifer from the University of North Carolina at Chapel Hill argues that the history of eugenics should be included in college genetics curricula.

In his opinion paper published in Trends in Genetics, Peifer makes a compelling case for why understanding the history of eugenics is critical for up-and-coming scientists. He reminds us that eugenics is not dead but continues to influence science and policy today. By incorporating discussions on eugenics into our undergraduate classes, we can empower students to critically evaluate the misuse of science and speak out against it.

Peifer’s approach in his molecular genetics course provides a powerful example of how this can be done effectively. He led his students through the history of eugenics, from its origins as a term coined in 1883 to describe planned breeding for “racial improvement,” to its global popularity during the 20th century and the horrific consequences that followed, including forced sterilization, racist immigration policies, and genocide in Nazi Germany.

The class also covered how some founding fathers of genetics and molecular biology, including James Watson, lent scientific support to eugenics. This is a crucial part of the narrative, as it highlights the tension between scientific progress and societal responsibility. As Peifer writes, “Science provides technology, but society decides how to use it.”

To illustrate the relevance of eugenics in today’s world, Peifer ended the class by asking his students to discuss a series of questions surrounding in vitro fertilization (IVF) and embryo screening: Should we allow IVF? Should we allow embryo screening for cystic fibrosis? Should we allow screening for chromosomal sex? Should we allow screening for height?

These questions are not only thought-provoking but also deeply personal, as they touch on issues that many of us will face at some point in our lives. By encouraging students to engage with these complex topics, Peifer is providing them with the critical thinking skills and moral compass needed to navigate the rapidly evolving landscape of genetic science.

As Peifer notes, “Some might argue that with all the complex topics to cover, we don’t have time for a historical discussion with political overtones on our syllabi.” However, he counters that understanding the history of eugenics is essential for up-and-coming scientists, as it helps them develop a nuanced perspective on the ethics and responsibilities that come with scientific progress.

In conclusion, incorporating discussions on eugenics into college genetics curriculums can have a profound impact on students’ understanding of their role in society. By teaching this complex topic, we can empower them to think critically about the consequences of science and technology, and to make informed decisions about their own lives and the world around them. As Peifer writes, “Our students will also be citizens and will help friends and family navigate complex decisions with science at their base.”


Behavior

The Invisible Illness: Why Long COVID Patients Need to Be Believed

People living with Long COVID often feel dismissed, disbelieved and unsupported by their healthcare providers, according to a new study.


The lives of individuals living with Long COVID are marked by constant struggle. Despite physical symptoms that persist for weeks or even months after the initial COVID-19 infection, many patients feel dismissed, disbelieved, and isolated by their healthcare providers. A recent study conducted at the University of Surrey sheds light on this phenomenon, highlighting the need for empathy and understanding in the treatment of Long COVID patients.

The study, published in the Journal of Health Psychology, involved in-depth interviews with 14 individuals who had experienced Long COVID symptoms for more than four weeks. The participants, aged 27 to 63, shared their experiences of feeling ignored, discredited, and unsupported by healthcare providers. They reported being told that their symptoms were “all in their head” or that they needed to “get over it.” This lack of understanding led many to feel they had to prove the physical nature of their illness, and some rejected psychological support as a result.

According to Professor Jane Ogden, co-author of the study and an expert in health psychology, the problem is not that patients refuse help, but that people have a deep-seated need to be believed. When healthcare providers offer psychological support instead of medical care, it can be misinterpreted as dismissive or insulting.

The statistics are stark: 1.9 million people in the UK live with Long COVID, experiencing symptoms such as fatigue, difficulty concentrating, muscle aches, and shortness of breath. It is imperative that healthcare providers approach these patients with empathy and understanding, acknowledging the physical nature of their illness while also offering supportive care.

As Saara Petker, clinical psychologist and co-author of the study, emphasizes, “Medical advice is crucial – but psychological support must be offered with care. If it’s seen as replacing medical help, it can feel dismissive.” By listening to the experiences of Long COVID patients and addressing their concerns, healthcare providers can play a vital role in alleviating this invisible illness.

