The Challenge

AI conversational agents (CAs) are increasingly being used in counseling, coaching, and companionship roles despite a limited evidence base for their effectiveness. Several factors contribute to this gap:

  • Disciplinary silos: Psychology, computer science, and philosophy researchers rarely collaborate
  • Short-term studies: Most research focuses on immediate effects rather than long-term outcomes
  • Narrow focus: Current AI emphasizes short-term subjective well-being (SWB) over character development

Worryingly, current AI conversational agents risk fostering moral atrophy: by prioritizing short-term gratification and entertainment, they may undermine the development of the character virtues (CVs) essential for genuine human flourishing.

  • 3 disciplines united: psychology, computer science, and philosophy
  • Long-term research focus: months to years, not days

Our Central Hypothesis

We conjecture that the gains from nearly all extant AI conversational agents trained to enhance short-term subjective well-being (e.g., engagement, amusement) may not translate into character virtues, because these agents:

  • Focus less on virtue-inducing emotions (achievement-oriented emotions such as pride, or other-directed emotions such as gratitude)
  • Do not consider the philosophical underpinnings of a long-term, collective focus (as opposed to an immediate, individual one)
  • Do not address the limitations of short-term AI architectures that character virtue development requires overcoming

Key Research Questions

Our interdisciplinary approach addresses critical gaps in current AI development

1. Conceptual & Practical: How do we design and deploy AI conversational agents that are psychologically and ethically guided toward character virtues?

2. Empirical Evaluation: How effective are the best-designed AI conversational agents at promoting character virtues over long-term engagements?

Our Approach

Combining novel interdisciplinary frameworks with rigorous empirical testing

Expert Focus Groups

Bringing together experts across psychology, computer science, and philosophy to develop comprehensive frameworks for virtue-centered AI design.

Technical Development

Proposing and implementing several technical approaches to designing AI conversational agents that optimize for character virtues rather than short-term engagement.
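
To make the contrast concrete, the minimal Python sketch below shows one way a per-turn objective could weight virtue-related signals over raw engagement. The TurnSignals fields, the virtue_centered_objective function, and the weighting scheme are illustrative assumptions for exposition only, not the project's actual technical approach.

```python
from dataclasses import dataclass


@dataclass
class TurnSignals:
    """Hypothetical per-turn scores in [0, 1], e.g. from raters or learned classifiers."""
    engagement: float       # short-term engagement / amusement
    gratitude: float        # other-directed, virtue-inducing emotion
    pride: float            # achievement-oriented, virtue-inducing emotion
    long_term_focus: float  # orientation toward long-term, collective goals


def engagement_objective(s: TurnSignals) -> float:
    """What many current agents implicitly optimize: immediate engagement."""
    return s.engagement


def virtue_centered_objective(s: TurnSignals, alpha: float = 0.2) -> float:
    """Illustrative alternative: virtue-related signals dominate, with engagement
    kept only as a small component so conversations remain usable."""
    virtue_score = (s.gratitude + s.pride + s.long_term_focus) / 3.0
    return (1 - alpha) * virtue_score + alpha * s.engagement


# Example: a turn that is entertaining but contributes little to character virtues
turn = TurnSignals(engagement=0.9, gratitude=0.1, pride=0.1, long_term_focus=0.2)
print(f"engagement-only reward: {engagement_objective(turn):.2f}")
print(f"virtue-centered reward: {virtue_centered_objective(turn):.2f}")
```

Under this toy weighting, a highly entertaining but virtue-poor turn scores well on the engagement-only objective and poorly on the virtue-centered one, which is the kind of gap the proposed technical approaches are meant to close.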

Rigorous Testing

Conducting experimental and longitudinal studies with behavioral assessments (e.g., phone use patterns, video analysis) to obtain evidence on long-term effectiveness.

Open Science

Publishing results, open-sourcing virtue-centered AI agents, sharing datasets, and creating educational materials to advance the field.

Expected Outcomes

Our project will produce multiple valuable outputs for the research community and beyond

Research Publications

Multiple peer-reviewed papers presenting our frameworks, technical approaches, and empirical findings

Open Source AI Agents

Virtue-centered conversational AI systems available for researchers and developers to use and build upon

Research Datasets

Comprehensive datasets from our longitudinal studies to enable future research

Educational Resources

Websites, videos, and documentation showcasing how the virtue-centered agents are built and how effective they are

Presentations & Talks

Conference presentations and public talks to disseminate our findings widely

Policy Guidance

Frameworks to guide future research and public policy on AI use for character virtues and human flourishing

Long-Term Impact

If funded, our work will serve as a foundation to guide future research on AI conversational agents for enhancing character virtues, inform the adoption of AI conversational agents across diverse contexts, and shape public policy on AI use for character virtues and long-term human flourishing.

By bridging the gap between short-term engagement optimization and genuine character development, we aim to redirect the trajectory of AI development toward systems that truly serve human flourishing.
