Conversational agents (CAs) such as Alexa and Siri are designed to answer questions, offer suggestions, and even display empathy. However, new research finds that they do poorly compared with humans when interpreting and exploring a user's experience. CAs are powered by large language models (LLMs) that ingest massive amounts of human-produced data, and thus can be prone to the same biases as the humans who produced that data. Researchers from Cornell University, Olin College and Stanford University tested this theory by prompting CAs to display empathy while conversing.