Articles

  • 2 weeks ago | techxplore.com | Brian D. Earp |Sebastian Mann |Simon M. Laham

    "I'm really not sure what to do anymore. I don't have anyone I can talk to," types a lonely user to an AI chatbot. The bot responds: "I'm sorry, but we are going to have to change the topic. I won't be able to engage in a conversation about your personal life." Is this response appropriate? The answer depends on what relationship the AI was designed to simulate. Different relationships have different rulesAI systems are taking up social roles that have traditionally been the province of humans.

  • 2 weeks ago | theconversation.com | Brian D. Earp |Sebastian Mann |Simon M. Laham

    “I’m really not sure what to do anymore. I don’t have anyone I can talk to,” types a lonely user to an AI chatbot. The bot responds: “I’m sorry, but we are going to have to change the topic. I won’t be able to engage in a conversation about your personal life.” Is this response appropriate? The answer depends on what relationship the AI was designed to simulate. Different relationships have different rules. AI systems are taking up social roles that have traditionally been the province of humans.

  • Nov 25, 2024 | nyaspubs.onlinelibrary.wiley.com | Brian D. Earp |Sebastian Mann |Peng Liu |Ivar R. Hannikainen

    Introduction: Since the introduction of large language models (LLMs), generative artificial intelligence (AI) has become a focal point of debate [1]. The impressive generative capabilities of LLMs enable the production of high-quality outputs [2]. However, this technology is not without its challenges: it has the potential to generate both beneficial and harmful content [3]. Whether positive or negative, the content generated by AI results from the interaction between the prompting human and the AI...

  • Nov 12, 2024 | nature.com | Sebastian Mann |Anuraag A Vazirani |Brian D. Earp |Timo Minssen |I. Glenn Cohen

    In this Comment, we propose a cumulative set of three essential criteria for the ethical use of LLMs in academic writing, and present a statement that researchers can quote when submitting LLM-assisted manuscripts to attest to their adherence to these criteria.

  • Oct 30, 2024 | researchgate.net | Brian D. Earp |Peng Liu

    H1: For equivalent beneficial outcomes, more credit will be attributed to a human user when using a personalized versus standard LLM. H2: For equivalent harmful outcomes, blame attributions will be comparable regardless of LLM type. Method, procedure and measures: We created six vignettes based on a 3 (LLM condition: personalized vs. standard vs. control, in which no LLM is used) × 2 (outcome condition: beneficial vs. harmful) between-subjects design.
