Large language models (LLMs) trained on previous iterations of AI-generated material produce outputs that lack substance and nuance, a new study has found. The findings present a new challenge for AI developers, who rely on limited human-generated data sets for content.