1. Legal and ethical issues: Large language models (LLMs) can infringe copyright by producing content that is too similar to the original works they were trained on, and few legal safeguards currently prevent service providers from training their models on data collected without consent.
2. Accuracy and bias issues: LLMs are only as good as the data they were trained on, and online learning makes it difficult to vet all of that data for accuracy, fairness, and bias. The infamous Twitter bot Tay showed how easily learning algorithms can be subverted by malicious actors to spread misinformation, inflame hatred, and incite violence.
3. Behavioral issues: Emotional AI applications designed to recognize human emotions can give advice that appears technically sound but proves harmful in certain circumstances, or when context is missing or misunderstood. An AI counseling experiment run by the mental health tech company Koko drew criticism for giving users responses partially or wholly written by AI without adequately informing them that they were not interacting with real people.