“Medical knowledge and practices change and evolve over time, and there’s no telling where in the timeline of medicine ChatGPT pulls its information from when stating a typical treatment,” she says. “Is that information recent or is it dated?”
Users also need to be aware of how ChatGPT-style bots can present fabricated, or “hallucinated,” information in a superficially fluent way, potentially leading to serious errors if a person doesn’t fact-check an algorithm’s responses. And AI-generated text can influence humans in subtle ways. A…