Large language models (LLMs), algorithms trained on massive sets of data to predict and generate text, can be subject to “hallucination,” or making things up, Khan warned.

“We need to create … some guardrails around it, because as you can imagine, LLMs could amplify misinformation and that doesn’t help us,” he said.