Large language models are a type of artificial intelligence currently taking the world by storm. They include OpenAI’s ChatGPT, Google’s Bard and various others. All are trained on vast collections of written text, from which they learn how likely each word is to appear given the sequence of words that comes before it.
Armed with that knowledge, the AI responds to a prompt by generating, one word at a time, the sequence of words the model judges most likely to follow. Computer scientists have further refined these processes and fine-tuned the systems to improve their output.
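The core idea of next-word prediction can be illustrated with a toy sketch. The code below is not a real language model: it counts, in a tiny made-up corpus, how often each word follows another, then generates text by always picking the most frequent follower. Real large language models do something analogous at vastly greater scale, conditioning on many preceding words rather than just one.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# counts[w] tallies which words were observed immediately after w
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

def generate(start, length):
    """Greedily extend `start` by repeatedly choosing the likeliest next word."""
    words = [start]
    for _ in range(length):
        words.append(most_likely_next(words[-1]))
    return " ".join(words)

print(generate("the", 3))
```

Such a greedy, one-word-of-context model produces repetitive, corpus-bound text; modern systems instead use neural networks over long contexts and sample from the predicted probabilities rather than always taking the top word.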
The results are by turns impressive, confusing and frightening. These AIs can write jokes, produce poetry and mimic literary styles. But they also confidently make mistakes, sometimes compounding them with a cascade of further errors, a phenomenon AI engineers call hallucination.