AI Tool Radar

What is a Large Language Model (LLM)?

A neural network trained on massive text corpora that can generate, translate, summarize, and reason over natural language.

Full Definition

A Large Language Model (LLM) is a deep neural network — typically based on the transformer architecture — trained on hundreds of billions to trillions of tokens of text drawn from the internet, books, code repositories, and other sources. Through this training, the model learns statistical patterns of language that allow it to predict the next token in a sequence, which in turn enables it to generate coherent text, answer questions, translate between languages, summarize documents, and perform complex reasoning. Notable LLMs include GPT-5 (OpenAI), Claude (Anthropic), Gemini (Google), and Llama (Meta). The 'large' in the name refers to the sheer number of parameters — often in the tens or hundreds of billions — which gives the model its broad generalization capability.
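The core mechanic described above, predicting the next token from preceding context, can be sketched with a deliberately tiny stand-in: a bigram model estimated from word counts. This is an illustrative assumption, not how real LLMs work (they use transformer networks with billions of parameters over subword tokens), but it shows the same predict-then-generate loop in miniature.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each token follows each preceding token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token_distribution(token):
    """Return P(next | token) as a dict, estimated from bigram counts."""
    counts = bigrams[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def generate(start, length=5):
    """Greedy generation: repeatedly append the most likely next token."""
    out = [start]
    for _ in range(length):
        dist = next_token_distribution(out[-1])
        if not dist:  # no continuation seen in training data
            break
        out.append(max(dist, key=dist.get))
    return " ".join(out)
```

Here `next_token_distribution("the")` assigns "cat" a probability of 0.5, since "cat" follows "the" in half of the observed cases. An LLM does the same thing at vastly greater scale: instead of lookup tables, a transformer computes the distribution over its entire vocabulary from the full preceding context, which is what makes coherent long-form generation, translation, and summarization possible.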
