What is a Large Language Model (LLM)?
A neural network trained on massive text corpora that can generate, translate, summarize, and reason over natural language.
Full Definition
A Large Language Model (LLM) is a deep neural network — typically based on the transformer architecture — trained on hundreds of billions to trillions of tokens of text drawn from the internet, books, code repositories, and other sources. Through this training, the model learns statistical patterns of language that allow it to predict the next token in a sequence, which in turn enables it to generate coherent text, answer questions, translate between languages, summarize documents, and perform complex reasoning. Notable LLMs include GPT-5 (OpenAI), Claude (Anthropic), Gemini (Google), and Llama (Meta). The 'large' in the name refers to the sheer number of parameters — often in the tens or hundreds of billions — which gives the model its broad generalization capability.
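The next-token prediction described above can be illustrated with a deliberately tiny sketch. Real LLMs use transformer networks over subword tokens and billions of parameters; the toy bigram model below (corpus and function names are illustrative, not from any real system) just counts which word follows which, then picks the most frequent successor — the same objective at a miniature scale.

```python
from collections import Counter, defaultdict

# Toy corpus — illustrative only; real LLMs train on trillions of tokens.
corpus = "the cat sat on the mat the cat ran".split()

# Count bigram frequencies: how often each token follows each token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most likely next token and its estimated probability."""
    counts = follows[token]
    total = sum(counts.values())
    word, n = counts.most_common(1)[0]
    return word, n / total

print(predict_next("the"))  # "cat" follows "the" in 2 of 3 occurrences
```

An LLM does the same thing in spirit — estimate a probability distribution over the next token given the context — but conditions on thousands of preceding tokens through learned attention weights rather than a one-word lookup table.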
Tools that use Large Language Model (LLM)
ChatGPT
The most widely used AI assistant with 900M+ weekly users
Claude
Best-in-class reasoning with 1M token context window
Gemini
Google's AI assistant with deep Workspace integration and 1M token context
Grok
xAI chatbot with real-time X data and fast API access
Perplexity
AI-powered search engine with real-time citations and source transparency
Microsoft Copilot
AI assistant integrated into Microsoft 365, Windows, and Edge