5 min read

LLM
Large language model, or a very strong “next-word guesser”

Published: April 22, 2026
Updated: April 22, 2026
Category: AI
Illustration of language models generating text (Google DeepMind Visualising AI)

“LLM” went mainstream with tools like ChatGPT. In plain language: a large language model is software trained to work with text at scale—and its core trick is deceptively simple.

An LLM is a whiz at “what comes next?”

In one sentence, an LLM is a very capable next-token predictor: given some text, it scores what should follow, then repeats.

If you hear “Once upon a time, in a certain place…” you can already guess a few likely continuations. The model has seen enormous amounts of text, so it internalizes many such patterns—not by memorizing your chat, but by learning statistics of language at scale.

Stack many of those tiny decisions in a row, and you get a long reply that reads as if someone planned it—even though the training objective was basically “make the next word look plausible.”
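To make that loop concrete, here is a deliberately tiny Python sketch. It only counts which word follows which in one short example sentence (a stand-in for real training data) and then keeps appending the most common follower. A real LLM scores candidates with a neural network and samples from a probability distribution instead of always taking the top word, but the "predict, append, repeat" shape is the same.

```python
from collections import Counter, defaultdict

# A stand-in "training corpus": one short sentence instead of trillions of words.
corpus = "once upon a time in a certain place there lived a kind old man".split()

# Count which word tends to follow which (simple bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, steps: int = 8) -> str:
    """Repeatedly pick the most frequent next word and append it."""
    words = [start]
    for _ in range(steps):
        candidates = following[words[-1]]
        if not candidates:
            break  # nothing ever followed this word in our tiny corpus
        # A real LLM samples from a probability distribution here;
        # this toy just takes the single most common follower.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("once"))  # starts "once upon a time in a ..." and keeps chaining
```

Swap the word counting for a trained neural network and the greedy pick for sampling, and you have the skeleton of how a chat reply gets produced, one token at a time.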

Why the word “large”?

“Large” usually points to two things:

  • Training data: far more text (and code, and more) than any human can read—books, the web, forums, and other sources, filtered and mixed depending on the project.
  • Parameters: the adjustable numbers inside the model. Frontier systems can have hundreds of billions to trillions of them. More capacity often (not always) means richer patterns—and heavier compute to train and run.

As scale grows, models can pick up not only word n-grams but also more context, nuance, and something closer to “common sense” in language—still with limits and blind spots, but impressively useful for many tasks.
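A rough back-of-envelope shows why those parameter counts translate into heavy hardware. The sketch below assumes 2 bytes per parameter (16-bit weights) purely for illustration; actual figures vary with the model and the numeric precision used.

```python
def weights_memory_gb(num_parameters: float, bytes_per_parameter: float = 2) -> float:
    """Memory needed just to store the weights, assuming 2 bytes per parameter (16-bit floats)."""
    return num_parameters * bytes_per_parameter / 1e9

# Illustrative sizes only, not the specs of any particular model.
for size in (7e9, 70e9, 1e12):
    print(f"{size:.0e} parameters -> about {weights_memory_gb(size):,.0f} GB of weights")
# 7e+09 parameters -> about 14 GB of weights
# 7e+10 parameters -> about 140 GB of weights
# 1e+12 parameters -> about 2,000 GB of weights
```

And that is only storage; training also has to update every one of those numbers, over and over, across the whole dataset.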

Summary

  • An LLM builds text by repeatedly choosing high-probability next words (tokens), not by “looking things up” like a search engine in one step.
  • “Large” points to huge training data and very big models (many parameters).
  • Today’s chat assistants (e.g. ChatGPT-class products, Gemini) are often LLMs with extra tooling and safety layers on top.


Read this article in Japanese

Related

More in English; the Japanese site has the full set of explainers too.

ChatGPT (EN)

The chat product built on an LLM-style stack, in everyday words.


Deep learning (EN)

How layered models learn from data—often under the hood of LLMs.


LLM (JP)

Japanese version of this article.

Author

written by

RosyRuby🌹 / IT writer

Making technology understandable, one plain-language article at a time.

Read next

category: AI

Prompts (EN)

Steer the model with clear instructions.


category: AI

GPU (EN)

The accelerators that make big training runs practical.


category: AI

Prompts (JP)

The Japanese explainer for asking AI clearly.
