Context Window

Technical Capabilities

The maximum amount of text that an AI model can process and remember in a single interaction.

Detailed Definition

The context window is the maximum amount of text, measured in tokens, that a language model can process and hold in working memory during a single interaction or conversation. This limit matters because it determines how much context the model can consider when generating a response. For example, a model with a 4,000-token context window can 'remember' roughly 3,000 words of conversation history (a token averages about three-quarters of an English word) and use that information to inform its responses. When a conversation exceeds the limit, its oldest parts are 'forgotten.'

Recent advances have dramatically increased context windows: models like GPT-4 Turbo support up to 128,000 tokens, and some newer models handle even longer contexts. Larger context windows enable models to maintain coherence over longer conversations, process longer documents, and perform more complex reasoning tasks that require considering extensive information.
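The 'forgetting' behavior described above can be sketched as a simple truncation strategy: keep the most recent messages whose combined token count fits the budget, dropping the oldest first. The sketch below is illustrative, not any vendor's actual implementation, and the 4-characters-per-token estimate is a rough heuristic; real applications would use the model's own tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text.
    A real system would use the model's tokenizer instead."""
    return max(1, len(text) // 4)

def fit_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the newest messages that fit within the token budget.
    Older messages are dropped first, mirroring how conversation
    history outside the context window is 'forgotten'."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = estimate_tokens(msg)
        if total + cost > max_tokens:
            break                        # this and all older messages are dropped
        kept.append(msg)
        total += cost
    return list(reversed(kept))          # restore chronological order

# Hypothetical conversation history: three long messages.
history = ["msg one " * 50, "msg two " * 50, "msg three " * 50]
trimmed = fit_to_window(history, max_tokens=250)
```

With a 250-token budget only the two most recent messages survive; the oldest is silently discarded, which is why long conversations can lose track of early details.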