ChatGPT Web: Context Window and Token Limit

ChatGPT is a powerful AI tool that can assist developers in writing code. However, it has a context window with a token limit that can affect its performance. This post explores these limitations and how developers can work around them.

ChatGPT Context Window Token Limit

A common question that developers have when using ChatGPT is about the context window token limit.

The context window is the portion of the conversation history that ChatGPT considers when generating a response. Messages beyond this window (those sent earlier in the conversation) are ignored. The context window matters because it determines how much of the conversation history ChatGPT can draw on when generating a response.

The token limit of the context window is the maximum number of tokens that ChatGPT can consider within the context window. If the conversation history exceeds this token limit, ChatGPT will truncate the history and only consider the most recent tokens.

While there is no official documentation on the exact context window size for the ChatGPT web interface, empirical evidence suggests that the token limit is around 4,096 to 8,192 tokens.

Number of Tokens for Text

So how much text or code does this translate to?

According to the OpenAI Tokenizer calculator and the OpenAI Pricing FAQ:

A helpful rule of thumb is that one token generally corresponds to about 4 characters of text for common English text. This translates to roughly ¾ of a word (so 100 tokens ~= 75 words).

For English text, 1 token is approximately 4 characters or 0.75 words. As a point of reference, the collected works of Shakespeare are about 900,000 words or 1.2M tokens.
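These rules of thumb translate directly into two quick back-of-the-envelope estimators. This is a rough heuristic sketch, not a real tokenizer (for exact counts you would use OpenAI's tokenizer); the function names are our own.

```python
def tokens_from_chars(text: str) -> int:
    """Estimate tokens using ~4 characters per token (English text)."""
    return round(len(text) / 4)

def tokens_from_words(text: str) -> int:
    """Estimate tokens using ~0.75 words per token (English text)."""
    return round(len(text.split()) / 0.75)

sample = "The quick brown fox jumps over the lazy dog"  # 9 words, 43 characters
```

On this sample the two estimates land close together (11 vs. 12 tokens), which is about as much agreement as a character-count heuristic can offer; real tokenizers split on subwords, so punctuation-heavy or non-English text will drift further from these numbers.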

Number of Tokens for Source Code

Here are more sample calculations for the number of tokens in source code for different programming languages:

  • React JSX (100 lines): 700 tokens
  • React JSX (200 lines): 1,500 tokens
  • SQL script (100 lines): 1,150 tokens
  • SQL script (200 lines): 2,500 tokens
  • Python source code file (100 lines): 1,000 tokens
  • Python source code file (200 lines): 1,700 tokens

So if you are working with several large source code files with thousands of lines, you might exceed the token limit of ChatGPT's context window.
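As a rough pre-flight check, you can apply the same 4-characters-per-token heuristic to your own files before pasting them in. This is a hedged sketch: the helper names are hypothetical, the file content below is synthetic, and the 4,096 default reflects the lower end of the observed limit.

```python
def estimate_file_tokens(source: str) -> int:
    """Rough token estimate for a source file (~4 characters per token)."""
    return len(source) // 4

def fits_in_context(sources, budget: int = 4096) -> bool:
    """Check whether the combined estimate stays within the token budget."""
    return sum(estimate_file_tokens(s) for s in sources) <= budget

# Synthetic stand-in for a ~2,000-line Python file
big_file = "x = 1\n" * 2000

print(fits_in_context([big_file]))             # one file fits
print(fits_in_context([big_file, big_file]))   # two files exceed the budget
```

A single trivial 2,000-line file squeaks under the 4,096-token budget here, but two of them do not, which matches the warning above about multi-file projects.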

Working Around the Limitations

To work around the context window and token limit, you can take the following approaches:

  1. Break Down Inputs: Instead of feeding the entire source code file at once, break it down into smaller chunks and feed them sequentially to ChatGPT. This way, you can ensure that the context window and token limit are not exceeded.

  2. Use Relevant Context: Focus on providing the most relevant context to ChatGPT. Instead of feeding the entire conversation history, provide only the most recent and relevant information to generate accurate responses.

  3. Optimize Code: If you are working with large source code files, consider optimizing the code to reduce the number of tokens. Remove unnecessary comments, whitespace, and redundant code to make the input more concise.
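The first approach above, breaking a large file into smaller chunks, can be sketched with the same character-based heuristic. This splits on line boundaries so no statement is cut mid-line; the function name and the 2,000-token default are our own illustrative choices.

```python
def chunk_source(code: str, max_tokens: int = 2000):
    """Split source code into line-aligned chunks under a token budget
    (tokens estimated at ~4 characters each)."""
    max_chars = max_tokens * 4
    lines = code.splitlines(keepends=True)
    chunks, current, size = [], [], 0
    for line in lines:
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))  # flush the full chunk
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))      # flush the final partial chunk
    return chunks

# Synthetic 5,000-line file split into ~1,000-token chunks
parts = chunk_source("print(1)\n" * 5000, max_tokens=1000)
```

Each chunk can then be sent to ChatGPT in sequence, ideally with a one-line note ("part 2 of N") so the model knows more code is coming.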

Using a workflow automation tool like 16x Prompt can help streamline the process of breaking down inputs and managing the context window and token limit effectively:

16x Prompt
  • Code Context Management: You can select which source code files to include in the context window and manage the token limit effectively.
  • Token Limit Monitoring: Keep track of the token count to ensure that you do not exceed the limit.
  • Code Optimization: The tool can help you optimize the code by removing unnecessary comments, whitespace, and redundant code to reduce the token count.
  • Code Refactoring: You can leverage the tool to perform refactoring tasks to break down large code files into smaller, more manageable chunks.

Download 16x Prompt

Join 700+ users from top companies. Boost your productivity with ChatGPT coding.