
Anthropic's Generative AI Chatbot Claude Expands Context Window From 9K to 100K Tokens, Can Read a Novel and Write a Book Report in Under a Minute

Generative AI startup Anthropic has expanded the context window of its conversational AI chatbot Claude from 9,000 tokens to 100,000 tokens. Claude can now, theoretically, absorb around 75,000 words at once, enough to read a full novel and write a report on it in a minute or less.

Claude Context

A context window defines how much content from a prompt the model can draw on when answering questions, while tokens are the digestible chunks that large language models break text into, with a commonly cited average of about three-quarters of a word per token. 100,000 tokens is a huge leap from an already advanced position. For comparison, OpenAI’s ChatGPT uses 4,000 tokens, while the standard version of GPT-4 offered to developers comes in at 8,000 tokens. Even the biggest GPT-4 variant only reaches 32,000 tokens, not quite a third of the new Claude’s limit. A larger context window means a conversation with Claude is far less likely to derail into made-up answers because the model has run out of room to remember what was said earlier, which is why Microsoft initially capped Bing AI conversations at five responses.
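
As a rough illustration of how those context windows translate into word counts, the Python sketch below counts tokens with OpenAI's open-source tiktoken tokenizer. Claude's own tokenizer is not public, so tiktoken is only a stand-in here, and the ~0.75 words-per-token figure is the usual rule of thumb rather than an exact conversion.

```python
# Rough illustration of how context window sizes relate to word counts.
# tiktoken is OpenAI's tokenizer; Claude uses its own, so the numbers
# below are approximations of the word-to-token ratio, not exact figures.
import tiktoken

CONTEXT_WINDOWS = {
    "ChatGPT": 4_000,
    "GPT-4 (standard)": 8_000,
    "GPT-4 (32K)": 32_000,
    "Claude (100K)": 100_000,
}

def estimate_tokens(text: str) -> int:
    """Count tokens using the cl100k_base encoding as a stand-in tokenizer."""
    enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(text))

sample = "Claude can now absorb roughly 75,000 words in a single prompt."
n_tokens = estimate_tokens(sample)
n_words = len(sample.split())
print(f"{n_words} words -> {n_tokens} tokens "
      f"(~{n_words / n_tokens:.2f} words per token)")

for name, window in CONTEXT_WINDOWS.items():
    # ~0.75 words per token is the commonly cited rule of thumb
    print(f"{name}: {window:,} tokens ≈ {int(window * 0.75):,} words")
```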

A human reader would need roughly five hours just to get through a 100,000-token text, plus additional time to think about it, analyze it, and discuss it with others. Claude can do the same in under a minute, about 0.33% of the reading time alone. As a test, Anthropic uploaded The Great Gatsby to Claude, then did so again with a single modified line describing Mr. Carraway as “a software engineer that works on machine learning tooling at Anthropic.” The AI picked out the difference in under 22 seconds.

“Beyond just reading long texts, Claude can help retrieve information from the documents that help your business run. You can drop multiple documents or even a book into the prompt and then ask Claude questions that require synthesis of knowledge across many parts of the text,” Anthropic explained in a blog post. “For complex questions, this is likely to work substantially better than vector search based approaches. Claude can follow your instructions and return what you’re looking for, as a human assistant would!”
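
The workflow Anthropic describes amounts to pasting an entire document into the prompt and asking a question that requires pulling details from across it. The sketch below shows what that might look like with Anthropic's Python SDK; it uses the current Messages API, which postdates this announcement, and the model name, file name, and question are placeholders rather than anything from Anthropic's example.

```python
# Minimal sketch of the long-document workflow described above:
# put a whole book into the prompt, then ask a question that requires
# synthesizing information across it. Model name and file are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("great_gatsby.txt", encoding="utf-8") as f:
    book_text = f.read()  # an entire novel fits inside a 100K-token window

question = "What does Nick Carraway do for a living, and where is it mentioned?"

response = client.messages.create(
    model="claude-2.1",  # placeholder; use whichever long-context model you have access to
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"{book_text}\n\n{question}",
    }],
)

print(response.content[0].text)
```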

Anthropic probably isn’t going to set off a rapid race to expand context windows, at least for now. Companies are still experimenting with generative AI and haven’t yet started demanding context windows of that size. And the expense is potentially prohibitive. Enterprises license the standard 8,000-token GPT-4 model API at a rate of $0.03 for every thousand tokens submitted in a prompt and $0.06 for every thousand tokens in the AI’s response; the 32,000-token model costs double. Anthropic hasn’t changed Claude’s pricing yet, charging $0.01 and $0.03 per thousand tokens for prompts and completions, respectively. Anthropic has a $300 million capital runway from investors, but the price seems certain to rise, barring future technical breakthroughs.
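
To put those per-thousand-token rates in perspective, here is a back-of-the-envelope comparison. The prompt and response sizes are purely illustrative, and only the 100K Claude model could actually accept a 100,000-token prompt; the GPT-4 figures are shown at the same nominal volume just to compare pricing.

```python
# Back-of-the-envelope cost comparison using the per-thousand-token rates
# quoted above. Token counts are illustrative assumptions, not measurements.
PRICES = {  # (prompt $/1K tokens, completion $/1K tokens)
    "GPT-4 8K": (0.03, 0.06),
    "GPT-4 32K": (0.06, 0.12),
    "Claude": (0.01, 0.03),
}

prompt_tokens, completion_tokens = 100_000, 1_000  # hypothetical request size

for model, (price_in, price_out) in PRICES.items():
    # Note: only the 100K Claude model can actually fit a prompt this large.
    cost = (prompt_tokens / 1_000) * price_in + (completion_tokens / 1_000) * price_out
    print(f"{model}: ${cost:.2f} per request")
```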
