Anthropic Raises $450M to Accelerate Generative AI Assistant Development
Generative AI startup Anthropic has raised $450 million in a Series C funding round led by Spark Capital. The new capital ups the total venture capital investment in Anthropic to about $1.45 billion in only a couple of years, though the company hasn’t disclosed its current valuation.
Anthropic, the creator of the generative AI chatbot Claude, is seen as an alternative to OpenAI and ChatGPT. The startup had early backing from Google, which also provides its cloud services. It recently partnered with Zoom, an investor in the new round, to embed Claude into Zoom’s communications platform. Spark’s leadership in the round comes not long after the VC firm hired former Anthropic investor Fraser Kelton, who also happens to be OpenAI’s former head of product.
“We are thrilled that these leading investors and technology companies are supporting Anthropic’s mission: AI research and products that put safety at the frontier,” Anthropic CEO Dario Amodei said. “The systems we are building are being designed to provide reliable AI services that can positively impact businesses and consumers now and in the future.”
Anthropic’s ambitions include extending Claude’s reach and technical functions. The startup already made a leap in that direction this month when it widened Claude’s context window from 9,000 tokens to 100,000 tokens, meaning Claude can process and remember around 75,000 words at once. For comparison, the standard version of OpenAI’s GPT-4 model has an 8,000-token context window, and the largest variant maxes out at 32,000 tokens. Beyond Claude, Anthropic reportedly wants to get ahead of the current field when it comes to designing personalized generative AI assistants.
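The 75,000-word figure follows from a common rule of thumb for English text: roughly 0.75 words per token. A minimal sketch of that conversion, assuming the 0.75 ratio (an approximation that varies by tokenizer and text, not an exact measure):

```python
# Common rule of thumb (an assumption, not an exact tokenizer):
# English prose averages roughly 0.75 words per token.
WORDS_PER_TOKEN = 0.75

def approx_words(context_tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(context_tokens * WORDS_PER_TOKEN)

print(approx_words(100_000))  # Claude's expanded window: ~75,000 words
print(approx_words(8_000))    # GPT-4 standard window: ~6,000 words
print(approx_words(32_000))   # GPT-4 largest variant: ~24,000 words
```

Applying the same ratio in reverse, Claude's earlier 9,000-token window corresponded to roughly 6,750 words, so the jump to 100,000 tokens is more than a tenfold increase in usable text.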
“Anthropic was founded to build AI products that people can rely on and generate research about the opportunities and risks of AI,” the company said in a statement. “Our team is focused on AI alignment techniques that allow AI systems to better handle adversarial conversations, follow precise instructions, and generally be more transparent about their behaviors and limitations.”