OpenAI Upgrades GPT-4 and GPT-3.5 Turbo Models, Reduces API Prices

OpenAI is rolling out updates for the GPT-3.5 Turbo and GPT-4 large language models and cutting the cost for organizations to access them through its APIs. The improved models also include a new ‘function calling’ feature that lets developers describe application functions to the model conversationally; the model can then decide to return a structured JSON object with the arguments needed to call those functions, an increasingly common capability in enterprise generative AI services.

Function Calling Context

Developers can employ function calling to build chatbots able to retrieve information beyond what the large language model and local databases hold, reaching out to approved tools from outside the organization. Crucially, the model can recognize when a function should be called and automatically translate a natural language request into structured arguments suitable for, say, a database query. The model then reformulates the tool’s response into a ChatGPT-style answer.

“Developers can now describe functions [to the model], and have the model intelligently choose to output a JSON object containing arguments to call those functions. This is a new way to more reliably connect GPT’s capabilities with external tools and APIs,” OpenAI explained in a blog post. “These models have been fine-tuned to both detect when a function needs to be called (depending on the user’s input) and to respond with JSON that adheres to the function signature. Function calling allows developers to more reliably get structured data back from the model.”
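The round trip described above can be sketched in a few lines of Python. The function schema below follows the JSON format OpenAI published for function calling, but `get_current_weather`, its parameters, and the simulated model response are all illustrative assumptions rather than a real exchange with the API:

```python
import json

# Function schema in the shape OpenAI's function-calling models expect.
# The name, description, and parameters are hypothetical examples.
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Boston"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def get_current_weather(city, unit="celsius"):
    # Stand-in for a real call to an external weather service.
    return {"city": city, "temperature": 22, "unit": unit}

def dispatch(function_call, registry):
    """Run the function the model selected, using its JSON-string arguments."""
    args = json.loads(function_call["arguments"])
    return registry[function_call["name"]](**args)

# Simulated model output: instead of a plain-text answer, the fine-tuned
# model returns the chosen function name plus a JSON string of arguments.
model_response = {
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"city": "Boston", "unit": "celsius"}',
    }
}

result = dispatch(model_response["function_call"],
                  {"get_current_weather": get_current_weather})
print(result)  # {'city': 'Boston', 'temperature': 22, 'unit': 'celsius'}
```

In a live integration, the developer would pass `weather_function` to the chat API, receive a genuine `function_call` object back, run `dispatch`, and send the result to the model for a final conversational answer.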

OpenAI has also produced a new variant of GPT-3.5 Turbo with a much larger context window. A context window contains the content from a prompt that a model will use to answer questions, as measured in digestible language bits called tokens, with a token corresponding to roughly three-quarters of an English word. The previous version of GPT-3.5 Turbo supports 4,000 tokens, while GPT-4 comes in 8,000-token and 32,000-token variants. The new Turbo model’s context window quadruples the token capacity to 16,000, or about 20 pages of text. That’s nowhere near Anthropic’s recently released Claude generative AI with a context window of 100,000 tokens, but for everyday enterprise use, it’s probably plenty big.

Businesses are also likely to find the prices for both model sizes appealing. OpenAI has cut the smaller one’s input price by 25%, to $0.0015 per 1,000 input tokens, with output tokens priced at $0.002 per 1,000. And though the larger version’s context window is four times the size, the price is only double what the smaller one costs, coming to $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens. The Ada model OpenAI developed for text embeddings has also become a lot cheaper: it now costs $0.0001 per 1,000 tokens, a 75% reduction. OpenAI pointed to success in making the models more efficient as the source of the price changes.
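The quoted rates make per-request costs easy to compute. The snippet below uses the prices from the article; the model labels in the dictionary are shorthand for this example, not official API model names:

```python
# Per-1,000-token prices quoted in the article (USD).
# The dictionary keys are informal labels, not official model IDs.
PRICES = {
    "gpt-3.5-turbo-4k":  {"input": 0.0015, "output": 0.002},
    "gpt-3.5-turbo-16k": {"input": 0.003,  "output": 0.004},
    "ada-embedding":     {"input": 0.0001, "output": 0.0},
}

def api_cost(model: str, input_tokens: int, output_tokens: int = 0) -> float:
    """Dollar cost of one request at the quoted per-1,000-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1000

# A 10,000-token prompt with a 1,000-token reply on the 16K model:
print(round(api_cost("gpt-3.5-turbo-16k", 10_000, 1_000), 4))  # 0.034
```

Even near the top of the new 16K window, a single request costs only a few cents, which helps explain the enterprise appeal.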
