OpenAI’s New Foundry Program Offers LLM Clients Dedicated Processing and Fine-Tune Controls
OpenAI has introduced a new platform for customers who want to run its large language models on dedicated compute capacity reserved for them, as first shared on Twitter by developer Travis Fischer. The generative AI developer has begun pitching early access to the new Foundry program to a handful of potential clients who want to both scale up their LLM use and gain more control over fine-tuning the models.
Foundry LLM
Foundry will provide OpenAI’s clients with a “static allocation” of processing power, referred to as dedicated “compute capacity.” Within that allocation, the client will have access to the same monitoring and analytics tools OpenAI uses, allowing them to track how the model they run is performing. Paying for Foundry will also give the client more flexibility over when model upgrades are applied and over fine-tuning the model on their own data. The leasing package also includes support from OpenAI, with the price varying by the number of compute units and the length of the commitment.
“Foundry is a platform for running OpenAI models on dedicated capacity. It is designed for cutting-edge customers running larger workloads, allowing inference at scale with full control over the model configuration and performance profile,” OpenAI’s product brief for Foundry explains. “Coming soon, OpenAI will offer more robust fine-tuning options for our latest models. Foundry will be the platform for serving those models.”
OpenAI isn’t aiming Foundry at small firms. The service is for scaling up a corporation’s generative AI capacity to match its size. Even the smallest GPT-3.5 tier costs $78,000 for three months or $264,000 for a year. That’s a lot more potential revenue than the $20 a month for ChatGPT Plus available to consumers. Still, companies might see Foundry as a way to get on the leading edge of generative AI if their pockets are deep enough. The price list also mentions text-generating models with a maximum context window of 32,000 tokens. GPT-3.5, the model serving as the basis for ChatGPT and Microsoft’s Bing AI chatbot, has a maximum context window of about 4,000 tokens. The bigger number hints that OpenAI is getting close to releasing GPT-4, or maybe something more.