
Databricks Releases Open-Source LLM DBRX, Claims to Out-Perform Llama 2, Grok-1, and Even GPT-3.5

Data storage and analytics unicorn Databricks has introduced a new open-source large language model (LLM) called DBRX. The company claims DBRX can beat most rivals in the open-source LLM space and even some closed LLMs like OpenAI’s GPT-3.5.

DBRX Debut

DBRX is a 132-billion-parameter mixture-of-experts (MoE) transformer model trained on 12 trillion tokens of text and code. It is released under an open-use license that allows commercial deployment but carries certain restrictions, such as compliance with Databricks’ acceptable use policy – a contrast to the open-source licenses of some competing models. According to benchmarks published by Databricks, DBRX comes out ahead of the other open LLMs Llama 2, Mixtral 8x7B, and Grok-1 on the MMLU language-understanding test, with wider performance gaps on coding (HumanEval) and math (GSM8K) tasks. You can see the comparison above.
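The appeal of a mixture-of-experts design is that only a few of the model's expert subnetworks run for any given token, so total parameter count can far exceed the compute spent per token. Databricks has not published its routing code here, so the following is only a minimal, toy sketch of generic top-k MoE routing (the expert networks are reduced to single linear maps, and all sizes are made up for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(x, gate_w, experts, k=2):
    """Toy top-k mixture-of-experts layer.

    x:       (tokens, d_model) input activations
    gate_w:  (d_model, n_experts) router weights
    experts: list of (d_model, d_model) matrices, one toy "expert" each
    Only the k experts chosen by the router run for each token, which is
    why an MoE model's total parameters can dwarf its per-token compute.
    """
    scores = softmax(x @ gate_w)                  # router probabilities, (tokens, n_experts)
    topk = np.argsort(scores, axis=-1)[:, -k:]    # indices of each token's top-k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        weights = scores[t, topk[t]]
        weights = weights / weights.sum()         # renormalize over the chosen experts
        for w, e in zip(weights, topk[t]):
            out[t] += w * (x[t] @ experts[e])     # weighted mix of expert outputs
    return out

# Tiny demo with arbitrary sizes (not DBRX's real dimensions):
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                       # 3 tokens, d_model=4
gate_w = rng.normal(size=(4, 8))                  # router over 8 experts
experts = [rng.normal(size=(4, 4)) for _ in range(8)]
y = moe_layer(x, gate_w, experts, k=2)            # each token uses 2 of 8 experts
```

In a real MoE transformer each "expert" is a full feed-forward block and the routing is batched on accelerators, but the per-token select-and-mix logic is the same idea.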

“Databricks’ mission is to deliver data intelligence to every enterprise by allowing organizations to understand and use their unique data to build their own AI systems,” Databricks wrote in its announcement of DBRX. “We believe that pushing the boundary of open source models enables generative AI for all enterprises that is customizable and transparent.”

DBRX stems from Databricks’ $1.3 billion purchase of MosaicML, an LLM training and generative AI tooling firm behind the open-source MPT models. Databricks customers can now integrate DBRX into their systems through APIs, use it in GenAI-powered products, or even train their own DBRX-class models using Databricks’ tools and data. The unveiling of DBRX also reflects a broader trend in large language models: after earlier phases of performance gains driven by parameter scaling and massive dataset expansion, attention has shifted to advanced model architectures like MoE.

But open-source and open-access models need not be lesser in quality than those built within closed systems on proprietary models and training. Databricks boasted that DBRX holds its own against OpenAI’s GPT-3.5. Though GPT-3.5 is no longer the leading edge of OpenAI’s LLM portfolio, it is more widely used and easier to access than GPT-4.

“At Databricks, our vision has always been to democratize data and AI. We’re doing that by delivering data intelligence to every enterprise — helping them understand and use their private data to build their own AI systems. DBRX is the result of that aim,” Databricks CEO Ali Ghodsi said. “All in all, DBRX is setting a new standard for open source LLMs — it gives enterprises a platform to build customized reasoning capabilities based on their own data.”
