UAE Launches Falcon 2 Open-Source LLMs
The Technology Innovation Institute (TII) has unveiled the Falcon 2 large language models (LLMs), the second generation of LLMs from the United Arab Emirates’ state research organization. The two LLMs continue the UAE’s effort to encourage interest and investment in its generative AI projects by making its models open source.
Falcon 2 AI
Falcon 2 comes in two variants. Falcon 2 11B is an 11-billion-parameter model trained on 5.5 trillion tokens, while Falcon 2 11B VLM is the same size but is what TII calls a vision-to-language model, with multi-modal capabilities that enable it to understand visual information and describe it in text. Both models are multilingual, supporting languages including English, French, Spanish, German, and Portuguese. TII is part of the UAE’s Advanced Technology Research Council (ATRC). The country has been experimenting with integrating generative AI into its services since last year.
The VLM version is aimed at helping businesses with tasks like document management and archiving, and at assisting people with impaired vision. TII claims Falcon 2 11B beats two of its biggest American competitors in the open-source LLM space: according to the institute’s benchmark tests, it comes out ahead of both Meta’s Llama 3 and Google’s Gemma 7B on some tests.
“AI is continually evolving, and developers are recognizing the myriad benefits of smaller, more efficient models. In addition to reducing computing power requirements and meeting sustainability criteria, these models offer enhanced flexibility, seamlessly integrating into edge AI infrastructure, the next emerging megatrend,” TII AI cross-center unit executive director Hakim Hacid said. “Furthermore, the vision-to-language capabilities of Falcon 2 open new horizons for accessibility in AI, empowering users with transformative image to text interactions.”
The open-source release gives developers broad access to Falcon 2 with only minimal licensing restrictions, which may enhance the models’ appeal despite their relatively small size. The institute plans to expand the lineup with more diversified models in the future, incorporating advanced machine learning techniques such as ‘Mixture of Experts’ (MoE). This method routes each input to a subset of specialized networks that collaborate to boost decision-making and predictive capabilities, improving efficiency and adaptability. The current models are also relatively easy to incorporate into computers and other devices because each needs only a single graphics processing unit (GPU).
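To make the Mixture-of-Experts idea concrete, here is a toy sketch of the routing step: a gate scores the experts, the top-scoring few are selected, and their outputs are combined with renormalized weights. This is purely illustrative; the expert functions and gate logits below are made up and do not reflect TII’s actual implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "experts": simple functions standing in for specialised sub-networks.
EXPERTS = [
    lambda x: 2 * x,    # expert 0
    lambda x: x + 10,   # expert 1
    lambda x: -x,       # expert 2
]

def moe_forward(x, gate_logits, top_k=2):
    """Route input x to the top_k experts chosen by the gate, then
    combine their outputs weighted by renormalised gate scores."""
    scores = softmax(gate_logits)
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]
    norm = sum(scores[i] for i in top)
    return sum((scores[i] / norm) * EXPERTS[i](x) for i in top)
```

Because only `top_k` experts run per input, a MoE model can hold many parameters while spending the compute of a much smaller dense model on each token, which is the efficiency gain TII alludes to.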