China’s search engine pioneer unveils open-source large language model to rival OpenAI

In February, Sogou founder Wang Xiaochuan said on Weibo that “China needs its own OpenAI.” The Chinese entrepreneur is now inching closer to his dream as his nascent startup Baichuan Intelligence rolled out its next-generation large language model Baichuan-13B today.

Baichuan is being touted as one of China’s most promising LLM developers, thanks to its founder’s storied past: Wang was a computer science prodigy at Tsinghua University and went on to found the search engine provider Sogou, which was later acquired by Tencent.

Wang stepped down from Sogou in late 2021. As ChatGPT took the world by storm, the entrepreneur launched Baichuan in April and quickly pocketed $50 million in financing from a group of angel investors.

Like other homegrown Chinese LLMs, Baichuan-13B, a 13-billion-parameter model based on the Transformer architecture (which also undergirds GPT), is trained on both Chinese and English data. (Parameters are the variables a model uses to analyze and generate text.) The model is open source and optimized for commercial applications, according to its GitHub page.

Baichuan-13B is trained on 1.4 trillion tokens. In comparison, Meta’s LLaMA used 1 trillion tokens for its 13-billion-parameter model. Wang previously said in an interview that his startup was on track to release a large-scale model comparable to OpenAI’s GPT-3.5 by the end of this year.

Founded only three months ago, Baichuan has moved at a notable pace. By the end of April, the team had grown to 50 people, and in June it rolled out its first LLM, the 7-billion-parameter pre-training model Baichuan-7B.

Now, the foundational model Baichuan-13B is available for free to academics, and to developers who have received official approval to use it for commercial purposes. Importantly, in the age of U.S. AI chip sanctions on China, the model offers variations that can run on consumer-grade hardware, including Nvidia’s 3090 graphics cards.
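
In practice, running such a variation on a single consumer GPU with 24 GB of memory means loading the weights in reduced precision. The sketch below shows how a developer might do this with the Hugging Face Transformers library; the repository name baichuan-inc/Baichuan-13B-Chat, the 8-bit setting, and the sample prompt are assumptions for illustration, not the project’s official instructions.

```python
# Minimal sketch: loading a 13B-parameter open-source model in 8-bit precision
# so it fits on a single consumer GPU such as an RTX 3090 (24 GB).
# Assumptions: the Hugging Face repo name "baichuan-inc/Baichuan-13B-Chat",
# and that the transformers, accelerate, and bitsandbytes packages are installed.
# Check the project's GitHub page for the officially supported loading code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "baichuan-inc/Baichuan-13B-Chat"  # assumed repo name for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place layers on the available GPU(s)
    load_in_8bit=True,       # 8-bit quantization via bitsandbytes to fit in 24 GB
    trust_remote_code=True,  # the model ships its own modeling code
)

prompt = "用一句话介绍一下大语言模型。"  # "Describe large language models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```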

Other Chinese firms that have invested heavily in large language models include the search engine giant Baidu; Zhipu.ai, a Tsinghua University spinoff led by Professor Tang Jie; and the research institute IDEA, led by Harry Shum, who co-founded Microsoft Research Asia.

China’s large language models are emerging rapidly just as the country prepares to implement some of the world’s most stringent AI regulations. As the Financial Times reported, China is expected to draw up rules for generative AI with a particular focus on content, signaling tighter control than the rules introduced in April. Companies may also need to obtain a license before launching large language models, which could slow China’s efforts to compete with the U.S. in the nascent industry.
