Foxconn unveils first large language model
Published by Global Banking & Finance Review®
Posted on March 10, 2025
2 min read · Last updated: January 24, 2026
Foxconn unveils FoxBrain, its first large language model, to enhance manufacturing and supply chain management, leveraging Nvidia's GPUs and Meta's architecture.
TAIPEI (Reuters) - Taiwan’s Foxconn said on Monday it has launched its first large language model and plans to use the technology to improve manufacturing and supply chain management.
The model, named “FoxBrain,” was trained using 120 of Nvidia’s H100 GPUs and completed in about four weeks, the world's largest contract electronics manufacturer said in a statement.
The company, which assembles iPhones for Apple and also produces Nvidia's artificial intelligence servers, said the model is based on Meta’s Llama 3.1 architecture.
It is Taiwan's first large language model with reasoning capabilities that is optimised for traditional Chinese and Taiwanese language styles, it said.
Foxconn said that although there is a slight performance gap compared with the distillation model from China's DeepSeek, FoxBrain's overall performance is very close to world-class standards.
Initially designed for internal applications, FoxBrain covers data analysis, decision support, document collaboration, mathematics, reasoning and problem-solving, and code generation.
Foxconn said it plans to collaborate with technology partners to expand the model’s applications, share its open-source information, and promote AI in manufacturing, supply chain management, and intelligent decision-making.
Nvidia provided support through its Taiwan-based supercomputer “Taipei-1” and offered technical consulting during the model’s training, Foxconn said.
Taipei-1, the largest supercomputer in Taiwan, is owned and operated by Nvidia in Kaohsiung, a southern city on the island.
Foxconn will announce further details about the model during Nvidia’s GTC developer conference in mid-March.
(Reporting by Wen-Yee Lee; Editing by Shri Navaratnam)