Introducing Hex-1: A Fully Open-Source LLM for Indic Languages

May 6, 2025 | By Bud Ecosystem

India is one of the most linguistically diverse nations in the world, yet it faces a major roadblock in harnessing the full potential of Generative AI. With only about 10% of the population fluent in English, the remaining 90% are effectively left behind, unable to engage with GenAI tools that are built predominantly for English-speaking users.

Most leading language models today are trained primarily on English data and offer little to no support for Indian languages. As a result, the depth and richness of India’s linguistic and cultural heritage are being overlooked by the global AI wave, leaving over a billion people underserved and underrepresented. To address this gap, we need language models that are:

  • Proficient in Indic languages
  • Open source, so that researchers, developers, and the public can access them freely
  • Commercially licensed, so that businesses can build applications, tools, and services without restrictive usage terms

Hex-1: An Indic LLM Built for India

Hex-1 is a 4-billion-parameter language model optimized specifically for Indian languages. It is designed to bridge India’s linguistic AI gap by enabling developers to build intelligent systems that understand and respond in native Indian languages. Its first release supports five major Indian languages: Hindi, Kannada, Telugu, Tamil, and Malayalam. Future versions are set to expand support to more languages, broadening the model’s usability across the Indian subcontinent.

When benchmarked against models such as Gemma-2-2B, Llama-3.2-3B, Llama-3.1-8B, and Sarvam-1, Hex-1 delivers the highest MMLU scores in four of its five supported languages, trailing Sarvam-1 only marginally in Malayalam (see the Appendix). This makes it one of the most capable models currently available for Indic language tasks.

Hex-1 is released under an open-source license that includes commercial usage rights, a rare and valuable combination. Anyone, from independent developers to startups to large enterprises, can freely use the model to build products tailored to the Indian market.
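
Assuming the weights are distributed in the standard Hugging Face format (an assumption on our part; the post does not specify the distribution channel, and the repository ID below is a placeholder), getting a first response out of Hex-1 could look something like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository ID -- the post does not give the actual checkpoint
# location, so substitute the real one from Bud Ecosystem's release page.
MODEL_ID = "budecosystem/hex-1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Ask a question in Hindi ("How many official languages does India have?").
prompt = "भारत में कितनी आधिकारिक भाषाएँ हैं?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding (do_sample=False) keeps the sketch deterministic; in practice you would tune the generation parameters for your use case.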

The Vision Behind Hex

Hex-1 is just the beginning. It is the first model in the Hex series of LLMs dedicated to Indic languages. The name Hex draws inspiration from the hexagram, a six-pointed geometric figure that symbolizes cultural symmetry and unity in diversity, aptly capturing the essence of India’s multilingual identity. By open-sourcing Hex-1, we aim to empower a new generation of AI models rooted in India’s linguistic and cultural realities, helping ensure that the GenAI revolution truly reaches every corner of the country.

Appendix

Performance of Hex-1 Across Indic Languages and Evaluation Benchmarks

Language     Hellaswag   ARC-c   ARC-e   MMLU    BoolQ
Hindi        47.85       36.68   52.14   46.73   57.61
Tamil        49.45       38.65   53.45   44.71   45.87
Telugu       50.84       37.96   53.36   46.85   51.89
Kannada      52.16       38.31   53.11   46.38   52.32
Malayalam    46.32       29.60   40.86   43.63   46.69

Performance comparison of Hex-1 with other models on the MMLU benchmark

Benchmark   Gemma-2-2B   Llama-3.2-3B   Llama-3.1-8B   Sarvam-1   Hex-1
mmlu_hi     32.35        37.44          44.58          45.58      46.73
mmlu_ta     30.82        32.14          37.50          43.79      44.71
mmlu_te     29.20        33.15          37.43          44.43      46.85
mmlu_kn     29.29        32.90          37.22          44.50      46.38
mmlu_ml     30.71        33.04          38.60          44.25      43.63

Performance comparison of Hex-1 with other models on the ARC-C benchmark

Benchmark   Gemma-2-2B   Llama-3.2-3B   Llama-3.1-8B   Sarvam-1   Hex-1
arcc_hi     37.57        49.13          56.17          60.00      36.68
arcc_ta     32.78        34.70          44.78          57.04      38.65
arcc_te     30.00        34.09          43.04          59.39      37.96
arcc_kn     29.22        36.43          44.70          57.04      38.31
arcc_ml     29.91        33.22          46.78          58.96      29.60
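
Multiple-choice benchmarks such as MMLU and ARC-C are typically scored by computing the log-likelihood the model assigns to each answer option and picking the highest. The post does not state which harness or prompt templates produced the tables above, so the following is only a minimal sketch of that scoring scheme, again with a placeholder model ID:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model ID -- substitute the published Hex-1 checkpoint.
MODEL_ID = "budecosystem/hex-1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

def choice_log_likelihood(prompt: str, choice: str) -> float:
    """Total log-probability the model assigns to `choice` given `prompt`.

    Note: tokenizing prompt and prompt+choice separately can misalign at the
    boundary for some tokenizers; real harnesses handle this more carefully.
    """
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position t predicts token t+1, so score only the choice tokens.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    return sum(
        log_probs[pos, full_ids[0, pos + 1]].item()
        for pos in range(prompt_len - 1, full_ids.shape[1] - 1)
    )

def answer_mcq(question: str, choices: list[str]) -> int:
    """Return the index of the highest-likelihood choice (MMLU-style scoring)."""
    prompt = f"{question}\n"
    return max(range(len(choices)),
               key=lambda i: choice_log_likelihood(prompt, choices[i]))
```

A full harness such as lm-evaluation-harness additionally handles prompt formatting, few-shot examples, and length normalization, so scores from this sketch will not exactly reproduce the numbers reported above.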