Our latest fine-tuned Turkish LLM: Fikri

We are excited to announce the release of our latest model for Turkish language tasks: Fikri, an 8B-parameter instruction-tuned model. Our ongoing mission has been to fine-tune and further pre-train state-of-the-art (SOTA) open-source large language models (LLMs) for specific Turkish language tasks, and Fikri is our latest achievement in this endeavor.

Why We Developed Fikri

SOTA LLMs often reflect a narrow worldview, heavily biased toward English-language content, which constitutes more than 90% of typical training datasets. We believe this underrepresents our language and its cultural nuances. We feel a profound responsibility to develop improved models, benchmarking tools, and development and debugging utilities that accurately reflect local languages and contexts.

Our motivation stems from the needs of our clients, who require language-specific, context-aware solutions for a variety of downstream tasks. Whether these tasks run on private networks or in cloud environments, our models are designed to be both efficient and effective. We also understand the critical importance of data privacy: companies often have stringent requirements about where their data can be processed, especially when handling sensitive information, and running language models on private networks or dedicated cloud environments keeps confidential data secure. Fikri is designed with these concerns in mind, giving organizations the confidence to deploy state-of-the-art Turkish language processing without compromising data security.

Training and Capabilities

Fikri is a highly steerable, instruction-tuned model, created by fine-tuning the base Llama 3.1 model. We first continued pre-training the base model on a high-quality Turkish dataset (~1B tokens), meticulously cleaned and pre-processed to remove non-Turkish content. This was followed by fine-tuning on a set of 200,000 instructions to imbue Fikri with strong instruction-following capabilities.
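For readers curious what the second, instruction-tuning stage might look like in practice, here is a minimal, hypothetical sketch using Hugging Face's trl library. This is not our actual training code: the dataset file, model ID, and hyperparameters are placeholders, and the SFTTrainer API varies somewhat across trl versions.

```python
# Hypothetical sketch of supervised instruction fine-tuning with trl.
# Not our production pipeline; paths, IDs, and settings are illustrative.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# A JSONL file of instruction/response pairs in chat format, e.g.
# {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}
dataset = load_dataset("json", data_files="instructions.jsonl", split="train")

trainer = SFTTrainer(
    # In practice this would be the checkpoint produced by the continued
    # pre-training stage, not the raw base model.
    model="meta-llama/Llama-3.1-8B",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="fikri-sft",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
)
trainer.train()
```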

Here is a quick demonstration of Fikri's language and context-awareness capabilities:

User: Merhaba, kimsin sen? (Hello, who are you?)
Assistant: Merhaba, ben Dede Korkut'un torunlarından biriyim. Benim adım Kara Murat. Sen kimsin? (Hello, I am one of Dede Korkut's grandchildren. My name is Kara Murat. And who are you?)

Try Fikri Today

Fikri is now available to the community for experimentation and further development. You can find the model on Hugging Face: Fikri 3.1 8B.
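As a starting point, here is a minimal usage sketch with the transformers library. The repository ID below is a placeholder; substitute the actual ID from the Fikri 3.1 8B model page.

```python
# Minimal usage sketch with Hugging Face transformers.
# MODEL_ID is a placeholder; use the actual repo ID from the model page.
from transformers import pipeline

MODEL_ID = "your-org/Fikri-3.1-8B"  # hypothetical repository ID

generator = pipeline(
    "text-generation",
    model=MODEL_ID,
    device_map="auto",   # requires accelerate; places weights across devices
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Merhaba, kimsin sen?"}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # assistant's reply
```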

Additionally, we are releasing a quantized version of Fikri. This reduced-size model is optimized for fast execution on local machines, making it accessible for a range of applications. It can be run with tools such as llama.cpp or Ollama, as sketched below. Find the quantized version here: Fikri 3.1 8B Instruct Q4_K_M on Hugging Face.
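For example, with the llama-cpp-python bindings for llama.cpp, local inference on the quantized build might look like the following sketch. The file name is illustrative; download the actual Q4_K_M .gguf file from the model page first.

```python
# Sketch of local inference on the quantized GGUF build via llama-cpp-python.
# The model path is illustrative; point it at the downloaded Q4_K_M file.
from llama_cpp import Llama

llm = Llama(
    model_path="./fikri-3.1-8b-instruct-q4_k_m.gguf",  # hypothetical file name
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available; 0 = CPU only
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Merhaba, kimsin sen?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```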

Looking Forward

We invite you to use Fikri 3.1 8B and join us in exploring and understanding its full potential. Your feedback will be invaluable for further development and refinement.

We are committed to experimenting, developing, and expanding the capabilities of models for low-resource languages, contexts, and domains. We are continuously fine-tuning our models, with our local training rigs running around the clock. Our goal is to open-source as many of our models as possible, enabling the broader community to move forward collectively.

Osman Orhan

Software Developer
