Open Source

Models we built.
Open on Hugging Face.

We build models for paying customers, then we release them. 23+ models, 80K+ monthly downloads. Maintained, not abandoned.

What people are actually downloading.

  • 23+

    Open models on Hugging Face
  • 2.7M

    Lifetime model downloads
  • 8+

    Languages covered

// Models

Grouped by what they do.

Each one started as work for a paying customer. We released the ones that turned out to be useful to other teams. The popular ones we keep updating.

Financial Sentiment

8 models

Language-specific models for detecting sentiment in financial news and market commentary.

  • finance-sentiment-ja-base

    ↓ 1.9M
  • finance-sentiment-zh-base

    ↓ 49K
  • finance-sentiment-fr-base

    ↓ 35K
View all 8 models

Polish Whisper

5 models

Whisper ASR models fine-tuned for Polish - winners of Hugging Face's Whisper Fine-Tuning Sprint.

  • whisper-large-v2-pl-v2

    ↓ 14K
  • whisper-small-pl

    ↓ 8.5K
  • whisper-medium-pl

    ↓ 1.9K
View all 5 models

Jaskier 7B LLM

5 models

A top-ranked ≤7B open LLM family fine-tuned with Direct Preference Optimization.

  • jaskier-7b-dpo

    ↓ 30K
  • jaskier-7b-dpo-v5.6

    ↓ 25K
  • jaskier-7b-dpo-v6.1

    ↓ 17K
View all 5 models
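For readers unfamiliar with DPO: it trains directly on preference pairs, nudging the policy to rank a chosen answer above a rejected one relative to a frozen reference model. A schematic of the objective only - this is not Jaskier's training code, and the log-probabilities below are made-up scalars:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability of a response under
    the policy being trained (logp_*) or the frozen reference (ref_logp_*).
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # -log(sigmoid(beta * margin)): small when the policy prefers the
    # chosen answer more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When policy and reference agree exactly, the margin is 0 and the
# loss sits at -log(0.5):
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # → 0.6931
```

The `beta` knob controls how hard the policy is pushed away from the reference; the family's versioned checkpoints (v5.6, v6.1) came from repeating this preference-tuning step on different preference datasets.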

// Featured project

Jaxpot - JAX-native self-play RL.

Self-play RL training in JAX. We built it to train an agent for Dark Hex (a hidden-information variant of Hex). The pieces - AlphaZero-style policy/value, MCTS, vectorized environments, Hydra configs - work for any two-player adversarial game.

  • AlphaZero-style training loop
  • Vectorized JAX environments
  • MCTS + neural policies
  • Snapshot leagues & Elo ratings
  • Hydra-configured experiments
  • Tic-tac-toe → Dark Hex
Jaxpot - JAX-based self-play RL training lab for hidden-information games
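The "snapshot leagues & Elo" item works the way chess ratings do: each training snapshot plays earlier snapshots, and both ratings move by the standard Elo update rule. A minimal sketch of that rule - the constants and function names here are ours, not Jaxpot's:

```python
def expected_score(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, score_a, k=32.0):
    """Return updated (rating_a, rating_b) after one game.

    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
    The update is zero-sum: whatever A gains, B loses.
    """
    e_a = expected_score(rating_a, rating_b)
    delta = k * (score_a - e_a)
    return rating_a + delta, rating_b - delta

# A new snapshot (1500) upsets the current league leader (1600),
# so it gains more than it would against an equal opponent:
new_a, new_b = elo_update(1500.0, 1600.0, 1.0)
```

Ranking snapshots this way gives a single scalar curve of agent strength over training, which is easier to read than raw win rates against a moving pool of opponents.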

Recent releases

Our latest open-source releases.

View all releases
  • Mar 16, 2026

    eu-pii-anonimization-multilang

    Multilingual PII detection across all official EU languages. Token-level redaction.

    Token Classification
    427
  • Feb 29, 2024

    jaskier-7b-dpo-v6-GGUF

    Quantized GGUF build of Jaskier 7B v6 for local inference with llama.cpp.

    Text Generation
    225
  • Feb 20, 2024

    jaskier-7b-dpo-v6.1

    Truthy DPO fine-tune of Jaskier 7B with improved factuality and instruction-following.

    Text Generation
    17K
  • Feb 16, 2024

    jaskier-7b-dpo-v5.6

    Math-focused DPO checkpoint of Jaskier 7B with stronger reasoning on numeric tasks.

    Text Generation
    25K

Why we release this stuff.

  • Survived a paying customer

    Each model went through a real engagement first. The eval suite, the edge cases, and the latency target were set by the customer - not by us.

  • Maintained, not abandoned

    The popular ones get updates. Bug reports get answered. The models we don't maintain are tagged that way so you know which are which.

  • Releasing pays off

    80K+ monthly downloads means people who use our work end up hiring us. Open is the cheapest credibility we know how to build.