Polish Whisper

At bards.ai, we fine-tuned OpenAI's Whisper model specifically for the Polish language as part of Hugging Face's global Whisper Fine-Tuning Sprint.

Our model took first place in the Polish category, standing out for its exceptional transcription accuracy and robustness across diverse Polish audio sources.

We focused on minimizing word error rate while preserving the natural flow of spoken Polish, ensuring the model performs well across interviews, podcasts, casual speech, and formal recordings. The result is a state-of-the-art Polish ASR model ready for real-world use in transcription, accessibility, and voice-driven applications.

Performance Metrics

Metric | Value
------ | -----
Loss   | 0.3684
WER    | 7.2802%
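For readers unfamiliar with the metric, word error rate (WER) is the word-level Levenshtein distance between a reference transcript and the model's hypothesis, divided by the number of reference words. A minimal self-contained sketch of the computation (not the exact evaluation script used in the sprint, which typically relies on a library such as `jiwer`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost,  # substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)


# One substituted word out of three reference words -> WER of 1/3.
print(wer("ala ma kota", "ala ma psa"))
```

A WER of 7.28% therefore means roughly 7 word-level errors (substitutions, insertions, or deletions) per 100 reference words.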

Training Parameters

Parameter | Value
--------- | -----
Learning rate | 1e-05
Train batch size | 8
Eval batch size | 4
Seed | 42
Gradient accumulation steps | 8
Total train batch size | 64
Optimizer | Adam (betas=(0.9, 0.999), epsilon=1e-08)
LR scheduler type | linear
LR scheduler warmup steps | 500
Training steps | 2100
Mixed precision training | Native AMP
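These hyperparameters map naturally onto Hugging Face's `Seq2SeqTrainingArguments` fields. A sketch of the configuration as a plain dictionary (field names are an assumption about the training script, which was not published here), including how the "total train batch size" of 64 follows from the other values:

```python
# Hyperparameters from the table above, expressed as a plain dict.
# Field names mirror Hugging Face's Seq2SeqTrainingArguments, but treat
# this as an illustrative sketch, not the exact training script.
training_config = {
    "learning_rate": 1e-5,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 4,
    "seed": 42,
    "gradient_accumulation_steps": 8,
    "lr_scheduler_type": "linear",
    "warmup_steps": 500,
    "max_steps": 2100,
    "fp16": True,  # "Native AMP" mixed precision
}

# The total train batch size of 64 is the per-device batch size
# multiplied by the gradient accumulation steps (assuming one device):
effective_batch_size = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(effective_batch_size)  # → 64
```

Gradient accumulation lets the optimizer see an effective batch of 64 samples while only 8 fit on the device at once, which is a common setup when fine-tuning Whisper on a single GPU.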
