# whisper-base-french-lora

License: apache-2.0 · Author: qfuxa · Audio model · 15 downloads
## Quick Summary

A LoRA adapter for OpenAI's Whisper `base` model, fine-tuned for French speech recognition.

## Code Examples

### LoRA Configuration

```python
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
)
```
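As a rough sanity check on what this configuration adds, the trainable-parameter count can be estimated from the adapter rank and the attention dimensions. The figures below assume standard Whisper `base` dimensions (d_model = 512, 6 encoder and 6 decoder layers, with cross-attention in the decoder); they are not stated in this model card, so treat the result as an estimate.

```python
# Estimate the trainable parameters added by LoRA with r=16 on
# q/k/v/out projections, assuming Whisper base dimensions (d_model=512,
# 6 encoder layers with self-attention, 6 decoder layers with self- and
# cross-attention). These dimensions are assumptions, not from the card.
r = 16
d_model = 512
encoder_attn_blocks = 6 * 1      # one self-attention block per encoder layer
decoder_attn_blocks = 6 * 2      # self- plus cross-attention per decoder layer
modules_per_block = 4            # q_proj, k_proj, v_proj, out_proj
n_modules = (encoder_attn_blocks + decoder_attn_blocks) * modules_per_block  # 72
params_per_module = r * d_model + d_model * r  # A (r x d) plus B (d x r)
total = n_modules * params_per_module
print(total)  # 1179648 -> roughly 1.2M trainable parameters
```

That is about 1.6% of the ~72M parameters in Whisper base, which is the usual appeal of LoRA fine-tuning.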
### Usage

```bash
pip install whisperlivekit

# Start the server with French LoRA (auto-downloads from Hugging Face)
wlk --model base --language fr --lora-path qfuxa/whisper-base-french-lora
```
### With Transformers + PEFT

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Load the base model and processor
base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
processor = WhisperProcessor.from_pretrained("openai/whisper-base", language="fr", task="transcribe")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "QuentinFuxa/whisper-base-french-lora")
model = model.merge_and_unload()  # Optional: merge weights for faster inference

# Transcribe (audio_array: 1-D mono audio sampled at 16 kHz)
audio = processor.feature_extractor(audio_array, sampling_rate=16000, return_tensors="pt")
generated_ids = model.generate(audio.input_features, language="fr", task="transcribe")
transcription = processor.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
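The snippet above assumes `audio_array` already exists: a 1-D float array of mono audio sampled at 16 kHz, which is what Whisper's feature extractor expects. A minimal sketch of producing one (the file path and the use of librosa are illustrative, not from this card):

```python
import numpy as np

# From a file, librosa can resample to 16 kHz on load:
#   import librosa
#   audio_array, _ = librosa.load("speech.wav", sr=16000, mono=True)
#
# For a quick smoke test without a file, a synthetic one-second
# 440 Hz tone has the right shape and dtype:
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
audio_array = (0.1 * np.sin(2 * np.pi * 440.0 * t)).astype(np.float32)
print(audio_array.shape)  # (16000,)
```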
### With Native Whisper (WhisperLiveKit Backend)

```python
from whisperlivekit.whisper import load_model

# Load Whisper base with the French LoRA adapter
model = load_model(
    "base",
    lora_path="path/to/whisper-base-french-lora",
)

# Transcribe
result = model.transcribe(audio, language="fr")
```
### Citation

```bibtex
@misc{whisper-base-french-lora,
  author = {Quentin Fuxa},
  title = {Whisper Base French LoRA},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/QuentinFuxa/whisper-base-french-lora}
}

@misc{whisperlivekit,
  author = {Quentin Fuxa},
  title = {WhisperLiveKit: Ultra-low-latency speech-to-text},
  year = {2025},
  publisher = {GitHub},
  url = {https://github.com/QuentinFuxa/WhisperLiveKit}
}
```
