akhaliq
frame-interpolation-film-style
gemma3-270m-it-gradio-lora
This model is a fine-tuned version of google/gemma-3-270m-it. It has been trained using TRL.
- PEFT: 0.17.1
- TRL: 0.21.0
- Transformers: 4.55.4
- PyTorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
gemma-3-270m-gradio-coder-adapter
This model is a fine-tuned version of google/gemma-3-270m-it. It has been trained using TRL.
- PEFT: 0.17.1
- TRL: 0.21.0
- Transformers: 4.55.4
- PyTorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
MyGemmaGradioCoder
GemmaGradio
This model is a fine-tuned version of google/gemma-3-270m-it. It has been trained using TRL.
- TRL: 0.21.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
lama
Sora 2
Sora 2 inference provider integration; see docs: https://huggingface.co/docs/inference-providers/en/index
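The Sora 2 Space above goes through the Hugging Face Inference Providers API linked in its description. A minimal sketch using huggingface_hub's InferenceClient; the model id below is an assumption based on the Space name, and an HF_TOKEN with provider access is required to actually run it:

```python
# Hedged sketch: text-to-video through Hugging Face Inference Providers.
# "openai/sora-2" is an assumed model id for illustration only.
import os
from huggingface_hub import InferenceClient

def generate_video(prompt: str, model_id: str = "openai/sora-2") -> bytes:
    """Return raw video bytes (e.g. MP4) for a text prompt."""
    client = InferenceClient(api_key=os.environ.get("HF_TOKEN"))
    return client.text_to_video(prompt, model=model_id)
```

The same call shape applies to the Veo Spaces below; only the model id changes, since Inference Providers route each model to a serving partner behind one client interface.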
AnimeGANv2-ONNX
Sora 2 Image To Video
Sora 2 image-to-video inference provider integration; see docs: https://huggingface.co/docs/inference-providers/en/index
Veo3.1 Fast
Veo 3.1 inference provider integration; see docs: https://huggingface.co/docs/inference-providers/en/index
Veo3.1 Fast Image To Video
Veo 3.1 image-to-video inference provider integration; see docs: https://huggingface.co/docs/inference-providers/en/index
GPEN-BFR-512
ArcaneGANv0.4
AnimeGANv2-pytorch
RetinaFace-R50
GPT-5
GPT-5 inference provider integration; see docs: https://huggingface.co/docs/inference-providers/en/index
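Unlike the video Spaces, the GPT-5 Space maps to a chat-completion call through the same Inference Providers API. A hedged sketch with huggingface_hub's InferenceClient; the model id is an assumption for illustration:

```python
# Hedged sketch: chat completion through Hugging Face Inference Providers.
# "openai/gpt-5" is an assumed model id; HF_TOKEN is needed to actually call it.
import os
from huggingface_hub import InferenceClient

def ask(prompt: str, model_id: str = "openai/gpt-5") -> str:
    client = InferenceClient(api_key=os.environ.get("HF_TOKEN"))
    # chat_completion uses the OpenAI-style messages schema.
    out = client.chat_completion(
        messages=[{"role": "user", "content": prompt}],
        model=model_id,
    )
    return out.choices[0].message.content
```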