merve
license-plate-detr-dinov3
rtdetr_v2_r50vd-mobile-ui-design
sam2-hiera-base-plus
sam2-hiera-large
yolos-small-license-plates
flux-lego-lora-dreambooth
SmolVLM2-2.2B-DocVQA
sam2-hiera-small
my_awesome_food_model
beans-vit-224
lego-sdxl-dora
lego-sdxl-dora-3
kosmos-2.5-ft
Kosmos-2.5 fine-tuned for grounded OCR (OCR with bounding boxes); find the script here: (GH, HF)
lego_LoRA
lego-lora
sam2-hiera-tiny
chatgpt-prompt-generator-v12
PaddleOCR-VL-1.5-hf
PaddleOCR-VL-hf
Isaac-0.1
lego-dreambooth-sdxl
chatgpt-prompts-bart-long
emoji-dreambooth-trained-xl
vit-mobilenet-beans-224
gemma-7b-8bit
vq-vae
trained-flux-lora-lego
resnet-mobilenet-beans-5
paligemma_vqav2
sam-finetuned
Mistral-7B-Instruct-v0.2
SmolVLM2-500M-Video-Instruct-video-feedback
paligemma2-3b-vqav2
blip2-flan-t5-xxl
peft-copy-test
detr-resnet-50-onnx
VeCLIP-b16-100m
SmolVLM2-500M-Video-Instruct-videofeedback
chatgpt-prompts-bart
pokemon-classifier
dreambooth_bioshock
orb_diffusiondb_controlnet
turkish-rte
musicgen-small
gemma-7b-it-8bit
VeCLIP-b16-3m
colpali_ufo
This model is a fine-tuned version of vidore/colpali-v1.2-hf on an unknown dataset.

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1

Framework versions:
- PEFT 0.11.1
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 2.21.0
- Tokenizers 0.21.0
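As a rough illustration, the hyperparameters listed in this card map onto `transformers.TrainingArguments`-style keyword arguments roughly as sketched below (a hypothetical reconstruction — the actual training script is not part of this card). It also shows how the total train batch size of 16 follows from the per-device batch size and the gradient accumulation steps:

```python
# Hypothetical reconstruction of the training configuration from the card
# above, expressed as keyword arguments one might pass to
# transformers.TrainingArguments. Parameter names follow the usual
# transformers conventions; this is a sketch, not the original script.
training_kwargs = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "optim": "adamw_torch",
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "warmup_steps": 100,
    "num_train_epochs": 1,
}

# The total (effective) train batch size is the per-device batch size
# multiplied by the gradient accumulation steps (assuming one device):
total_train_batch_size = (
    training_kwargs["per_device_train_batch_size"]
    * training_kwargs["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 16
```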
smol-vision
Smol Vision 🐣

Recipes for shrinking, optimizing, and customizing cutting-edge vision and multimodal AI models. The original GH repository is here; migrated to Hugging Face since notebooks there aren't rendered 🥲

Latest examples 👇🏻
- Fine-tune ColPali for Multimodal RAG
- Fine-tune Gemma-3n for all modalities (audio-text-image)
- Any-to-Any (Video) RAG with OmniEmbed and Qwen

Note: The script and notebook are updated to fix a few issues related to QLoRA!

| Category | Notebook | Description |
|---|---|---|
| Quantization/ONNX | Faster and Smaller Zero-shot Object Detection with Optimum | Quantize the state-of-the-art zero-shot object detection model OWLv2 using Optimum ONNXRuntime tools. |
| VLM Fine-tuning | Fine-tune PaliGemma | Fine-tune the state-of-the-art vision language backbone PaliGemma using transformers. |
| Intro to Optimum/ORT | Optimizing DETR with 🤗 Optimum | A soft introduction to exporting vision models to ONNX and quantizing them. |
| Model Shrinking | Knowledge Distillation for Computer Vision | Knowledge distillation for image classification. |
| Quantization | Fit in vision models using Quanto | Fit vision models into smaller hardware using quanto. |
| Speed-up | Faster foundation models with torch.compile | Improve latency for foundation models using `torch.compile`. |
| VLM Fine-tuning | Fine-tune Florence-2 | Fine-tune Florence-2 on the DocVQA dataset. |
| VLM Fine-tuning | QLoRA/Fine-tune IDEFICS3 or SmolVLM on VQAv2 | QLoRA/full fine-tune IDEFICS3 or SmolVLM on the VQAv2 dataset. |
| VLM Fine-tuning (Script) | QLoRA Fine-tune IDEFICS3 on VQAv2 | QLoRA/full fine-tune IDEFICS3 or SmolVLM on the VQAv2 dataset. |
| Multimodal RAG | Multimodal RAG using ColPali and Qwen2-VL | Learn to retrieve documents and build a RAG pipeline without hefty document processing, using ColPali through Byaldi and generating with Qwen2-VL. |
| Multimodal Retriever Fine-tuning | Fine-tune ColPali for Multimodal RAG | Learn to apply contrastive fine-tuning on ColPali to customize it for your own multimodal document RAG use case. |
| VLM Fine-tuning | Fine-tune Gemma-3n for all modalities (audio-text-image) | Fine-tune the Gemma-3n model to handle any modality: audio, text, and image. |
| Multimodal RAG | Any-to-Any (Video) RAG with OmniEmbed and Qwen | Do retrieval and generation across modalities (including video) using OmniEmbed and Qwen. |
| Speed-up/Memory Optimization | Vision language model serving using TGI (SOON) | Explore speed-ups and memory improvements for vision-language model serving with text-generation-inference. |
| Quantization/Optimum/ORT | All levels of quantization and graph optimizations for Image Segmentation using Optimum (SOON) | End-to-end model optimization using Optimum. |
yolov9
idefics3llama-vqav2
gemma-3n-finevideo
This model is a fine-tuned version of google/gemma-3n-E2B-it. It has been trained using TRL.

Framework versions:
- TRL: 0.19.1
- Transformers: 4.53.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2