hustvl

34 models

vitmatte-small-composition-1k

ViTMatte model trained on Composition-1k. It was introduced in the paper ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers by Yao et al. and first released in this repository. Disclaimer: the team releasing ViTMatte did not write a model card for this model, so this model card has been written by the Hugging Face team.

ViTMatte is a simple approach to image matting, the task of accurately estimating the foreground object in an image. The model consists of a Vision Transformer (ViT) with a lightweight head on top. (Figure: ViTMatte high-level overview, taken from the original paper.)

You can use the raw model for image matting. See the model hub to look for other fine-tuned versions that may interest you.
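A minimal usage sketch with the Transformers library. ViTMatte expects both an RGB image and a trimap marking known foreground, known background, and unknown regions; the local file paths below are placeholders:

```python
import torch
from PIL import Image
from transformers import VitMatteImageProcessor, VitMatteForImageMatting

processor = VitMatteImageProcessor.from_pretrained("hustvl/vitmatte-small-composition-1k")
model = VitMatteForImageMatting.from_pretrained("hustvl/vitmatte-small-composition-1k")

# Placeholder paths: supply your own image and its trimap.
image = Image.open("image.png").convert("RGB")
trimap = Image.open("trimap.png").convert("L")

inputs = processor(images=image, trimaps=trimap, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

alphas = outputs.alphas  # predicted alpha matte, shape (batch, 1, height, width)
```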

4,119,715
46

yolos-small

Tags: object-detection, vision. Dataset: COCO.

license:apache-2.0
780,829
70

yolos-tiny

YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Fang et al. and first released in this repository. Disclaimer: the team releasing YOLOS did not write a model card for this model, so this model card has been written by the Hugging Face team.

YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO 2017 validation, similar to DETR and to more complex frameworks such as Faster R-CNN.

The model is trained using a "bipartite matching loss": the predicted classes and bounding boxes of each of the N = 100 object queries are compared to the ground-truth annotations, padded up to the same length N (so if an image contains only 4 objects, the remaining 96 annotations get "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU losses (for the bounding boxes) are used to optimize the parameters of the model.

You can use the raw model for object detection. See the model hub to look for all available YOLOS models. Currently, both the feature extractor and model support PyTorch.

The YOLOS model was pre-trained on ImageNet-1k and fine-tuned on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively. The model was pre-trained for 300 epochs on ImageNet-1k and fine-tuned for 300 epochs on COCO. This model achieves an AP (average precision) of 28.7 on COCO 2017 validation. For more details regarding evaluation results, we refer to the original paper.
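A short detection sketch with the Transformers library, following standard YOLOS usage (the image URL is the usual COCO sample from the documentation):

```python
import requests
import torch
from PIL import Image
from transformers import YolosForObjectDetection, YolosImageProcessor

processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn the 100 raw query predictions into (score, label, box) triples,
# keeping only detections above a 0.9 confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```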

license:apache-2.0
143,859
271

vitmatte-base-composition-1k

license:apache-2.0
9,841
11

yolos-base

license:apache-2.0
2,367
26

vitmatte-small-distinctions-646

license:apache-2.0
2,263
1

InfiniteVL

license:apache-2.0
2,122
3

yolos-small-300

license:apache-2.0
350
6

MaTVLM_0_25_Mamba2

license:mit
114
1

DiffusionVL-Qwen2.5VL-7B

license:apache-2.0
107
6

vitmatte-base-distinctions-646

license:apache-2.0
87
4

DiffusionVL-Qwen2.5VL-3B

license:apache-2.0
56
4

yolos-small-dwr

license:apache-2.0
45
4

InfiniteVL-LongSFT

license:apache-2.0
35
2

DiffusionVL-Qwen2.5-7B

license:apache-2.0
17
1

mmMamba-linear

Introduction

We propose mmMamba, the first decoder-only multimodal state space model achieved through quadratic-to-linear distillation using moderate academic computing resources. Unlike existing linear-complexity encoder-based multimodal large language models (MLLMs), mmMamba eliminates the need for separate vision encoders and underperforming pre-trained RNN-based LLMs. Through our seeding strategy and three-stage progressive distillation recipe, mmMamba effectively transfers knowledge from quadratic-complexity decoder-only pre-trained MLLMs while preserving multimodal capabilities. Additionally, mmMamba introduces flexible hybrid architectures that strategically combine Transformer and Mamba layers, enabling customizable trade-offs between computational efficiency and model performance.

Distilled from the decoder-only HoVLE-2.6B, our pure Mamba-2-based mmMamba-linear achieves performance competitive with existing linear- and quadratic-complexity VLMs, including those with 2x larger parameter size such as EVE-7B. The hybrid variant, mmMamba-hybrid, further enhances performance across all benchmarks, approaching the capabilities of the teacher model HoVLE. In long-context scenarios with 103K tokens, mmMamba-linear demonstrates remarkable efficiency gains with a 20.6× speedup and 75.8% GPU memory reduction compared to HoVLE, while mmMamba-hybrid achieves a 13.5× speedup and 60.2% memory savings.

(Figure: seeding strategy and three-stage distillation pipeline of mmMamba.)

We provide example code to run mmMamba inference using the Transformers library; a loading sketch follows the dependency list below. The primary dependencies required for model inference are:

- torch==2.1.0
- torchvision==0.16.0
- torchaudio==2.1.0
- transformers==4.37.2
- peft==0.10.0
- triton==3.2.0
- mamba-ssm
- causal-conv1d
- flash-attn (select and download the .whl file matching your environment)
- omegaconf
- rich
- accelerate
- sentencepiece
- decord
- seaborn
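A minimal loading sketch. The checkpoint ships custom modeling code, so everything must be loaded with trust_remote_code=True; the commented chat call at the end is an assumption modeled on the HoVLE/InternVL-style interface of the teacher model and is not confirmed by this card, so check the repository README for the actual entry point:

```python
import torch
from transformers import AutoModel, AutoTokenizer

path = "hustvl/mmMamba-linear"

# trust_remote_code pulls in the custom mmMamba modeling files from the repo.
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()

# Hypothetical chat call, assumed from the teacher model's API;
# the real signature may differ.
# response = model.chat(tokenizer, pixel_values, "Describe this image.",
#                       generation_config=dict(max_new_tokens=64))
```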

license:mit
5
4

LKCell-L

license:apache-2.0
2
0

mmMamba-hybrid

license:mit
1
1

Vim-tiny

license:apache-2.0
0
21

Vim-small-midclstok

license:apache-2.0
0
12

Vim-tiny-midclstok

license:apache-2.0
0
7

OmniMamba

license:mit
0
7

PixelHacker

license:mit
0
7

vavae-imagenet256-f16d32-dinov2

license:mit
0
5

lightningdit-xl-imagenet256-800ep

license:mit
0
4

Vim-base-midclstok

license:apache-2.0
0
3

DiffusionDrive

license:apache-2.0
0
3

va-vae-imagenet256-experimental-variants

license:mit
0
3

DiffusionDriveV2

license:mit
0
2

vgt_internvl3_1_6B_sft

license:mit
0
1

vgt_qwen25vl_1_6B_sft

license:mit
0
1

ViG

license:apache-2.0
0
1

lightningdit-xl-imagenet256-64ep

license:mit
0
1

Turbo-VAED

license:mit
0
1