litert-community

39 models

gemma-4-E2B-it-litert-lm
license:apache-2.0 · 20,384 downloads · 58 likes

Gemma3-1B-IT
18,391 downloads · 459 likes

Qwen2.5-1.5B-Instruct
license:apache-2.0 · 10,106 downloads · 27 likes

gemma-4-E4B-it-litert-lm
license:apache-2.0 · 8,820 downloads · 39 likes

DeepSeek-R1-Distill-Qwen-1.5B
license:mit · 4,323 downloads · 25 likes

This model provides a few variants of deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B that are ready for deployment on Android using the LiteRT (formerly known as TFLite) stack, the MediaPipe LLM Inference API, and LiteRT-LM.

Disclaimer: the target deployment surfaces for the LiteRT models are Android, iOS, and Web, and the stack has been optimized for performance on those targets. Trying the system out in Colab is an easier way to get familiar with the LiteRT stack, with the caveat that performance (memory and latency) in Colab can be much worse than on a local device.

[Try it in Colab](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/DeepSeek-R1-Distill-Qwen-1.5B/blob/main/notebook.ipynb)

To try the model on a device, download and install the APK and follow the instructions in the app. To build the demo app from source, follow the instructions in the GitHub repository.

All benchmark stats below are from a Samsung S24 Ultra, with a KV cache size of 1280 and multiple prefill signatures enabled.

| Backend | Quantization | Context length | Prefill (tokens/sec) | Decode (tokens/sec) | Time-to-first-token (sec) | Model size (MB) | Peak RSS memory (MB) | GPU memory (MB) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CPU | dynamic_int8 | 4096 | 166.50 | 26.35 | 6.41 | 1831.43 | 2221 | N/A |
| GPU | dynamic_int8 | 4096 | 927.54 | 26.98 | 5.46 | 1831.43 | 2096 | 1659 |

- Model size: measured as the size of the .tflite flatbuffer (the serialization format for LiteRT models).
- Memory: an indicator of peak RAM usage.
- CPU inference is accelerated via the LiteRT XNNPACK delegate with 4 threads.
- Benchmarks assume the XNNPACK cache is enabled.
- Benchmarks are run with the cache enabled and initialized; during the first run, time to first token may differ.
- dynamic_int8: quantized model with int8 weights and float activations.
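
For a quick look at one of these converted models outside the Android app, a minimal sketch along the lines below loads a .tflite variant with the LiteRT Python interpreter and lists its signatures (the prefill and decode entry points referred to above). The model file name is a hypothetical placeholder for whichever variant you download from the repo, and the snippet deliberately stops short of the tokenizer and KV-cache generation loop that the Colab notebook implements.

```python
# Minimal sketch: load a downloaded LiteRT (.tflite) variant and list its
# signatures. The model file name below is a placeholder, not a file name
# taken from this repo.
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="deepseek_r1_distill_qwen_1_5b_q8.tflite")
interpreter.allocate_tensors()

# An LLM converted for LiteRT typically exposes several signatures, e.g. one
# or more prefill signatures plus a decode signature, each mapping named
# inputs (tokens, positions, KV-cache tensors) to named outputs.
for name, signature in interpreter.get_signature_list().items():
    print(name)
    print("  inputs: ", signature["inputs"])
    print("  outputs:", signature["outputs"])
```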

embeddinggemma-300m
3,567 downloads · 34 likes

Phi-4-mini-instruct
license:mit · 2,622 downloads · 9 likes

gemma-3-270m-it
2,046 downloads · 45 likes

Gecko-110m-en
license:apache-2.0 · 1,252 downloads · 10 likes

Qwen3.5-2B-LiteRT
license:apache-2.0 · 786 downloads · 17 likes

Qwen2.5-0.5B-Instruct
license:apache-2.0 · 691 downloads · 2 likes

Qwen3.5-0.8B-LiteRT
license:apache-2.0 · 668 downloads · 13 likes

TinyLlama-1.1B-Chat-v1.0
base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0 · 659 downloads · 1 like

SmolLM-135M-Instruct
license:apache-2.0 · 454 downloads · 8 likes

Qwen3.5-4B-LiteRT
license:apache-2.0 · 365 downloads · 9 likes

Qwen3.5-0.6B-LiteRT
license:apache-2.0 · 158 downloads · 2 likes

SmolVLM-256M-Instruct
license:apache-2.0 · 132 downloads · 9 likes

This model provides HuggingFaceTB/SmolVLM-256M-Instruct in TFLite format. You can use it with a custom C++ pipeline or run it with a Python pipeline (see the Colab example below). Please note that, at the moment, VLMs converted with AI Edge Torch are not supported by the MediaPipe LLM Inference API; this includes, for example, the Qwen-VL model that was used as a reference when writing the SmolVLM-256M-Instruct conversion scripts.

[Try it in Colab](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/SmolVLM-256M-Instruct/blob/main/smalvlmnotebook.ipynb)

To fine-tune SmolVLM on a specific task, you can follow the fine-tuning tutorial. Then you can convert the model to TFLite using the custom smalvlm scripts (see Readme.md). You can also check the official ai-edge-torch generative documentation. The model was converted with the following parameters:
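
For orientation, the generic ai-edge-torch conversion call that such conversion scripts build on looks roughly like the sketch below. The tiny stand-in module is hypothetical and only keeps the snippet self-contained; the actual SmolVLM conversion goes through the repo's smalvlm scripts and the ai-edge-torch generative tooling rather than this one-liner.

```python
# Sketch of the generic PyTorch -> LiteRT conversion flow with ai-edge-torch.
# TinyEncoder is a placeholder module, not a SmolVLM component.
import torch
import ai_edge_torch


class TinyEncoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(768, 256)

    def forward(self, x):
        return self.proj(x)


sample_inputs = (torch.randn(1, 16, 768),)

# convert() traces the module with the example inputs; export() writes the
# .tflite flatbuffer that LiteRT consumes.
edge_model = ai_edge_torch.convert(TinyEncoder().eval(), sample_inputs)
edge_model.export("tiny_encoder.tflite")

# Quick parity check: run the converted model directly from Python.
print(edge_model(*sample_inputs).shape)
```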

inception_v3
118 downloads · 3 likes

efficientnet_b4
105 downloads · 1 like

MediaPipe-Selfie-Segmentation
license:apache-2.0 · 95 downloads · 5 likes

efficientnet_b1
94 downloads · 0 likes

FastVLM-0.5B
91 downloads · 3 likes

convnext_base
86 downloads · 1 like

resnet34
62 downloads · 1 like

resnet18
58 downloads · 0 likes

Gemma2-2B-IT
52 downloads · 7 likes

vgg11
50 downloads · 1 like

resnet152
45 downloads · 1 like

efficientnet_b2
44 downloads · 0 likes

efficientnet_b3
40 downloads · 0 likes

efficientnet_b5
38 downloads · 1 like

MobileNet-v2
24 downloads · 0 likes

gemma3-1b-ft-text-to-sql
16 downloads · 8 likes

vgg19_bn
14 downloads · 0 likes

MedGemma-27B-IT
0 downloads · 15 likes

Gemma3-27B-IT
0 downloads · 8 likes

Gemma3-4B-IT
0 downloads · 6 likes

Gemma3-12B-IT
0 downloads · 5 likes

FunctionGemma_270M_Mobile_Actions
0 downloads · 1 like