tensopolis

14 models • 15 total models in database

virtuoso-lite-tensopolis-v1

> [!TIP]
> This model is a merge of arcee-ai/Virtuoso-Lite. Please refer to the base model for more information about license, prompt format, etc.

llama

mistral-small-r1-tensopolis

This model is a reasoning fine-tune of unsloth/mistral-small-24b-instruct-2501-unsloth-bnb-4bit, trained on 1xA100 for about 100 hours. Please refer to the base model and dataset for more information about license, prompt format, etc.

Base model: mistralai/Mistral-Small-24B-Instruct-2501

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

license:apache-2.0

qwen2.5-7b-tensopolis-v1

> [!TIP]
> This model is a merge of Qwen/Qwen2.5-7B-Instruct. Please refer to the base model for more information about license, prompt format, etc.

license:apache-2.0

falcon3-10b-tensopolis-v2

> [!TIP]
> This model is a merge of tiiuae/Falcon3-10B-Instruct. Please refer to the base model for more information about license, prompt format, etc.

llama

virtuoso-small-tensopolis-v2

> [!TIP]
> This model is a merge of arcee-ai/Virtuoso-Small. Please refer to the base model for more information about license, prompt format, etc.

license:apache-2.0

lamarckvergence-14b-tensopolis-v1

> [!TIP]
> This model is a merge of suayptalha/Lamarckvergence-14B. Please refer to the base model for more information about license, prompt format, etc.

license:apache-2.0

virtuoso-small-tensopolis-v1

> [!TIP]
> This model is a merge of arcee-ai/Virtuoso-Small. Please refer to the base model for more information about license, prompt format, etc.

license:apache-2.0

qwen2.5-14b-tensopolis-v1

> [!TIP]
> This model is a merge of Qwen/Qwen2.5-14B-Instruct. Please refer to the base model for more information about license, prompt format, etc.

license:apache-2.0

phi-4-tensopolis-v1

> [!TIP]
> This model is a merge of microsoft/phi-4. Please refer to the base model for more information about license, prompt format, etc.

llama

virtuoso-small-v2-tensopolis-v1

> [!TIP]
> This model is a merge of arcee-ai/Virtuoso-Small-v2. Please refer to the base model for more information about license, prompt format, etc.

license:apache-2.0

virtuoso-lite-tensopolis-v2

> [!TIP]
> This model is a merge of arcee-ai/Virtuoso-Lite. Please refer to the base model for more information about license, prompt format, etc.

llama

qwen2.5-7b-tensopolis-v2

> [!TIP]
> This model is a merge of Qwen/Qwen2.5-7B-Instruct. Please refer to the base model for more information about license, prompt format, etc.

license:apache-2.0

mistral-small-2501-tensopolis-v1

> [!TIP]
> This model is a merge of mistralai/Mistral-Small-24B-Instruct-2501. Please refer to the base model for more information about license, prompt format, etc.

license:apache-2.0

qwen2.5-3b-or1-tensopolis

This model is a reasoning fine-tune of unsloth/Qwen2.5-3B-Instruct, trained on 1xA100 for about 50 hours. Please refer to the base model and dataset for more information about license, prompt format, etc.

This Qwen model was trained 2x faster with Unsloth and Hugging Face's TRL library.
