olabs-ai

23 models

TFFT-20241101_213900-Llama-3.2-1B
llama · 77 downloads · 0 likes

qLeap_instruct_v02
Developed by olabs-ai. License: apache-2.0. Finetuned from unsloth/Llama-3.2-1B-bnb-4bit. This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
llama · 23 downloads · 0 likes
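Several cards in this list note the model was finetuned from unsloth/Llama-3.2-1B-bnb-4bit. As a minimal sketch (not from the cards themselves), one of these checkpoints could be queried with plain `transformers`; the model name and prompt below are illustrative, and running the `__main__` block downloads the weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_reply(model_name: str, prompt: str, max_new_tokens: int = 64) -> str:
    """Load a Hub checkpoint and return a greedy completion for the prompt."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Downloads the checkpoint on first run; slow without a GPU.
    print(generate_reply("olabs-ai/qLeap_instruct_v02", "What is Unsloth?"))
```

Because the bnb-4bit base ships a bitsandbytes quantization config, loading it also requires the `bitsandbytes` and `accelerate` packages.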

unsloth-Llama-3.2-1B-Instruct-bnb-4bit-GGUF
llama · 19 downloads · 0 likes

qLeap_v06_instruct
llama · 18 downloads · 0 likes

qLeap_base_v01
llama · 16 downloads · 0 likes

unsloth-cpt-hindi-v01
llama · 15 downloads · 0 likes

qLeap_v05_instruct
llama · 10 downloads · 0 likes

TFFT-20241101_221234-Llama-3.2-1B
llama · 8 downloads · 0 likes

qLeap_model_v0_8bit_Q8_1730963323
license: apache-2.0 · 7 downloads · 0 likes

qLeap_v04
Developed by olabs-ai. License: apache-2.0. Finetuned from unsloth/Llama-3.2-1B-bnb-4bit. This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
llama · 6 downloads · 0 likes

qLeap_v07_instruct
llama · 6 downloads · 0 likes

qLeap_instruct_v04
Developed by olabs-ai. License: apache-2.0. Finetuned from unsloth/Llama-3.2-1B-bnb-4bit. This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
llama · 6 downloads · 0 likes

TFFT-20241102_123621-Llama-3.2-1B-Instruct
llama · 6 downloads · 0 likes

qLeap_model_v0_16bit_GGUF_1730963323
license: apache-2.0 · 6 downloads · 0 likes

unsloth-Llama-3.2-1B-bnb-4bit
llama · 5 downloads · 0 likes

qLeap_v04_instruct
Developed by olabs-ai. License: apache-2.0. Finetuned from unsloth/Llama-3.2-1B-bnb-4bit. This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
llama · 4 downloads · 0 likes

qLeap_base_v02
llama · 4 downloads · 0 likes

qLeap_model_v0_q4_k_m_16bit
llama · 3 downloads · 0 likes

qLeap_model_v0_q5_k_m_16bit
llama · 2 downloads · 0 likes
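The q4_k_m and q5_k_m suffixes on the two entries above refer to llama.cpp K-quant formats. As a rough back-of-envelope sketch (the parameter count and bits-per-weight figures below are ballpark assumptions, not taken from this page), the on-disk size of such a quantized GGUF file can be estimated from parameters × bits per weight:

```python
def gguf_size_mb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in MB: parameters * bits per weight / 8 bits per byte."""
    return n_params * bits_per_weight / 8 / 1e6


# Llama 3.2 1B has roughly 1.24e9 parameters (assumed, not stated on this page).
n_params = 1.24e9

# Approximate effective bits per weight for common llama.cpp quants (assumed):
for name, bpw in [("Q4_K_M", 4.8), ("Q5_K_M", 5.7), ("Q8_0", 8.5)]:
    print(f"{name}: ~{gguf_size_mb(n_params, bpw):.0f} MB")
```

The estimate ignores metadata and the fact that some tensors (e.g. embeddings) are kept at higher precision, so real files run slightly larger.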

rohitx11
1 download · 0 likes

qLeap_instruct_v3
llama · 1 download · 0 likes

reflection_model

---
language: en
tags:
- text-generation
- causal-lm
- fine-tuning
- unsupervised
---

The `olabs-ai/reflection_model` is a fine-tuned language model based on Meta-Llama-3.1-8B-Instruct. It has been further fine-tuned using LoRA (Low-Rank Adaptation) for improved performance on specific tasks. This model is designed for text generation and can be used for various applications such as conversational agents and content creation.

- Base Model: Meta-Llama-3.1-8B-Instruct
- Fine-Tuning Method: LoRA
- Architecture: LlamaForCausalLM
- Number of Parameters: 8 Billion (Base Model)
- Training Data: [Details about the training data used for fine-tuning, if available]

To use this model, you need to have the `transformers`, `unsloth`, and `peft` libraries installed. You can load the model and tokenizer, apply the LoRA adapter, and generate text as follows:

```python
from transformers import TextStreamer
from unsloth import FastLanguageModel
from peft import PeftModel

# Load the base model; Unsloth returns both the model and its tokenizer
base_model_name = "olabs-ai/Meta-Llama-3.1-8B-Instruct"
model, tokenizer = FastLanguageModel.from_pretrained(model_name=base_model_name)

# Apply the LoRA adapter; the directory must contain adapter_config.json
# alongside the adapter weights
adapter_weights_path = "path_to_your_adapter_weights"
model = PeftModel.from_pretrained(model, adapter_weights_path)

# Set inference mode for LoRA
FastLanguageModel.for_inference(model)

# Prepare inputs
custom_prompt = "What is a famous tall tower in Paris?"
inputs = tokenizer([custom_prompt], return_tensors="pt").to("cuda")

# Stream tokens to stdout as they are generated
text_streamer = TextStreamer(tokenizer)

# Generate outputs
outputs = model.generate(**inputs, streamer=text_streamer, max_new_tokens=1000)
```

license: apache-2.0 · 0 downloads · 1 like

meta-llama-3.1-8b-o1
llama · 0 downloads · 1 like