digitalpipelines
6 models
llama2_13b_chat_uncensored-GPTQ (llama)
llama2_13b_chat_uncensored (llama)
llama2_7b_chat_uncensored-GPTQ (llama)
llama2_7b_chat_uncensored (llama)

Overview

Fine-tuned from OpenLLaMA-7B on an uncensored/unfiltered Wizard-Vicuna conversation dataset, digitalpipelines/wizardvicuna70kuncensored. Fine-tuning used QLoRA, following the process outlined at https://georgesung.github.io/ai/qlora-ift/

- A GPTQ-quantized model is available at digitalpipelines/llama27bchatuncensored-GPTQ
- GGML 2-, 3-, 4-, 5-, 6- and 8-bit quantized models for CPU+GPU inference are available at digitalpipelines/llama27bchatuncensored-GGML

Prompt style

The model was trained with the following prompt style:
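To see why the 2- through 8-bit GGML variants matter for CPU+GPU inference, a rough back-of-the-envelope estimate of the weight memory at each bit width helps. The ~7B parameter count is an assumption from the model name, and real GGML files add overhead (quantization scales, vocabulary, metadata), so these figures are lower bounds, not measurements from the model card:

```python
# Approximate raw-weight memory footprint of a ~7B-parameter model
# at different quantization bit widths. GGML files carry extra
# overhead (scales, metadata), so real files are somewhat larger.

def weights_gib(n_params: float, bits: int) -> float:
    """GiB needed to store n_params weights at the given bit width."""
    return n_params * bits / 8 / 2**30

n = 7e9  # assumed parameter count; the listing does not state it exactly

for bits in (16, 8, 6, 5, 4, 3, 2):
    print(f"{bits:>2}-bit: {weights_gib(n, bits):5.1f} GiB")
```

At 4 bits the weights fit in roughly a quarter of the fp16 footprint, which is what makes CPU-resident or mixed CPU+GPU inference of a 7B model practical on commodity hardware.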
llama2_13b_chat_uncensored-GGML (llama)
llama2_7b_chat_uncensored-GGML