georgesung
llama2_7b_chat_uncensored (llama): 1,708 downloads, 398 likes
llama3_8b_chat_uncensored (llama): 448 downloads, 17 likes
Open Llama 7b Qlora Uncensored
Overview

Fine-tuned OpenLLaMA-7B with an uncensored/unfiltered Wizard-Vicuna conversation dataset (originally from ehartford/wizard_vicuna_70k_unfiltered). Used QLoRA for fine-tuning. Trained for one epoch on a 24 GB GPU (NVIDIA A10G) instance; training took ~18 hours.

Prompt style

The model was trained with the following prompt style:

Training code

Code used to train the model is available here.

Demo

For a Gradio chat application using this model, clone this HuggingFace Space and run it on top of a GPU instance. The basic T4 GPU instance will work.

Blog post

Since this was my first time fine-tuning an LLM, I also wrote an accompanying blog post about how I performed the training :)
llama · 216 downloads · 22 likes
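The prompt template itself did not survive in this extract. As an illustration only, here is a minimal prompt-building helper that assumes the "### HUMAN:" / "### RESPONSE:" single-turn template common to georgesung's uncensored chat fine-tunes; the template string is an assumption, not taken from this page, so verify it against the model card's prompt-style section before use.

```python
def format_prompt(user_message: str) -> str:
    """Build a single-turn chat prompt for the fine-tuned model.

    The "### HUMAN:" / "### RESPONSE:" layout is an ASSUMPTION based on
    similar uncensored Llama chat fine-tunes; the authoritative template
    is in the original model card.
    """
    return f"### HUMAN:\n{user_message}\n\n### RESPONSE:\n"


if __name__ == "__main__":
    # The model's completion would be generated after the trailing
    # "### RESPONSE:" marker.
    print(format_prompt("What is QLoRA?"))
```

Generation itself would then pass this string to the model (e.g. via a transformers text-generation pipeline on a GPU instance, as the Demo section suggests).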
llama2_7b_openorca_35k (llama): 3 downloads, 2 likes
flux.1-dev-abliterated-merged: 0 downloads, 34 likes