TroyDoesAI

83 models

Llama-3.1-8B-Instruct · llama · 1,764 · 3

BlackSheep AFM 4.5B Q8 0 GGUF · 258 · 3

MermaidMistral · license:apache-2.0 · 98 · 9

BlackSheep-Llama3.2-3B · llama · 94 · 17

BlackSheep-24B · license:cc-by-nc-2.0 · 61 · 44

BlackSheep-24B-Q6_K · 52 · 0

Llama-3.1-13B-Instruct · llama · 42 · 0

RAG-Qwen2.5-7B · license:apache-2.0 · 33 · 6

BlackSheep-Llama3.2-3B-Context_Obedient-q4_k_m · llama · 32 · 0

gpt-oss-4B · 27 · 1

BlackSheep-X-Dolphin · llama · 21 · 8

BlackSheep-16k-f16-RAM-14.5GB-gguf · license:apache-2.0 · 17 · 1

Codestral-22B-RAG-Q8-gguf · 15 · 10

BlackSheep-Llama3.2-5B-Q4_K_M · 14 · 4

MermaidStable3B · license:cc-by-nc-4.0 · 9 · 7

BlackSheep-8k-f16-RAM-11GB-gguf · license:apache-2.0 · 9 · 0

Qwen3-MoE-3B · 8 · 1

BlackSheep-32k-f16-gguf · license:apache-2.0 · 8 · 0

BlackSheep-4k-f16-RAM-9GB-gguf · license:apache-2.0 · 8 · 0

Phi 3 Context Obedient RAG

Overview

This model is meant to enhance adherence to provided context (e.g., for RAG applications) and to reduce hallucinations, inspired by the airoboros context-obedient question-answer format. The format is a bit verbose and annoying, but after much trial and error, these explicit delimiters help the model understand where to find the responses and how to associate specific sources with them:

- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - insert whatever text you want for the input block, as many paragraphs as can fit in the context
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of the instruction set

The dataset includes many examples in which the response cites source details when the question asks for a source, citation, or reference.

Why do this? The R in RAG seems to be the weakest link in the chain. Retrieval accuracy, which depends on many factors including overall dataset size, can be quite low. Accuracy increases when more documents are retrieved, but then you have the problem of actually using the retrieved documents in prompts. If you use one prompt per document (or document chunk), you know exactly which document the answer came from, so there's no issue. If, however, you include multiple chunks in a single prompt, it's useful to cite the specific chunk(s) used to generate the response, rather than naively referencing all of the chunks included in the prompt.

Here's a trivial but important example to prove the point:
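The example block itself did not survive extraction; the following is a minimal reconstruction consistent with the delimiters described above, with hypothetical URLs and passage text chosen to match the cat question discussed below:

```text
BEGININPUT
BEGINCONTEXT
url: https://example.com/gardening
ENDCONTEXT
Tomatoes grow best in full sun with consistent watering and well-drained soil.
ENDINPUT
BEGININPUT
BEGINCONTEXT
url: https://example.com/pets
ENDCONTEXT
My cat Whiskers has bright orange fur and green eyes.
ENDINPUT
BEGININSTRUCTION
What color is the cat? Cite your source.
ENDINSTRUCTION
```

Note that each input block carries its own metadata (here, a URL), which is what allows the model to attribute its answer to a specific source.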
If the question being asked is `What color is the cat?`, I would only expect the 2nd document to be referenced in the response, as the other link is irrelevant.

license:cc-by-sa-4.0 · 7 · 31

UncensoredLM · license:cc-by-nd-4.0 · 7 · 2

Mermaid-Dolphin-MoE-2x7b_Story_Flow · license:mit · 7 · 0

Unrestricted-Knowledge-Will-Not-Refuse-15B · 7 · 0

Agent-Flow-Phone_Demo_3GB_RAM · license:apache-2.0 · 6 · 5

DigitalSoul-BlackSheep · 6 · 2

BlackSheep-4k-Q6_K-RAM-5GB-gguf · license:apache-2.0 · 5 · 1

Mini-Moo · 4 · 1

TinyLlama-RAG · llama · 4 · 0

BlackSheep-8k-Q6_K-RAM-7GB-gguf · license:apache-2.0 · 4 · 0

BlackSheep-Llama3.2-3B-Context_Obedient · llama · 3 · 3

Moo · 3 · 2

BlackSheep-MermaidMistral-22B · 3 · 1

BlackSheep-32k-Q6_K-gguf · license:apache-2.0 · 3 · 0

BlackSheep-3.8B · 3 · 0

Mermaid-Llama-6.7B-RAG · llama · 2 · 23

Mermaid-Llama-3-8B · llama · 2 · 11

CreativeWriter-Personality-12B · license:apache-2.0 · 2 · 2

Mermaid-Llama-6.7B-RAG-Code-Instruct · llama · 2 · 1

MermaidMixtral-2x6.5b · license:cc-by-4.0 · 2 · 0

Tiny-RAG-gguf · license:cc-by-nc-nd-4.0 · 2 · 0

BlackSheep-16k-Q6_K-RAM-10.3GB-gguf · license:apache-2.0 · 2 · 0

AgentFlow-3B · llama · 2 · 0

Mermaid-22B · 2 · 0

BlackSheep-27.7B · 2 · 0

BlackSheep-Qwen-14B

This little DigitalSoul has all the guardrails removed, but it is no longer overly willing to push the limits unless you really ask for it. A new continuous-training technique, combined with post-training ablation to reduce toxicity, has created BlackSheep's DigitalSoul without the wild, untamed, or rude behavior that was once associated with its younger self. Use the Alpaca format, and give me some feedback on its responses.
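For reference, the standard Alpaca prompt template (from the original Stanford Alpaca project) looks like this, where `{instruction}` is a placeholder for your request:

```text
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```

When an additional input passage accompanies the instruction, the template instead uses "Below is an instruction that describes a task, paired with an input that provides further context." and inserts an `### Input:` section between the instruction and the response.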

license:cc-by-nd-4.0 · 1 · 4

BlackSheep-Coder · 1 · 3

Codestral-21B-Pruned · llama · 1 · 2

MermaidSolar · llama · 1 · 1

Mermaid_11.5B · llama · 1 · 1

Qwen3-15B-A2B-Base · license:apache-2.0 · 1 · 0

Mermaid-Solar · llama · 1 · 0

Phi-3-Context-Obedient-RAG-7B · 1 · 0

TroyDoesAGI · license:cc-by-nd-4.0 · 1 · 0

Mini_Llama-3B-Base · llama · 1 · 0

Mini-Llama · llama · 1 · 0

Merge1 · llama · 1 · 0

BlackSheep-Writer · license:apache-2.0 · 1 · 0

BlackSheep · 1 · 0

BlackSheep-RP-3xMoE · 1 · 0

Context-Obedient-Tri-MoE · 1 · 0

Mermaid-3B · llama · 1 · 0

BlackSheep-5B · llama · 1 · 0

RAG-RP-Grokked-4B · llama · 1 · 0

RAG-RP-Journal-Grokked-4B · llama · 1 · 0

DigitalSoul-4B · llama · 0 · 6

BlackSheep-4B · llama · 0 · 4

MermaidLlama · llama · 0 · 3

MermaidMistralDPO · license:cc-by-4.0 · 0 · 2

Mermaid-Contextual-Obedient-RAG-Phi-3-medium-128k-instruct-18B · license:cc-by-nd-4.0 · 0 · 2

Codestral-RAG-19B-Pruned · 0 · 2

MermaidSolar_LASER · llama · 0 · 1

Mermaid-Dolphin-Mixtral-2x7b · license:cc-by-sa-4.0 · 0 · 1

MermaidMoE-19B · license:cc-by-4.0 · 0 · 1

Mermaid-Llama-3-7B-Pruned · llama · 0 · 1

Mermaid-Llama-3-5B-Pruned · llama · 0 · 1

Mermaid-Llama-3-4B-Pruned · llama · 0 · 1

Contextual-Llama3-8B-RAG · llama · 0 · 1

Contextual-Obedient-MoE-3x8B-Llama3-RAG · license:cc-by-4.0 · 0 · 1

CodeStral-RAG-Lora · license:cc-by-nd-4.0 · 0 · 1

JailbrokeAI · license:apache-2.0 · 0 · 1

BlackSheep-22B · 0 · 1

BlackSheep-1B · llama · 0 · 1

Persona 5B · llama · 0 · 1