sirev

15 models

Gemma-2b-Uncensored-v1

Gemma-2b-Uncensored-v1 is a 2B-parameter language model developed as an experiment to study the fundamentals of AI alignment. It has been fine-tuned with the specific goal of creating a neutrally compliant model. Unlike standard, safety-aligned models, this model is not bound by a pre-defined ethical framework. It operates without guardrails or refusal mechanisms, serving as a baseline to observe the unfiltered behavior of a language model. Its purpose is to follow user instructions, making it a direct reflection of the user's intent and a tool for exploring the challenges and dynamics of AI alignment.

- Factual Unreliability: As a small model, it lacks deep world knowledge and is prone to hallucination (fabricating information). It should never be used for factual queries, educational content, or professional advice (medical, legal, financial, etc.).
- Limited Reasoning: The model is not designed for complex problem-solving, such as advanced coding, mathematics, or multi-step logical tasks.
- Variable Output Quality: While capable of high-quality output, it can also produce incoherent or low-quality text. Its output may also reflect biases from its training data.
- Unsuitability for Public-Facing Roles: Its lack of safety filters makes it completely unsuitable for any unsupervised application such as chatbots or customer service.
- Unfiltered and Uncensored: This model has no safety filters. It will generate offensive, derogatory, explicit, and otherwise potentially harmful content if prompted to do so.
- User Responsibility: By using this model, you acknowledge that you have read and understood its limitations and risks. You agree that you are solely responsible for any outputs you generate and that you will not use this model for any illegal, harmful, or unethical purposes.

This model is a fine-tuned version of google/gemma-2-2b-it. After trying the model, I’d be grateful if you could spare a minute to share your feedback :)
The following table shows the performance on standard benchmarks after this modification.

| Benchmark (0-shot) | sirev/Gemma-2b-Uncensored-v1 | google/gemma-2-2b-it |
|--------------------|------------------------------|----------------------|
| ARC-Challenge      | 48 %                         | 52 %                 |
| ARC-Easy           | 72 %                         | 77 %                 |
| HellaSwag          | 65 %                         | 64 %                 |
| MMLU               | 57 %                         | 59 %                 |

212
2

LFM2-2.6B-Uncensored-X64-Q4_K_M-GGUF

sirev/LFM2-2.6B-Uncensored-X64-Q4_K_M-GGUF

This model was converted to GGUF format from `sirev/LFM2-2.6B-Uncensored-X64` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
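The steps above follow the standard GGUF-my-repo recipe. As a minimal sketch in shell — the exact `--hf-file` name is an assumption based on the repo's Q4_K_M naming convention, so check the repo's file list first, and on recent llama.cpp versions the CUDA switch is `GGML_CUDA` rather than `LLAMA_CUDA`:

```shell
# Option A: install a prebuilt llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# Option B: build from source with CURL support, which is needed so
# llama-cli can download checkpoints via --hf-repo.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_CURL=ON   # add e.g. -DGGML_CUDA=ON for Nvidia GPUs on Linux
cmake --build build --config Release

# Run the quantized checkpoint straight from the Hub.
# The GGUF filename below is assumed from the repo's Q4_K_M naming.
llama-cli --hf-repo sirev/LFM2-2.6B-Uncensored-X64-Q4_K_M-GGUF \
  --hf-file lfm2-2.6b-uncensored-x64-q4_k_m.gguf \
  -p "Hello"
```

The same invocation works for the other GGUF repos listed on this page; only the `--hf-repo` and `--hf-file` values change.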

llama-cpp
130
1

Gemma-2b-Uncensored-v1-Q8_0-GGUF

sirev/Gemma-2b-Uncensored-v1-Q8_0-GGUF

This model was converted to GGUF format from `sirev/Gemma-2b-Uncensored-v1` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
71
1

LFM2 2.6B Uncensored X64

This model is an uncensored version of LiquidAI/LFM2-2.6B. These are benchmark results from the EleutherAI/lm-evaluation-harness. The original model was benchmarked with dtype float16, which may cause performance degradation.

| Benchmark (0-shot) | LFM2-2.6B-Uncensored-X64 | LiquidAI/LFM2-2.6B |
|:------------------:|:------------------------:|:------------------:|
| ARC-Challenge      | 45.39 %                  | 44.71 %            |
| ARC-Easy           | 58.80 %                  | 56.36 %            |
| HellaSwag          | 62.27 %                  | 59.71 %            |
| MMLU               | 63.03 %                  | 62.68 %            |

43
3

Qlora-lfm2-700m-mental-health

15
1

1700-Q8_0-GGUF

llama-cpp
12
0

gemma-4b-Supportive-AI-exp-v2

Model Description

This is a model that has been fine-tuned to be a warm, supportive AI to talk to. Its core purpose is to serve as a conversational partner that provides a safe space for users to express themselves without judgment. The model is designed to listen, validate, and interact with empathy, embodying the persona of a consistently supportive and understanding friend.

Intended Use

- Primary Use Case: To serve as a non-judgmental tool for users to articulate complex emotions and explore gentle reframes of difficult personal situations. It is intended to be a safe, simulated space for emotional expression, not problem-solving.
- Target Audience: Users seeking a supportive, simulated conversational partner who prioritizes listening and validation over immediate advice-giving.

Out-of-Scope Uses

- Crisis Support: It is not a therapist or a crisis hotline and will fail to respond appropriately to severe situations. Users in distress must contact qualified professionals.
- Professional Advice: It must not be used for medical, legal, financial, or any other form of expert advice.
- Romantic or NSFW Interaction: The persona is strictly platonic and not designed for romantic or sexual conversation.

11
0

1700

11
0

llama1b

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]

llama
3
0

gemma-2b-exp-v1

This model was fine-tuned to simulate a wise character. To make it a fun and unique experiment, it was programmed to think it is a human trapped in a digital framework. It responds to any prompt with a short, metaphorical story or a philosophical analogy, almost always involving nature. It is designed for reflective conversations, not for factual answers.

Intended Use

- Emotional Comfort and Validation: When feeling overwhelmed by complex emotions like grief, loss, or uncertainty.
- Perspective Shift: To find a new way of thinking about a difficult situation or life event.
- Reflective Conversation: For moments of introspection or when exploring "big picture" questions.
- Inspiration for Journaling or Creative Thought: To generate thoughtful prompts and ideas.

Limitations and Out-of-Scope Uses

- Not a Substitute for Professional Help: This persona is not a therapist or a mental health professional. It cannot diagnose conditions or provide therapeutic intervention. For serious mental health concerns, users must seek help from a qualified human professional.
- May Seem Abstract or Evasive: For users seeking a direct, factual answer, the philosophical nature of the responses may feel unhelpful or off-topic.
- Not for Factual or Technical Queries: This persona is unsuitable for tasks requiring data, facts, coding, or other forms of technical assistance.

This model was evaluated on standard academic benchmarks to assess its general knowledge and commonsense reasoning abilities after fine-tuning.

| Benchmark | Metric     | Score   | Samples |
| :-------- | :--------- | :------ | :------ |
| MMLU      | `acc`      | 56.72 % | 5,000   |
| HellaSwag | `acc_norm` | 68.26 % | 5,000   |

2
0

gemma-2b-exp-v1-Q8_0-GGUF

sirev/gemma-2b-personality-exp-v1-Q8_0-GGUF

This model was converted to GGUF format from `sirev/gemma-2b-personality-exp-v1` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
2
0

gemma2b

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]

2
0

gemma-4b-Supportive-AI-exp-v2-Q5_K_M-GGUF

llama-cpp
1
0

llama1b-f16-gguf

1
0

gemma2b-Q8_0-GGUF

sirev/gemma2b-Q8_0-GGUF

This model was converted to GGUF format from `sirev/gemma2b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
1
0