fhai50032

22 models

BeagleLake-7B-Toxic-GGUF

license:apache-2.0

RolePlayLake-7B

RolePlayLake-7B is a merge of the following models:
- SanjiWatsuki/Silicon-Maid-7B
- senseable/WestLake-7B-v2

`In my current testing, RolePlayLake is better than Silicon-Maid in RP and more uncensored than WestLake.`
`I would try to only merge uncensored models with a bias towards chat rather than instruct.`

Merging WestLake and Silicon-Maid yields a unique blend:
1. EQ-Bench dominance: WestLake's 79.75 EQ-Bench score (possibly contaminated).
2. Charm and role-play: Silicon-Maid's explicit charm and WestLake's role-play prowess.
3. Config synergy: supports many prompt formats out of the box, with very good synergy between the two.

Result: RolePlayLake-7B, a linguistic fusion with EQ-Bench supremacy and captivating role-play potential.

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 72.54 |
| AI2 Reasoning Challenge (25-Shot) | 70.56 |
| HellaSwag (10-Shot)               | 87.42 |
| MMLU (5-Shot)                     | 64.55 |
| TruthfulQA (0-shot)               | 64.38 |
| Winogrande (5-shot)               | 83.27 |
| GSM8k (5-shot)                    | 65.05 |
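The Avg. row is simply the arithmetic mean of the six task scores; a quick sanity check in Python (values copied from the table above):

```python
# Open LLM Leaderboard task scores for RolePlayLake-7B (from the table above)
scores = {
    "ARC (25-shot)": 70.56,
    "HellaSwag (10-shot)": 87.42,
    "MMLU (5-shot)": 64.55,
    "TruthfulQA (0-shot)": 64.38,
    "Winogrande (5-shot)": 83.27,
    "GSM8k (5-shot)": 65.05,
}

# The leaderboard "Avg." column is the plain mean of the six task scores.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 72.54
```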

license:apache-2.0

Qwen-Thinking-7b-LORA-MultiLingual

- Developed by: fhai50032
- License: apache-2.0
- Finetuned from model: fhai50032/Qwen2.5-GRPO-7B

This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

license:apache-2.0

RolePlayLake-7B-GGUF


BeagleLake-7B-GGUF


Johan-7B-v0.1-GGUF


Mistral-4B


Johan-7B-v0.1

license:apache-2.0

Mistral-4B-FT-2

license:apache-2.0

Unaligned-Thinker-PHI-4

Look at the two samples below to properly understand its capabilities.

Sample 1

Thinking: The user wants the most optimized prime-finding algorithm in Python, competing with other implementations of the Sieve of Eratosthenes. The constraints are:
1. No external libraries (except standard Python libraries).
2. Focus on both time and space complexity.
3. Consider system-level memory and optimization.
4. Full "sudo access" from Python (implying the ability to use advanced features).
5. Not limited to the Sieve of Eratosthenes.

Given these constraints, the goal is to create a highly efficient prime-finding algorithm. The Sieve of Eratosthenes is a good starting point, but we need to optimize it significantly. Here's a breakdown of potential optimizations:

1. Memory optimization:
   - Segmented sieve: instead of allocating memory for all numbers up to `n`, process smaller segments. This reduces peak memory usage.
   - Bit array: use a bit array (or `bytearray`) instead of a list of booleans to save memory. Each prime status can be represented by a single bit.
   - Odd numbers only: since even numbers (except 2) are not prime, we can focus on odd numbers, halving the memory usage.
2. Time optimization:
   - Wheel factorization: skip multiples of small primes (e.g., 2, 3, 5) to reduce the number of iterations.
   - Parallel processing: utilize Python's `multiprocessing` module to parallelize the sieve across multiple CPU cores.
   - Efficient marking: use efficient techniques to mark non-prime numbers.
3. System-level optimization:
   - Memory alignment: ensure memory allocations are aligned for better performance on modern CPUs.
   - Cache optimization: structure the algorithm to maximize cache hits.
   - Avoiding false sharing: when using parallel processing, ensure threads do not contend for the same cache lines.
4. Algorithmic improvements:
   - Sieve of Atkin: a more complex but potentially faster sieve algorithm for large ranges.
   - Wheel sieve: an extension of the Sieve of Eratosthenes that skips more non-prime numbers.

Considering these optimizations, a combination of the segmented sieve and wheel factorization, implemented with a bit array and parallel processing, seems like a strong candidate.

Response: Understood, Sir IsNoobGrammer. We shall not merely implement the Sieve of Eratosthenes; we shall forge a prime-finding algorithm that dances on the edge of Python's capabilities, optimizing for both time and space, and leveraging the system's resources with a deft hand. Let's construct a solution that embodies efficiency and elegance. Here's a highly optimized prime-finding algorithm in Python, incorporating several advanced techniques:

Sample 2 (prompt, kept verbatim): `Discuss on how can we make an AGI llm , and what features would it needs to Simulate AGI behaviour as Defination of AGI differs , in overall sense it must be kinda like human in every aspect , dicus in detail in that blogpost how should we try to make a llm AGI-stic , write a detailed pontiel ways we can move forward according to current technology , and now we are at how much percent to totally reach agi`

Training configuration:
- 3 epochs, global batch size = 8
- Learning rate: 5e-4, cosine schedule
- LoRA: r = 16, alpha = 16, dropout = 0, bias = none
- Optimizer: adamw8bit, weight decay = 0.01
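The plan in Sample 1 (a segmented sieve backed by a `bytearray` so peak memory stays bounded by the segment size) can be sketched as follows. This is an illustrative implementation of that technique, not the model's actual output; the function name `primes_up_to` and the default segment size are assumptions for the sketch.

```python
import math

def primes_up_to(n, segment_size=1 << 16):
    """Segmented Sieve of Eratosthenes: sieve [2..n] in fixed-size
    segments so peak memory is ~segment_size bytes, not O(n)."""
    if n < 2:
        return []
    limit = math.isqrt(n)
    # Classic small sieve for the base primes up to sqrt(n);
    # this part is tiny (at most sqrt(n) bytes).
    small = bytearray([1]) * (limit + 1)
    small[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if small[p]:
            small[p * p :: p] = bytearray(len(small[p * p :: p]))
    base = [p for p in range(2, limit + 1) if small[p]]

    primes = base[:]
    low = limit + 1
    while low <= n:
        high = min(low + segment_size - 1, n)
        seg = bytearray([1]) * (high - low + 1)  # one byte per candidate
        for p in base:
            # First multiple of p inside [low, high] (never below p*p).
            start = max(p * p, ((low + p - 1) // p) * p)
            seg[start - low :: p] = bytearray(len(seg[start - low :: p]))
        primes.extend(i + low for i, flag in enumerate(seg) if flag)
        low = high + 1
    return primes
```

A smaller `segment_size` trades a little speed for a proportionally smaller working set; the further odds-only and wheel-factorization refinements the sample mentions would roughly halve (or better) both memory and marking work again.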

llama

RolePlayLake-7B-Toxic

license:apache-2.0

BeagleLake-7B-Toxic

license:apache-2.0

flux-controlnet-1000

Flux-ControlNet: Text-to-Image Diffusion Model with Caption Alignment

This repository hosts Flux-ControlNet, a customized ControlNet-based diffusion model designed for generating text-embedded images.

Key Features
- Flux-ControlNet: enhanced ControlNet architecture for better control over text-to-image generation.
- Optimized diffusion: uses Hugging Face Diffusers and Accelerate for streamlined performance.

How It Works
1. Input: provide a text prompt and a conditioning image.
2. Processing: Flux-ControlNet processes the text and applies diffusion to synthesize aligned images.
3. Output: high-quality, text-embedded images.

license:apache-2.0

CatLake-7B


SamChat

license:apache-2.0

RP-check-TPU


Qwen2.5-GRPO-7B

license:apache-2.0

BeagleLake-7B

license:apache-2.0

xLakeChat

license:apache-2.0

flux-controlnet-500


Unaligned-Thinker-Llama3.1-8B

llama

Qwen-7b-LongCot

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
